CloudSwitch's Founder Discusses What's In His Cloud Stack

by Sam Dean - Jun. 28, 2011

In conjunction with the rise of cloud computing, OStatic has been running an interview series focused on the components in cloud software stacks, and how they create unique advantages. Recently, we discussed Standing Cloud's approach, and we discussed the cloud stack at myClin, which offers cloud resources that make it easier for physicians and their staffs to participate in clinical trials. Also recently, Hanspeter Christ, Deputy Process Manager of the Federal Spatial Data Infrastructure (FSDI) for Swisstopo--Switzerland's federal office of topography--caught up with us to discuss Swisstopo's cloud stack. The series began with our conversation with PHP Fog founder Lucas Carlson, who provided many insights into a smart cloud stack.

In this latest interview, we caught up with John Considine, Founder and CTO of CloudSwitch. He discusses his company's approach to allowing enterprises to run applications in the cloud without the need to rearchitect them.

Please tell us about CloudSwitch and what it does.
 
Let me start with our general description: CloudSwitch delivers the enterprise gateway to the cloud. CloudSwitch's innovative software appliance enables enterprises to run their applications in the right cloud computing environment securely, simply and without changes.

With CloudSwitch, applications remain tightly integrated with enterprise data center tools and policies, and can be moved easily between different cloud environments. We sell a software appliance (a virtual machine) that installs in the customer's data center and enables them to integrate cloud resources (from providers like Amazon and Terremark) with their internal IT systems.

Your Cloud Isolation Technology creates an overlay network that encrypts and encapsulates network traffic. Can you explain how this works in practice?
 
First, while an important ability enabled by our Cloud Isolation Technology is powerful networking, this technology is about a lot more than just networking.  At the most fundamental level, this technology is about protecting the customer's servers in the cloud and mapping cloud resources to those servers such that the customer doesn’t have to adapt to the cloud; it encompasses everything from encryption of storage and networking to providing cross-hypervisor migration.

Since our focus is on making the cloud an integral part of the customers’ existing IT environment, and networking is one key aspect of this integration, we have built a really cool architecture and design to make this seamless.  The customer installs our software appliance into their infrastructure and can optionally connect our system to their networks at multiple “insertion points”.  The CloudSwitch software will then extend these insertion points to the cloud at the Ethernet level (layer-2).

What is actually happening is that our software is tying into the local network and then tunneling to our network components in the cloud.  This, combined with the overall Cloud Isolation Technology system, allows for a transparent, efficient, and secure extension of the network to arbitrary endpoints.  I want to stress that this whole solution works against multiple clouds, regions, and locations – that is, a customer can create whatever network topology they want and then connect any or all of it at layer-2 to their data center.  We’ll support layer-2 operation even in clouds that don’t natively support it (e.g. Amazon).
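The tunneling idea described above can be sketched in miniature: a raw Ethernet frame is wrapped in a small tunnel header and integrity-protected before crossing to the cloud endpoint. This is a hypothetical illustration, not CloudSwitch's actual wire format; a real overlay would also encrypt the payload (e.g. with AES), which the interview mentions but does not detail, so HMAC integrity alone stands in for the full protection here.

```python
import hashlib
import hmac
import os
import struct

# Shared secret between the data-center appliance and its cloud-side peer
# (illustrative; real systems negotiate keys rather than hardcode them).
TUNNEL_KEY = os.urandom(32)

def encapsulate(network_id: int, seq: int, frame: bytes) -> bytes:
    """Wrap an Ethernet frame in a tunnel header (network ID + sequence
    number) and append an HMAC tag so tampering in transit is detectable."""
    header = struct.pack("!IQ", network_id, seq)          # 12-byte header
    tag = hmac.new(TUNNEL_KEY, header + frame, hashlib.sha256).digest()
    return header + tag + frame

def decapsulate(packet: bytes):
    """Verify the tag and recover (network_id, seq, frame)."""
    header, tag, frame = packet[:12], packet[12:44], packet[44:]
    expected = hmac.new(TUNNEL_KEY, header + frame, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tunnel integrity check failed")
    network_id, seq = struct.unpack("!IQ", header)
    return network_id, seq, frame

# A toy broadcast frame: dst MAC, src MAC, EtherType, payload.
eth_frame = bytes.fromhex("ffffffffffff") + os.urandom(6) + b"\x08\x00" + b"payload"
pkt = encapsulate(7, 1, eth_frame)
nid, seq, out = decapsulate(pkt)
assert (nid, seq, out) == (7, 1, eth_frame)
```

Because the encapsulation carries opaque layer-2 frames, the scheme works even against clouds that expose no native layer-2 networking, which is the property the interview highlights.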

For the customers, it’s quite easy to build complex networking topologies in the cloud to either match their existing multi-tier architectures, or to create something new.  In the CloudSwitch product, they can create these networks by simply giving them a name, IP range, and netmask.  The only other step in this process is to (optionally) connect one of these networks to the data center.  Again, a very simple process where the operator selects the network insertion point and connects it to the named network.  CloudSwitch will take it from there and create and connect the network topologies.  Now, when they deploy servers into the cloud, they can connect them to these networks however they want (including multiple networks per server).
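A minimal model of that network-creation step might look like the following. The class and field names are invented for illustration (the interview describes the inputs as a name, IP range, and netmask, but not the product's actual API); note how one server can attach to multiple networks, as described above.

```python
import ipaddress

class CloudNetwork:
    """Hypothetical stand-in for a named cloud network defined by a CIDR
    range (name + IP range + netmask, per the description above)."""

    def __init__(self, name: str, cidr: str):
        self.name = name
        self.subnet = ipaddress.ip_network(cidr)
        self.attached = []  # (server, address) pairs

    def attach(self, server: str, ip: str):
        """Attach a server at a fixed address, validating it fits the range."""
        addr = ipaddress.ip_address(ip)
        if addr not in self.subnet:
            raise ValueError(f"{ip} is not inside {self.subnet}")
        self.attached.append((server, addr))

# Two tiers of a classic multi-tier topology.
web = CloudNetwork("web-tier", "10.1.0.0/24")
db = CloudNetwork("db-tier", "10.2.0.0/24")

# One server connected to both networks ("multiple networks per server").
web.attach("app01", "10.1.0.5")
db.attach("app01", "10.2.0.5")
```

The optional layer-2 connection back to the data center would then be a separate step against the named network, per the "insertion point" workflow described above.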

Can users of CloudSwitch make use of their existing application management tools, and how is it that they don't have to rearchitect existing applications?
 
This is at the heart of what we do, and what we think is important in a product that extends the data center infrastructure to the cloud. Customers can choose when and where they want to invest in developing new applications and architectures in the cloud.  To this end, we allow customers to use their existing development processes, management tools, and applications as they extend workloads into the cloud.

We provide the infrastructure to map the cloud resources into exactly what the operating systems and applications are expecting to see.  This means that the customer can use their own software, gold images, build processes, and management tools on resources that are deployed to the cloud without having to change anything.  Since we integrate the cloud resources with layer-2 networking, common management tools and processes will work against the virtual servers in the cloud just as they do against the servers within the data center.

Further, since we provide this mapping layer, the customers can run their specific version of the operating system (specific Linux kernel versions and Windows operating systems and patch levels) which eases the burden in managing their server and application lifecycle.

Our goal has been to allow the customer to exactly control the infrastructure that they deploy into the cloud.  We allow them to control the specific networking configurations, the specific virtual hardware in their machines (adapter types, MAC addresses, controllers, bridges, etc.), the hypervisor “tools” they want to use (VMware integration tools, device drivers, etc.), what OS or kernel they want to use – and all of this is independent of what the cloud providers “provide.”
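As a concrete (and entirely hypothetical) illustration of the low-level controls just listed, a deployment description along these lines lets the customer pin adapter types, MAC addresses, controllers, and the OS/kernel independently of the target cloud. The schema below is invented for this article; CloudSwitch's actual format is not public in the interview.

```python
# Hypothetical server specification: every field name here is illustrative,
# chosen to mirror the controls enumerated above (adapters, MACs,
# controllers, hypervisor tools, kernel), not CloudSwitch's real schema.
server_spec = {
    "os": "RHEL 5.6, kernel 2.6.18-238",          # customer-chosen kernel
    "nics": [
        {"network": "web-tier", "adapter": "e1000",   "mac": "00:50:56:aa:bb:01"},
        {"network": "db-tier",  "adapter": "vmxnet3", "mac": "00:50:56:aa:bb:02"},
    ],
    "disk_controller": "LSI Logic",
    "hypervisor_tools": "vmware-tools 8.3",        # VMware integration tools
    "target_cloud": "amazon-us-east",              # independent of the provider's defaults
}

# The same spec could target a different cloud without touching the
# virtual-hardware details -- the point of the mapping layer.
assert len(server_spec["nics"]) == 2
```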

Our conviction is that if the user can control the low level details, then they are not required to change their application architecture.  Note that I said “required to change” – this is an important distinction.  We want the customers to be able to choose what, where, and when they are going to change to new architectures.

Our experience over the last few years has shown that even when building new applications, many of the existing architecture problems and integration issues remain.  With CloudSwitch, you can build new applications that take advantage of existing infrastructure (think domain controllers, centralized databases, and internal applications) in a simple and highly secure infrastructure.

You claim an ecumenical approach to cloud platforms that customers use, claiming no lock-in. Can you explain this?

Sure, it’s quite simple – CloudSwitch doesn’t require you to change how you develop or manage your applications when integrating with the cloud, so you don’t get locked into building applications that are specifically tailored to a cloud provider's infrastructure (like specific networking, templates, kernel restrictions, or storage constructs).

A lot of people think that this is about lock-in to the API, but really the danger of cloud lock-in is much more subtle: you start adapting the way you develop your applications, communications, and management to the specifics of the provider.  In effect, your development team is doing whatever it has to do to adjust to the foreign environment presented by the target cloud.  Then, before you know it, your applications will only run in that environment, and how you deploy and manage updates is locked in to the environment as well.

When you decide to migrate that application to another cloud, or back to your data center, you find that you have to either re-create the cloud provider’s unique architectures in your own data center, or rework your applications again depending on the target.

Now I often hear the argument that customers don’t really want to move applications around between the clouds or between the cloud and the datacenter, and for a specific application, this may be true. It’s unlikely a customer would say, “I want my application to be in Amazon today, and tomorrow I’ll move it to Terremark, and the day after back to Amazon,” but let’s look at it a little differently.  First there is the application life cycle – from creation, unit test, integration test, pre-production, production, and end-of-life maintenance.

It’s easy to find a number of scenarios where you may want to use a different set of resources to support these various stages like: development and unit testing in an Amazon cloud, then pre-production and production in your own data center, and finally end of life back out in the clouds.  If the application has been (either intentionally or accidentally) tied to the specifics of a cloud provider, then this life cycle is severely limited unless you do a lot of additional work.

The other way to look at multiple clouds is that not all applications are the same in terms of criticality to the organization, performance requirements, security levels (compliance) – so you may want to blend the clouds. Some applications will go well into “commodity” clouds, some want to be placed in “enterprise” clouds, and some need to stay in your own data center.

Our vision is that the customer can choose the right cloud for each application.  This is why it’s so important to be isolated from the specifics of the cloud – your development and management processes and tools represent a significant investment, and you should be able to apply them across all of the clouds you target.
 

What’s missing from most deployments in the cloud?

Real enterprise controls. This spans everything from full control of the networking, to tight integration with data center resources, to access control, to real security.  Most of these are pretty obvious, but one of the things that is happening in the cloud today is that companies are building up a management debt that I think most people are unaware of.

Specifically, there is a dependency on the cloud provider's images and infrastructure that can potentially impact all of your deployed servers; added to this is the dependency on 3rd-party scripts, drivers, agents, etc. that have been installed to make the cloud work (or at least be easier).  When these dependencies are added to regular operating system updates and patches, plus the dependencies of the hypervisor integration tools, you have a big chain.

This means that the number of updates and patches you have to apply to your servers in the cloud is potentially huge, and perhaps worse, you don’t have control of when to deploy them.  If the cloud provider decides to make an update, you as a user have to take it;  if there is an interaction between the 3rd party tools and the OS patch, you have to take the update as well.

If you multiply this work by all of the systems deployed into the cloud, it can become a real management nightmare.  This is why we are so adamant about not adding software to the customer's servers as part of the cloud deployment; we don’t want to trigger this massive dependency tree.  As an additional benefit, our Cloud Isolation Technology means that we can adjust for cloud provider changes at the infrastructure level, applying the update once for the entire cloud deployment instead of having to do it server by server – much easier to manage.
 

How does CloudSwitch approach security in the cloud?

Since we were founded on the idea of hybrid cloud computing (before it was coined as a term), we knew that security was really important.  If you extend the enterprise to the cloud, you have to make sure that you have a very good security model, or everything falls apart.

This is why we developed what we call “provable security.”  Of course we rely on strong encryption for both the networks and the storage in the cloud, but encryption by itself is not good enough.  The primary issue is that you have to separate the encryption keys from the data that is encrypted and this is hard to do in a remote system – primarily because you have to boot the system to deliver the keys, but you can’t boot an encrypted system without the keys.

The easy solution is to pass the keys to the cloud provider, but now they have both the keys and the data, and you are at risk of them “exposing” your data.  There are a lot of companies out there with good technologies for protecting your data volumes, but we believe that to have “provable security” you have to encrypt the boot volumes and everything you do in the cloud, because a breach there can put your internal assets at risk.
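The key-separation constraint described above can be sketched as a tiny control-flow model: keys live only in the enterprise data center, the cloud stores only ciphertext (including the boot volume), and a key is released at boot solely to a trusted requester. Everything here is illustrative, not CloudSwitch's actual design, and the SHA-256 counter-mode keystream is a placeholder for real authenticated encryption, not production crypto.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 counter-mode keystream.
    A stand-in for AES; do not use this construction in real systems."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[offset:offset + 32], pad))
    return bytes(out)

class EnterpriseKeyServer:
    """Lives in the customer's data center; the cloud never stores keys."""

    def __init__(self):
        self._keys = {}

    def create_key(self, volume_id: str) -> bytes:
        self._keys[volume_id] = os.urandom(32)
        return self._keys[volume_id]

    def release_key(self, volume_id: str, requester_trusted: bool) -> bytes:
        # Released only over an authenticated boot-time channel.
        if not requester_trusted:
            raise PermissionError("untrusted boot request")
        return self._keys[volume_id]

ks = EnterpriseKeyServer()
key = ks.create_key("boot-vol-1")
ciphertext = keystream_xor(key, b"kernel+initrd bytes")   # only this goes to the cloud
# At boot, the appliance authenticates, fetches the key, and decrypts:
plaintext = keystream_xor(ks.release_key("boot-vol-1", True), ciphertext)
assert plaintext == b"kernel+initrd bytes"
```

The point of the model is the chicken-and-egg problem the interview names: the encrypted system cannot boot without the key, so the key must arrive from a party other than the cloud provider holding the ciphertext.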

Further, our solution is stronger than all of the others because we implement it within the infrastructure such that even a user with administrative access to the server in the cloud cannot disable, misconfigure, or otherwise alter the security.  This is one aspect of what makes it provable – administrators of the system cannot change the security levels, it is enforced by the CloudSwitch isolation layer. 


What are sample costs for a deployment?

Here is the model we use; first, the customer licenses our software based on an annual subscription that scales with the number of servers under management.  This makes it easy to get started in the cloud without making a big upfront commitment (just like the cloud model).

Second, the customer pays for their cloud usage directly to the cloud provider.  What we found early on is that our customers wanted to have a direct relationship with the cloud provider, and this is especially true with enterprise clouds like Terremark, so the product is designed to accept the customer's credentials for the cloud provider such that we can control the resources, and the cloud provider will bill the customer directly.
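The two-part model described above can be expressed as a back-of-the-envelope estimate: a per-server annual subscription plus provider-billed usage. The interview gives no actual prices, so every figure below is made up purely for illustration.

```python
def estimate_annual_cost(servers: int, sub_per_server: float,
                         hours: float, rate_per_hour: float) -> float:
    """Illustrative yearly total: CloudSwitch subscription scaling with
    servers under management, plus cloud usage billed by the provider.
    All parameter values are hypothetical; the article states none."""
    subscription = servers * sub_per_server        # paid to CloudSwitch
    cloud_usage = servers * hours * rate_per_hour  # billed directly by the provider
    return subscription + cloud_usage

# Example: 10 servers, a made-up $1,200/server/year subscription,
# running year-round at a made-up $0.10/hour instance rate.
total = estimate_annual_cost(servers=10, sub_per_server=1200.0,
                             hours=8760, rate_per_hour=0.10)
```

The subscription line scales with servers under management while the usage line scales with consumption, which is why the model avoids a large upfront commitment, as noted above.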


