From the Founder & VP Products at CloudSwitch

Ellen Rubin


Moving to the Cloud: Key Considerations for Cloud Storage

Cloud storage can be simultaneously simple and complex – just like cloud computing in general

This post is part of a series examining the issues involved when moving applications between internal data centers and public clouds.

The true challenges of storage and data management in the cloud stem from the diverse and often unfamiliar processes and infrastructures offered by cloud providers: new provisioning methods, storage properties, data population and transfer, and systems for data management (snapshots, clones, replication, backup). Cloud providers define the relationship between servers and storage and often impose constraints on everything from allocation size limits to the ways in which storage is managed. These are just some of the things you’ll want to consider as you start to think about integrating cloud computing into your existing IT environments.

I’d like to focus in detail on the complexity and variability of cloud provisioning and storage properties. There are different models for storage in existing compute clouds, with the most common being an “inclusive” storage model. In this model, each server comes with a certain amount of storage attached to it. The storage is a fixed capacity that is provisioned when you create the server from the pre-existing templates.

For example, Rackspace gives you disk space proportional to the memory (RAM) size you select. The smallest combination is 256MB of memory with 10GB of disk, and with each doubling of memory the disk space also doubles, up to roughly 16GB of memory with 640GB of disk. With the new Terremark vCloud Express, you get a system disk that is predefined for each “template” server you select: a standard Linux distribution comes with a 10GB system disk, Windows 2003 with a 20GB disk, and Windows 2008 with a 40GB disk. Terremark’s vCloud Express allows you to add storage as new disks, while others (like Rackspace) allow you to “resize” your server and storage – creating a new server with a larger disk and copying your data into it.
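The Rackspace memory/disk coupling described above is easy to see by just walking the doubling series – a quick sketch (the tier sizes are the ones quoted in this post; current offerings may differ):

```python
# Enumerate the Rackspace server "flavors" described above: starting at
# 256MB RAM / 10GB disk, each doubling of memory doubles the disk, up to
# 16GB RAM / 640GB disk. Storage cannot be scaled independently of memory.
ram_mb, disk_gb = 256, 10
tiers = []
while ram_mb <= 16 * 1024:  # stop after the 16GB tier
    tiers.append((ram_mb, disk_gb))
    ram_mb *= 2
    disk_gb *= 2

for ram, disk in tiers:
    print(f"{ram:>6} MB RAM -> {disk:>4} GB disk")
```

Running this yields seven tiers, from (256MB, 10GB) to (16384MB, 640GB) – which is exactly why, if your application is storage-heavy but memory-light, you end up paying for RAM you don’t need.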

Amazon offers several distinct types of storage within EC2. The default storage you get with each server you create in the cloud is called “ephemeral” storage. You then have the option of allocating and attaching Elastic Block Store (EBS) volumes, and there is also an object store called Simple Storage Service (S3). Ephemeral and EBS storage are standard “block storage” devices – meaning they are viewed and used as disks attached to your server (/dev/sdg in Linux, D: in Windows) – while S3 requires an API or other tools to integrate with your systems. The good part about the EC2 storage offerings is that you have some powerful options as you build for the cloud; the hard part is mapping the right resources to your applications and integrating them with your existing processes. Specifically, the base storage is ephemeral: if you power off the server, or it has a hard fault, all the data on that storage is lost. Everything on these drives (boot parameters, application updates, user data, logs, etc.) is subject to loss when you power off the machine. There are several ways to handle this situation:

1. Build your servers from a formula or other sources every time you start them, so that you don’t depend on the base storage being persistent;
2. Use Amazon or third-party tool sets to periodically “bundle” your servers into S3 (effectively taking a snapshot of the server); or
3. Attach EBS storage to your image and keep your important data on persistent storage.
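To make the third option concrete, here is a minimal sketch of creating and attaching an EBS volume. The helper function and its parameter names are illustrative; the commented-out calls use the modern boto3 SDK (which postdates this post), and the instance ID and availability zone are placeholders, not real values:

```python
# Hypothetical helper: builds the parameter sets for the two EBS API calls
# (create a volume, then attach it to a running instance as a block device).
def ebs_request(size_gb, availability_zone, instance_id, device="/dev/sdg"):
    if not 1 <= size_gb <= 1024:  # EBS volumes are allocated in 1GB steps up to 1TB
        raise ValueError("EBS size must be between 1 GB and 1 TB")
    create_params = {"Size": size_gb, "AvailabilityZone": availability_zone}
    attach_params = {"InstanceId": instance_id, "Device": device}
    return create_params, attach_params

# With boto3 (assumed, not part of the original post), the calls would look like:
#   import boto3
#   ec2 = boto3.client("ec2")
#   create_kw, attach_kw = ebs_request(100, "us-east-1a", "i-0123456789abcdef0")
#   vol = ec2.create_volume(**create_kw)
#   ec2.attach_volume(VolumeId=vol["VolumeId"], **attach_kw)
```

The volume then shows up inside the server as a normal block device (/dev/sdg in the example above), which you format and mount like any other disk – and its contents survive a power-off of the instance.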

Turning to granularity, we find a wide range in the increments of storage available from the various cloud providers. There is the “included” storage mentioned above, which is often based on the size of the server and the requested OS type. To add storage, some providers (such as Amazon) allow 1GB increments up to 1TB, while others (like Flexiscale) allow only fixed increments of 50GB/100GB/250GB. With Rackspace, you can resize both server and storage according to the defined fixed ratios, but since these are bound to memory and CPU, there is no independent scaling of storage. The bare-metal cloud provider NewServers allows iSCSI storage to be attached to your servers in 250GB increments. In the cloud these increments really matter: because you pay by the GB/month, needing just a little more storage could mean purchasing 10x the storage you actually need, or paying for memory and compute you don’t.
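The cost impact of these increments is simple arithmetic. A small sketch (provider tiers taken from the examples above; the rounding-up rule for oversized requests is a simplifying assumption):

```python
import math

def provisioned_gb(needed_gb, increments):
    """Smallest allocatable size >= needed_gb, given a provider's allowed sizes."""
    eligible = [size for size in increments if size >= needed_gb]
    if eligible:
        return min(eligible)
    # If the need exceeds the largest tier, stack multiples of it (a simplification).
    largest = max(increments)
    return math.ceil(needed_gb / largest) * largest

# Needing 55GB with Amazon-style 1GB increments: pay for exactly 55GB.
fine = provisioned_gb(55, list(range(1, 1025)))
# The same 55GB with fixed 50/100/250GB tiers: forced up to 100GB.
coarse = provisioned_gb(55, [50, 100, 250])
# Needing 26GB from a provider with only 250GB increments: nearly 10x overbuy.
worst = provisioned_gb(26, [250])
print(fine, coarse, worst)
```

Since you are billed per GB/month, that last case means paying for 250GB every month to hold 26GB of data – the 10x overpurchase scenario described above.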

The conclusion we can draw is that there are numerous storage configuration options in the cloud, and these options become linked to the server “flavors” defined by individual cloud providers. Because you don’t have the same control or even mechanisms in the cloud as you do in the local data center, the manner in which you allocate, populate and manage data in the cloud will be different. The work you do to understand and map your applications’ requirements into cloud-based storage requires changing your processes to match those of the cloud, and often this work is cloud-specific.

Beyond configuration issues, of course, there are many other concerns. With data management, for example, you have to determine how you will get your data into the system, how you will grow your systems, and how you will protect your data. Right now most clouds offer template servers: you build them up from a pre-installed base operating system, apply updates, and then re-install the application components. As for protecting your data, there are many cloud-specific options available – from RAID-protected EBS volumes in Amazon, to data snapshots and cloning, to backup services offered by companies like Rackspace.

The bottom line is that cloud storage can be simultaneously simple and complex – just like cloud computing in general. It’s simple to use if you just want to try something new; complex if you want to integrate cloud storage into your existing processes and infrastructures.

Next: Networking in the Cloud


More Stories By Ellen Rubin

Ellen Rubin is the CEO and co-founder of ClearSky Data, an enterprise storage company that recently raised $27 million in a Series B investment round. She is an experienced entrepreneur with a record in leading strategy, market positioning and go-to-market efforts for fast-growing companies. Most recently, she was co-founder of CloudSwitch, a cloud enablement software company, acquired by Verizon in 2011. Prior to founding CloudSwitch, Ellen was the vice president of marketing at Netezza, where as a member of the early management team, she helped grow the company to more than $130 million in revenues and a successful IPO in 2007. Ellen holds an MBA from Harvard Business School and an undergraduate degree magna cum laude from Harvard University.
