Comparing Clouds: IaaS Provisioning Experience

There is no perfect cloud platform. Shocking, I know. Organizations choose the cloud that best fits their values and needs. Many factors go into those choices, and it can depend on who is evaluating the options. A CIO may care most about the vendor’s total product portfolio, strategic direction, and ability to fit into the organization’s IT environment. A developer may look at which cloud offers the ability to compose and deploy the most scalable, feature-rich applications. An Ops engineer may care about which cloud gives them the best way to design and manage a robust, durable environment. In this series of blog posts, I’m going to look at five leading cloud platforms (Microsoft Azure, Google Compute Engine, AWS, Digital Ocean, and CenturyLink Cloud) and briefly assess the experience they offer to those building and managing their cloud portfolio. In this first post, I’ll flex the infrastructure provisioning experience of each provider.

DISCLAIMER: I’m the product owner for the CenturyLink Cloud. Obviously my perspective is colored by that. However, I’ve taught three well-received courses on AWS, use Microsoft Azure often as part of my Microsoft MVP status, and spend my day studying the cloud market and playing with cloud technology. While I’m not unbiased, I’m also realistic and can recognize strengths and weaknesses of many vendors in the space.

I’m going to assess each vendor across three major criteria: how do you provision resources, what key options are available, and what stands out in the experience.

Microsoft Azure

Microsoft added an IaaS service last year. Their portfolio of cloud services is impressive as they continue to add unique capabilities.

How do you provision resources?

Nearly all Azure resources are provisioned from the same Portal (except for a few new services that are only available in their next generation Preview Portal). Servers can be built via API as well. Users can select from a range of Windows and Linux templates (but no Red Hat Linux). Microsoft also offers some templates loaded with Microsoft software like SharePoint, Dynamics, and BizTalk Server.
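As a sketch of that API route, the cross-platform Azure command-line tool can build a VM in a single call. The image and DNS names below are placeholders I made up, and the command is only assembled here, not executed:

```shell
# Sketch: an Azure VM create command. All values are placeholder assumptions;
# a real run requires the azure CLI installed and a configured subscription.
IMAGE="Ubuntu-12_04-LTS"     # a gallery image name (placeholder)
DNS_NAME="my-demo-vm"        # becomes my-demo-vm.cloudapp.net
ADMIN_USER="azureuser"

# Assemble the command instead of running it, so the shape is visible.
CREATE_CMD="azure vm create $DNS_NAME $IMAGE $ADMIN_USER --location \"West US\""
echo "$CREATE_CMD"
```

The same name/image/size choices from the Portal map onto these arguments and flags.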


When building a server, users can set the server’s name and select from a handful of pre-defined instance sizes.


Finally, the user sets the virtual machine configuration attributes and access ports.


What key options are available?

Microsoft makes it fairly easy to reference custom-built virtual machine image templates when building new servers.


Microsoft lets you create or reference a “cloud service” in order to set up a load-balanced pool.


Finally, there’s an option to spread the server across fault domains via “availability sets” and set up ports for public access.


What stands out?

Microsoft offers a “Quick Create” option where users can spin up VMs by just providing a couple basic values.


There are lots of VM instance sizes, but no sense of the cost while you’re walking through the provisioning process.


Developers can choose from any open source image hosted in the VM Depot. This gives users a fairly easy way to deploy a variety of open source platforms onto Azure.


Google Compute Engine

Google also added an IaaS product to their portfolio last year. They don’t appear to be investing much in the UI experience, but their commitment to fast acquisition of robust servers is undeniable.

How do you provision resources?

Servers are provisioned from the same console used to deploy most any Google cloud service. Of course, you can also provision servers via the REST API.


By default, users see a basic server provisioning page.


The user chooses a location for their server, what instance size to use, the base OS image, which network to join, and whether to provide a public IP address.
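Those same choices (zone, machine type, image, network) translate directly to a command-line call. Using today’s gcloud tool as the illustration, with placeholder values of my own:

```shell
# Sketch: the GCE provisioning form expressed as a gcloud command.
# Values are placeholder assumptions; the command is assembled, not executed.
ZONE="us-central1-a"
MACHINE_TYPE="n1-standard-1"
IMAGE="debian-7"             # one of the built-in Linux images

CREATE_CMD="gcloud compute instances create demo-vm --zone $ZONE --machine-type $MACHINE_TYPE --image $IMAGE"
echo "$CREATE_CMD"
```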


What key options are available?

Google lets you pick your boot disk (standard or SSD type).


Users have the choice of a few “availability options.” This includes an automatic VM restart for non-user initiated actions (e.g. hardware failure), and the choice to migrate or terminate VMs when host maintenance occurs.


Google lets you choose which other Google services you can access from a cloud VM.


What stands out?

Google does a nice job of letting you opt in to specific behavior. For instance, you choose whether to allow HTTP/HTTPS traffic, whether to use fixed or ephemeral public IPs, how host failures and maintenance should be handled, and which other services can be accessed. Google gives a lot of say to the user, and it’s very clear what each option does. While there are some things you may have to look up to understand (e.g. “what exactly is their concept of a ‘network’?”), the user experience is straightforward: easy enough for a newbie and powerful enough for a pro.

Another thing that stands out here is the relatively sparse set of built-in OS options. You get a decent variety of Linux flavors, but no Ubuntu. And no Windows.


Amazon Web Services

Amazon EC2 is the original IaaS, and AWS has since added tons of additional application services to their catalog.

How do you provision resources?

AWS gives you both a web console and an API to provision resources. Provisioning in the UI starts by asking the user to choose a base machine image: there is a set of “quick start” images, or you can browse a massive catalog or use a custom-built image of your own.


Once the user chooses the base template, they select from a giant list of instance types. Like the above providers, this instance type list contains a mix of different sizes and performance levels.


At this stage, you CAN “review and launch” and skip the more advanced configuration. But, we’ll keep going. The next step gives you options for how many instances to spin up and, optionally, which virtual private cloud (VPC) to place them in.


Next you can add storage volumes to the instance, set metadata tags on the instance, and finally configure which security group to apply. Security groups act like a firewall policy.
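The whole wizard above collapses into a single CLI call if you prefer. The IDs here are placeholders, and the command is only assembled, not sent to AWS:

```shell
# Sketch: the EC2 launch wizard as one AWS CLI command.
# AMI and security group IDs are placeholders, not real resources.
AMI="ami-00000000"           # step 1: base machine image
TYPE="m1.small"              # step 2: instance type
SG="sg-00000000"             # security group (the firewall policy)

CREATE_CMD="aws ec2 run-instances --image-id $AMI --instance-type $TYPE --count 1 --security-group-ids $SG"
echo "$CREATE_CMD"
```

Notice that the instance count, image, type, and security group each correspond to one wizard screen.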


What key options are available?

The broader question might be what is NOT available! Amazon gives users a broad set of image templates to pick from. That’s very nice for those who want to stand up pre-configured boxes with software ready to go. EC2 instance sizes represent a key decision point, as you have 30+ different choices. Each one serves a different purpose.

AWS offers some instance configurations that are very important to the user. Identity and Access Management (IAM) roles are nice because they let the server run with a certain set of credentials; this way, the developer doesn’t have to embed credentials on the server itself when accessing other AWS services. The local storage in EC2 is ephemeral, so the “shutdown behavior” option is important: if you stop a box, you retain its storage; if you terminate it, any local storage is destroyed.


Security groups (shown above) are ridiculously important as they control inbound traffic. A casual policy gives you a large attack surface.
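A narrow rule is the safer default. As a sketch, opening SSH to a single address rather than the whole internet looks like this (the group ID and source address are placeholders):

```shell
# Sketch: a deliberately narrow security group ingress rule -- SSH from one
# address instead of 0.0.0.0/0. IDs are placeholders; nothing is executed.
SG="sg-00000000"

RULE_CMD="aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 22 --cidr 203.0.113.10/32"
echo "$RULE_CMD"
```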

What stands out?

It’s hard to ignore the complexity of the EC2 provisioning process. It’s very powerful, but there are a LOT of decisions to make and opportunities to go sideways. Users need to be smart and consider their choices carefully (although admittedly, many instance-level settings can be changed after the fact if a mistake is made).

The AWS community catalog has 34,000+ machine images, and the official marketplace has nearly 2000 machine images. Pretty epic.


Amazon makes it easy to spin up many instances of the same type. Very handy when building large clusters of identical machines.


Digital Ocean

Digital Ocean is a fast-growing, successful provider of virtual infrastructure.

How do you provision resources?

Droplets (the Digital Ocean equivalent of a virtual machine) are provisioned via web console and API. For the web console, it’s a very straightforward process that’s completed in a single page. There are 9 possible options (of which 3 require approval to use) for Droplet sizing.


The user then chooses where to run the Droplet, and which image to use. That’s about it!
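That single page maps onto an equally small API request. Here’s a sketch of a Digital Ocean v2 droplet-create payload; the token is a placeholder and the actual call is shown as a comment rather than executed:

```shell
# Sketch: the one-page Droplet form as a Digital Ocean v2 API request body.
# The size/region/image slugs are examples; the curl call is not run here.
BODY='{"name":"demo-droplet","region":"nyc2","size":"512mb","image":"ubuntu-14-04-x64"}'
echo "$BODY"

# curl -X POST "https://api.digitalocean.com/v2/droplets" \
#   -H "Authorization: Bearer $DO_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```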

What key options are available?

Hidden beneath this simple façade are some useful options. First, Digital Ocean makes it easy to choose a location and see what extended options are available in each. The descriptions for each “available setting” are a bit light, so it’s up to the user to figure out the implications of each.


Digital Ocean only supports Linux, but they offer a good list of distributions, and even some ready-to-go application environments.


What stands out?

Digital Ocean thrives on simplicity and clear pricing. Developers can fly through this process when creating servers, and the cost of each Droplet is obvious.


CenturyLink Cloud

CenturyLink – a global telecommunications company with 50+ data centers and $20 billion in annual revenue – has used acquisitions to build out its cloud portfolio, starting with Savvis in 2011 and continuing with AppFog and Tier 3 in 2013.

How do you provision resources?

Like everyone else, CenturyLink Cloud provides both a web and API channel for creating virtual servers. The process starts in the web console by selecting a data center to deploy to, and which collection of servers (called a “group”) to add this to.


Next, the user chooses whether to make the server “managed” or not. A managed server is secured, administered, and monitored by CenturyLink engineers, while still giving the user full access to the virtual server. There are just two server “types” in the CenturyLink Cloud: standard servers with SAN-backed storage, or Hyperscale servers with local SSD storage. If the user chooses a Hyperscale server, they can then select an anti-affinity policy. The user then selects an operating system (or customized template), and will see the projected price show up on the left hand side.


The user then chooses the size of the server and which network to put it on.
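The API channel takes the same inputs. Here’s a sketch of a v2 server-create request body; the field names and template ID are my assumptions from the docs of the time, and nothing is actually sent:

```shell
# Sketch: a CenturyLink Cloud v2 API server-create body. Field names and
# the template/group IDs are assumptions for illustration; not executed here.
BODY='{"name":"demo","groupId":"GROUP-ID","sourceServerId":"UBUNTU-14-64-TEMPLATE","cpu":2,"memoryGB":4,"type":"standard"}'
echo "$BODY"

# curl -X POST "https://api.ctl.io/v2/servers/$ACCOUNT_ALIAS" \
#   -H "Content-Type: application/json" -d "$BODY"
```

Note that cpu and memoryGB are independent numbers rather than an instance-size name, reflecting the exact-sizing model described below.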

What key options are available?

Unlike the other clouds highlighted here, the CenturyLink Cloud doesn’t have the concept of “instance sizes.” Instead, users choose the exact amount of CPU, memory, and storage to add to a server. For CPU, users can also choose vertical Autoscale policies that scale a server up and down based on CPU consumption.


Like a few other clouds, CenturyLink offers a tagging ability. These “custom fields” can store data that describes the server.


It’s easy to forget to delete a temporary server, so the platform offers the ability to set a time-to-live. The server gets deleted on the date selected.


What stands out?

In this assessment, only Digital Ocean and CenturyLink actually have price transparency. It’s nice to actually know what you’re spending.


CenturyLink’s flexible sizing is convenient for those who don’t want to fit their app or workload into a fixed instance size. Similar to Digital Ocean, CenturyLink doesn’t offer 19 different types of servers to choose from. Every server has the same performance profile.


Each cloud offers their own unique way of creating virtual assets. There’s great power in offering rich, sophisticated provisioning controls, but there’s also benefit to delivering a slimmed down, focused provisioning experience. There are many commonalities between these services, but each one has a unique value proposition. In my subsequent posts in this series, I’ll look at the post-provisioning management experience, APIs, and more.

Author: Richard Seroter

Richard Seroter is Director of Developer Relations and Outbound Product Management at Google Cloud. He’s also an instructor at Pluralsight, a frequent public speaker, the author of multiple books on software design and development, and a former editor plus former 12-time Microsoft MVP for cloud. As Director of Developer Relations and Outbound Product Management, Richard leads an organization of Google Cloud developer advocates, engineers, platform builders, and outbound product managers that help customers find success in their cloud journey. Richard maintains a regularly updated blog on topics of architecture and solution design and can be found on Twitter as @rseroter.

10 thoughts

  1. Very nice review! One thing would help – the use case you are provisioning the server for.

    For example, is it HPC scenario, where you are planning to bring multiple instances up and down.

    Or is it a “traditional colo” scenario – where fixed number of instances will be running continuously (you may occasionally change “hardware platform” and bring it back up).

    Or, third, you want a “load-balancing” scenario, where you build an instance, but want to set up the rules that would launch and terminate additional instances automatically.

    Provisioning an instance through the console often turns into a very small part of a complete cloud experience.

  2. A question I have, is how the resources are allocated? Is the CPU and RAM dedicated i.e. a 1:1 ratio or are they shared? Storage I know is carved up like a pie but the big ones are CPU and RAM.

    1. Most clouds have some level of over-subscription, as few VMs actually need constant, dedicated horsepower. Providers don’t really talk much about this, but they are likely using a mix of homegrown algorithms and hypervisor capabilities to carefully over-allocate.
