Guest Column | August 19, 2016

The State Of Cloud

By Deirdre Mahon, CMO, Cloud Cruiser

Cloud is about a decade old now and has seen vast improvements in stability and reliability, along with a steady increase in the breadth of offerings, from IaaS to PaaS to SaaS. Certainly the lion’s share of cloud was taken up by SaaS, but that has changed recently, with a definite movement, driven by developers and engineers, toward IaaS and PaaS and a focus on building new apps from services offered by public providers like Amazon Web Services.

Even as improvements have been made over time, there’s still a long way to go. Let’s look back and give ourselves a score to see how we are doing. As the saying goes, “You have to look back in history to understand what lies ahead.”

The two most obvious improvements everyone can agree on are:

Ability to scale at low initial cost: avoiding any capital outlay to access the compute, storage, and networking that form the basis of any developer’s stack. All services are available on a pay-as-you-go basis, and you only pay for what you use. Capacity planning becomes easier too: provision too many servers and you waste money; provision too few and end users don’t get the best experience. By using cloud, you avoid heavy lifting such as racking and stacking servers, and you move at a much faster pace, which drives innovation and reduces risk. When things don’t go well, you can quickly turn off those services rather than painfully decommission hardware and figure out what to do with the remaining collateral damage.

Easy access to services: public cloud services are certainly easier to access, assuming your organization has no governance regulations or concerns about putting critical assets in the cloud. The five basic characteristics defined in the early days of cloud computing are:

  • on-demand self-service
  • broad network access (via any standard device)
  • resource pooling (shared)
  • rapid elasticity (scale or shrink) to match demand
  • metering (track usage/cost)

Cloud users don’t question the self-service capabilities cloud brings, nor the easy access from any network-enabled device. Those two points get top scores. Both AWS and Azure have done a much better job with online tutorials and videos helping rookie users understand how and where to get started. As you’d expect, getting users to first base is paramount in order to drive revenue.

The third characteristic, resource pooling, is fully available as a feature but still requires some human intervention, or at least business-level decisions. Some organizations will pull in their chief security officer to review what is going to the cloud, and data sovereignty will certainly come up depending on your geographic location.

Both AWS and Azure are doing a better job on global coverage. 451 Research analyst Owen Rogers covers this in a recent webcast worth checking out. He noted, “End-users are likely to pay more for hosting outside of the U.S., but that this ‘Protection Premium’ is often seen as worthwhile — data privacy, website performance, and local support appears to be driving end-users to take the safer approach rather than risk the cheaper, non-local, alternative.”

Elasticity to match demand is absolutely critical so you don’t over-spend or under-invest. Dan Hunter, a cloud architect, noted the difficulty of optimizing one area without paying for it in another. Unless you are highly knowledgeable and have done the exact same thing before (unlikely), you won’t be operating like an “orchestra director,” even though that is what the job feels like most days.

Dan noted that maximizing performance comes at a cost. Additionally, managing microservices in a public cloud is no easy feat and inevitably requires dedicated in-house teams to keep constant watch. Unfortunately, IT and support teams are usually the last to know about the business’s seasonal spikes and dips. There is still room to improve the ease of scaling up or down.

The final characteristic, metering as you go, is the area that has received the least love and attention. Many users spend millions annually yet struggle to make sense of the monthly bill, and many have resorted to primitive Excel spreadsheets.

Here are five recommendations for building an effective cloud strategy:

1. On-demand self-service: just because someone has a budget threshold, a credit card, and access to the network doesn’t mean they should be consuming cloud services. Decide which clouds you prefer for which workloads and clearly communicate the governance and policies around usage.

2. Network access: simple network access doesn’t mean you automatically consume services. With Azure, for example, you assign a subscription owner, and that owner should be the keeper of who gets access and with what permissions. Additionally, tagging is a great way to track exactly which services are consumed, but it’s no good if users don’t tag correctly. Institute a tagging policy (a minimal tag-audit sketch follows this list); it will save you hours, even days, of painful work each month.

3. Resource pooling: involve your security officers early to avoid any potential breach, and work with your cloud provider to get every question answered.

4. Elasticity: the capabilities are there, but someone on your team must keep a close eye, measuring the capacity consumed, using only what you need, tracking spend against budget and, of course, buying lower-cost options where they fit (a scaling-policy sketch also follows this list).

5. Metering: if you don’t have a reliable app that meters in real time, you will struggle; finding out after the fact is no good. Invest in a purpose-built app and it will pay dividends almost immediately. No more after-the-fact reporting, no more guessing; share the information with everyone involved and get ahead of the problem before it spirals.
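
To make the tagging policy in recommendation 2 concrete, here is a minimal sketch of a tag audit. It assumes an AWS environment, Python with boto3, and a required tag key called "project" that is purely illustrative; adapt the key, the resource types, and the filters to your own policy (the same idea applies to Azure with its own SDK).

```python
# Minimal tag-audit sketch (assumes AWS and boto3; the required tag key
# "project" is illustrative only; adapt it to your own tagging policy).
import boto3

REQUIRED_TAG = "project"  # hypothetical policy: every instance must carry this tag

ec2 = boto3.client("ec2")

untagged = []
# describe_instances is paginated, so walk every page to catch everything.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                untagged.append(instance["InstanceId"])

print(f"{len(untagged)} instance(s) missing the '{REQUIRED_TAG}' tag:")
for instance_id in untagged:
    print(f"  {instance_id}")
```

Run on a schedule, a nightly job or similar, a report like this flags untagged spend long before the monthly bill arrives.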
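
Recommendation 4 is mostly a discipline of watching and reacting, but the reacting part can be automated. Below is a minimal sketch of a target-tracking scaling policy, assuming AWS EC2 Auto Scaling with boto3; the group name "web-asg" and the 50 percent CPU target are illustrative assumptions, not prescriptions.

```python
# Minimal elasticity sketch (assumes AWS EC2 Auto Scaling and boto3; the
# group name "web-asg" and the 50% CPU target are illustrative only).
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking adds or removes instances to hold average CPU near the
# target, which is the "use only what you need" balance in practice.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    PolicyName="keep-cpu-near-50-percent",   # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```

Pair a policy like this with sensible minimum and maximum group sizes and a budget alert so the scaling stays within the limits the business has agreed to.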

There are three reasons why the metering aspect of cloud computing is not doing well:

1. Your favorite cloud provider simply cannot tell you what you are doing with any other cloud provider; multi-cloud is not their business. More importantly, they cannot tell you how your meter is running until after those services have been consumed. Ultimately, the responsibility is in your hands: if you don’t take the time to meter and measure as you go, no one else will.

2. Cloud providers have been so focused on the other aspects of cloud service delivery that metering and tracking of usage and spend have not been a priority. The data is rarely presented in business terms you can easily understand, forecasting and budgeting are practically non-existent, and multi-cloud environments are not taken into account. Companies adopting multi-cloud (the large majority) need a neutral application to get the job done, one that can also be adapted to match specific business processes (a minimal sketch of querying a provider’s billing API follows this list). Cloud usage is no longer confined to the IT team, so the app needs to be easy enough that any business or finance person can log in and glean intelligence.

3. Lastly, we have not reached a commoditized state when it comes to public cloud services. With Google entering the market, things are bound to change over the coming three years. Google has already claimed it will be the lowest-cost provider, so it is only a matter of time before the major providers treat cost as a critical competitive factor. Buyers are savvy and will readily move to another provider if they do not get the service quality or pricing they deserve. Perhaps alliances will be built or the inevitable market consolidation will occur, but buyers need to know proactively which execution venue is best at the best price. Providers, in turn, need to deliver the highest-quality service and make it easy for customers to switch to the right venue in just a few clicks.
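
To show what metering as you go can look like against a single provider, here is a minimal sketch that pulls the last seven days of spend grouped by a cost-allocation tag. It assumes AWS Cost Explorer via Python and boto3, and the tag key "project" is purely illustrative; a multi-cloud setup would run an equivalent query against each provider’s billing API and merge the results into one neutral view.

```python
# Minimal "meter as you go" sketch (assumes AWS Cost Explorer and boto3;
# the cost-allocation tag "project" is illustrative only).
import datetime

import boto3

TAG_KEY = "project"  # hypothetical cost-allocation tag

ce = boto3.client("ce")  # AWS Cost Explorer

end = datetime.date.today()
start = end - datetime.timedelta(days=7)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": TAG_KEY}],
)

# Print daily spend per tag value so finance and engineering look at the same numbers.
for day in response["ResultsByTime"]:
    print(day["TimePeriod"]["Start"])
    for group in day["Groups"]:
        tag_value = group["Keys"][0].split("$", 1)[-1] or "(untagged)"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"  {tag_value}: ${amount:,.2f}")
```

Fed into a dashboard or an alerting job instead of a spreadsheet, numbers like these turn the monthly bill from a surprise into a confirmation.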

Before you start on your cloud journey, consider the basics, and regardless of the vendor you choose, make sure you take full advantage of all the capabilities so you can maximize your investment.