2011 was a huge year for cloud computing. Although the past 10 years saw the steady growth and adoption of Software-as-a-Service, 2011 was definitely dominated by IaaS growth. Finally, the internet infrastructure has reached a point where it makes sense to host your servers in the cloud, and the tools are available to make this happen safely and easily.
But there still seems to be some reluctance on the part of organizations to move their servers to the cloud. And this can mainly be attributed to a short list of concerns which seem to come up over and over in cloud-related debates.
Security is still a major source of concern when it comes to cloud adoption, and the new invasive laws being proposed in Europe and the United States aren’t making anyone feel safer about their private customer data.
Despite its great track record, the cloud still lacks the ability to offer clients the transparency and control they need to feel safe.
Then, there’s the fact that a cloud provider acts as a single point of attack which could be used to access thousands or even millions of user accounts. Although this is probably unrealistic when we’re talking about a network security breach, it’s definitely a major point of concern when it comes to government interference and the use of police force to seize data and servers.
Cloud-based applications – and particularly SaaS – must offer the ability to interact with other services. This way, a company can create other applications which communicate and interact directly with the data hosted on the SaaS provider’s servers.
Ideally, data and processing should simply be a service. And companies should feel free to create their own front-end, and safely integrate data components into other third-party systems.
Although APIs have helped a lot in this respect, their functionality is still very limited, and there has been very little standardization enforced within this area.
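To see why the lack of standardization hurts, here is a minimal sketch. The provider names, payload shapes, and field names below are invented for illustration; the point is that two SaaS vendors exposing the same logical data (customer records) through differently shaped JSON APIs each force the integrator to write a bespoke adapter.

```python
import json

# Hypothetical payloads from two imaginary SaaS providers, each exposing
# customer records through a differently shaped JSON API.
provider_a = json.loads('{"customers": [{"id": 1, "full_name": "Ada"}]}')
provider_b = json.loads('{"data": {"records": [{"uid": "7", "name": "Grace"}]}}')

def normalize_a(payload):
    """Map provider A's schema onto a common in-house record format."""
    return [{"id": str(c["id"]), "name": c["full_name"]}
            for c in payload["customers"]]

def normalize_b(payload):
    """Map provider B's schema onto the same common format."""
    return [{"id": r["uid"], "name": r["name"]}
            for r in payload["data"]["records"]]

# Without an agreed standard, every new provider means another adapter
# like the two above before the data can be combined.
combined = normalize_a(provider_a) + normalize_b(provider_b)
print(combined)
```

A standardized schema or API convention would collapse both `normalize_*` functions into one, which is exactly the kind of interoperability the passage above is asking for.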
Another area that requires standardization is application and data portability.
IT projects have a high rate of failure, since these new implementations are often complex, demand careful planning, and require buy-in from many individuals. Also, business requirements can change overnight, making a system obsolete.
The cloud adds an extra layer of complexity to this equation, since your company is building its long-term plans on the shoulders of a third party. What happens if the cloud host goes out of business? What happens if the service degrades later on? What if the company enters into a partnership which prohibits it from using the cloud?
Businesses need to know that they can take their servers and applications back in-house, or move them to another cloud provider with ease.
Businesses need a consistent and standardized way to monitor server performance across multiple third-party providers. This was a particularly important issue in 2011, as we saw many of the leading cloud hosts crash due to activity spikes. In many industries, inconsistent cloud performance can be a serious business problem. 30 minutes of downtime, twice per week, would be simply unacceptable.
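A standardized monitoring approach could be as simple as applying one availability yardstick to every provider. The sketch below is illustrative only: the provider names, sample data, and 99.9% SLA target are invented assumptions, but the metric itself (fraction of sampled intervals the host responded) is the usual starting point.

```python
# Hypothetical uptime samples: (provider, minute-of-hour, responded?).
# "host_a" misses two one-minute checks; "host_b" answers every check.
samples = (
    [("host_a", m, m not in (10, 11)) for m in range(60)] +
    [("host_b", m, True) for m in range(60)]
)

def availability(samples, provider):
    """Fraction of sampled intervals in which the provider responded."""
    checks = [up for who, _, up in samples if who == provider]
    return sum(checks) / len(checks)

def meets_sla(samples, provider, target=0.999):
    """Apply the same standardized threshold to every provider."""
    return availability(samples, provider) >= target

for p in ("host_a", "host_b"):
    print(p, round(availability(samples, p), 4), meets_sla(samples, p))
```

Because the same function scores every provider, the numbers are directly comparable, which is what is missing when each vendor reports uptime on its own terms.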
Larger organizations have to be very careful about closely monitoring user accounts for compliance, financial and security purposes. These processes are usually standardized internally, and any new applications need to be approved and modified to conform to these standards. This is hard to do with closed-source services which are hosted on third-party servers.
Although these issues continue to persist, there seems to be a push towards an “open cloud”, where vendors begin to highlight transparency and control as key features of their services. But we’re still probably a few years away from this becoming standard business practice within the cloud. Much of that is because larger established vendors will be reluctant to give up their control, while smaller vendors will lack the resources to do this well.