Vendor lock-in is a concern when moving to the Cloud. Depending too much on anything is a risk and relying on Amazon, Google or Microsoft is no different. What if they go out of business? No one expects that to happen any time soon, but unexpected things happen. It is just something to think about when deciding on a Cloud strategy. Do we put all our eggs in one basket, or not?
I don’t expect any of the Cloud’s “Big Three” to go bankrupt any time soon. On the contrary, I think they will grow and become the biggest companies in the world. So there must be other reasons why, according to the RightScale 2018 State of the Cloud Report, 81% of enterprises opt for a multi-cloud strategy.
So let’s explore some of the advantages:
If you design your solutions such that your workload can be moved from one Cloud to another, your organization has the flexibility to make that move when pricing or differentiating services demand it. For this to be possible, your teams need to design solutions with this portability in mind. That is not easy, especially since Cloud providers typically try to lure us in the other direction: either with “managed” services that require less operations effort on our end, or with building blocks that integrate really, really well with each other. For example, the “Big Three” all offer (serverless) functions-as-a-service, which are well suited for event-based solutions and integrate easily with their storage and messaging services acting as event sources. Such a function responds to an object placed in Azure Storage (Blob) or Amazon S3, performs some processing, and so on. With ‘functions’ as the main abstraction and unit of deployment and scaling, this offers unprecedented ease of development and hence productivity, but the portability of such a solution is lower, even though separating the vendor-specific handler from the core logic helps (see the sketch below). There is a constant trade-off to be made between productivity and portability.
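To make that trade-off a bit more concrete, here is a minimal sketch of what separating the vendor-specific handler from the core logic could look like, using AWS Lambda with an S3 event source as the example. The function names are mine, not from any particular framework.

```python
import boto3

# Portable core logic: plain Python in, plain Python out, no Cloud SDKs involved.
def process_object(contents: bytes) -> str:
    """Business logic that stays identical no matter which Cloud triggers it."""
    return contents.decode("utf-8").upper()


# Thin AWS-specific adapter: translates the Lambda/S3 event into plain arguments.
# Only this layer needs rewriting when the workload moves to another Cloud.
s3 = boto3.client("s3")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return process_object(body)
```

An Azure Functions adapter would do the same translation from a blob trigger, while process_object stays untouched.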
A multi-cloud strategy could improve availability. If a Cloud has geographical regions and availability zones (isolated datacenters with their own networking and power supplies within these regions), then a different Cloud could be seen as the next level of isolation and availability.
For this to work for any solution, thus allowing for seamless failover, I see many challenges and restrictions limiting innovation, for example when using a Cloud’s differentiating services. On the other hand, with the rise of containerisation and Kubernetes emerging as the de facto standard for container orchestration, I think this is more feasible than ever. Keep an eye out for Kubernetes federation and other multi-cluster approaches.
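As a rough illustration of where this could go, here is a sketch using the official Kubernetes Python client, assuming you run one cluster per Cloud and have kubeconfig contexts for both. The context names and manifest file are placeholders of my own.

```python
from kubernetes import config, utils

# Placeholder kubeconfig contexts, one cluster per Cloud.
contexts = ("aws-cluster", "azure-cluster")

for ctx in contexts:
    api_client = config.new_client_from_config(context=ctx)
    # The same portable manifest is applied to both clusters;
    # failover then becomes a matter of traffic routing, not re-platforming.
    utils.create_from_yaml(api_client, "deployment.yaml")
```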
There is not much that separates the core cloud experience provided by the major cloud providers, if you look at them from a distance. But there are differentiating and unique services that address different needs, which surface once you take a closer look:
And the list goes on.
Organizations moving their workloads to the Cloud are in the middle of an improvement process and want to use that move to actually implement the change. There is a will, an urgency, to up their game: an evolution from “being Agile” to “true devops”. I know this is not an easy sell, but, as I wrote earlier this year, I believe that having developers work with the platform of their choosing is important: it ensures ownership and responsibility, which results in better productivity.
So, should we go for a multi-Cloud strategy? It depends, as always, on whether these advantages outweigh the disadvantages:
Teams have more to choose from, which requires more knowledge, ideally of multiple Cloud platforms. So while having more options should be a good thing, it can be overwhelming and slow things down when searching for the perfect solution. Sometimes good is simply good enough.
Cloud providers typically charge egress fees to move data out of their Cloud. We are talking cents per GB, but ingress for one is egress for the other. So while it will not break the bank, it does add up.
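A quick back-of-the-envelope calculation shows what “adds up” could mean. The rate and volume below are illustrative placeholders, not a quote from any provider’s price list.

```python
# Illustrative numbers only: egress rates differ per provider, region and volume tier.
egress_rate_per_gb = 0.09      # dollars per GB, a commonly seen ballpark figure
daily_transfer_gb = 500        # data shipped to the other Cloud each day

monthly_cost = egress_rate_per_gb * daily_transfer_gb * 30
print(f"~${monthly_cost:,.0f} per month")   # ~$1,350 per month
```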
Furthermore, another Cloud means yet another vendor to manage. Someone has to take care of it and it is just a bit more work on their plate.
Our systems are ever more distributed and communicate over the wire, which involves latency. Co-locating in a single region will result in lower latency than when the client and server are spread over geographical regions. So what about inter-Cloud latency? Latency between Clouds will be greater than when co-locating in a single region of a single Cloud. That said, I expect that the geographical distance between datacenters contributes more to the increased latency than the difference in Cloud provider; these networks are fast and well-connected, so practically limited only by the speed of light. But you have to measure to know for sure.
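If you do want to measure it, a minimal sketch could look like this. The endpoints are placeholders for health-check URLs of your own services in the regions and Clouds you want to compare.

```python
import time
import urllib.request

# Placeholder endpoints: point these at your own deployments.
endpoints = {
    "same-region": "https://service.eu-west-1.example.com/health",
    "other-cloud": "https://service.westeurope.example.net/health",
}

for name, url in endpoints.items():
    samples = []
    for _ in range(10):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    print(f"{name}: median round-trip {samples[len(samples) // 2]:.1f} ms")
```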