Why multicloud is bad strategy, but open source can help

Commentary: Multicloud tends to get touted as a panacea, but it causes much more harm than good. There is a smart multicloud strategy that might work.

You can be excused for buying into the magical unicorn hype about multicloud. After all, there is no shortage of vendors lining up to sell you multicloud management solutions. You know them well, because they’re the same vendors that were lining up to sell you hybrid clouds, private clouds, and pretty much anything that gave them license to say the word “cloud.” 

It’s not that multicloud is imaginary; it’s real. Most organizations use more than one cloud. But for most companies, this has nothing to do with some grand strategy; rather, it’s the absence of strategy that leads to running multiple clouds. Or, as Gartner styles it, it’s companies choosing different clouds for different workloads (otherwise known as “how enterprise IT has always worked”). Regardless, this non-strategy comes with added cost and complexity (also known as “how enterprise IT has always worked”).

Multicloud is great in theory

But of course I’d say that! I’m biased, right? I work for one of the cloud vendors (AWS). My bias, however, doesn’t stem from my current employer, but rather from a past employer. 

There we built almost exclusively on one cloud, but then made an executive decision (and very public commitment) to move everything to a different cloud provider. Sounds great, right? Just flip the “off” switch on Cloud X and flip the “on” switch on Cloud Y. Super simple!

Except that it wasn’t. It turns out our developers were building on Cloud X because it had more services with richer functionality. When asked to build the same things on Cloud Y, they correctly said, “We can’t. It doesn’t have what we need.” Over time, Cloud Y filled some of the functionality gaps, but even similar features tend to be deployed differently across cloud providers. Storage is storage, except that it’s not: every cloud’s services vary in the details. Nor is a developer who is deeply versed in Azure going to magically be productive on Google Cloud or AWS.

At my previous employer, we ended up keeping Cloud X running (even expanding use) while running Cloud Y side-by-side. It has been years, and while many services have finally moved to Cloud Y, many more remain. I’m not sure anyone involved would clap their hands and shout “hurray” if they were asked to take on yet another cloud.

It was this experience that caused me to write, well before joining AWS: “Vendors are cleaning up by selling multicloud snake oil while customers keep getting stuck with lowest common denominator cloud strategies and sky-high expenses.” I said it because I lived it.

Multicloud is “worst practice”

It was this experience, as well as others, that made me nod in agreement with the perennially cynical Corey Quinn of Duckbill Group, who described multicloud as “worst practice,” not “best practice”:

[T]he idea of building workloads that can seamlessly run across any cloud provider or your own data centers with equal ease…is compelling and something I would very much enjoy.

However, it’s about as practical as saying “just write bug-free code” to your developers—or actually trying to find the spherical cow your physics models dictate should exist. It’s a lot harder than it looks.

It’s harder than it looks because each cloud is different, with different strengths and weaknesses. If you’re into serverless, great! But serverless offerings on AWS are different from those on Azure or Google Cloud. If you want to take advantage of an amazing database option on Google Cloud, guess what? You’re not going to be able to easily port that workload to any other cloud. This isn’t because cloud vendors have nefarious designs to lock you in; it’s because they’re each trying to build useful services that customers will want to use.
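Corey’s point shows up quickly in code. Here is a minimal sketch (my own illustration, not anyone’s production code) of the same trivial HTTP “hello” endpoint written against each provider’s Python entry-point contract; the function names are arbitrary, and each section only runs inside its own provider’s runtime with that provider’s SDK and packaging:

# --- AWS Lambda (behind API Gateway) ---
# The handler receives a provider-specific event dict and a context object,
# and must return a dict shaped the way API Gateway expects.
import json

def lambda_handler(event, context):
    return {"statusCode": 200, "body": json.dumps({"message": "hello"})}

# --- Azure Functions ---
# The entry point takes an azure.functions.HttpRequest and returns an
# HttpResponse; the azure.functions package only resolves in that runtime.
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse(json.dumps({"message": "hello"}),
                             mimetype="application/json")

# --- Google Cloud Functions ---
# The entry point receives a Flask request object and can return anything
# Flask knows how to turn into a response.
def hello_http(request):
    return {"message": "hello"}

Three different signatures and three different packaging models for the same one-line piece of business logic; the provider-shaped scaffolding around that logic is exactly what doesn’t port.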

It’s also the case that however much you may like different services on different clouds, it’s hard to uproot their associated data and magically move it to other clouds. Gartner analyst Marco Meinardi discussed this at length:

[The] likelihood [of moving applications between cloud providers] is actually very low. Once deployed in a provider, applications tend to stay there. This is due to data lakes being hard — and expensive — to port and, therefore, end up acting as centers of gravity.

What sorts of applications work in a multicloud world? Those that can be highly abstracted away from the very things that make different clouds interesting, as analyst Kurt Marko has noted: “… transparent, incognizant workload movement is only possible on vanilla container platforms for the simplest of applications…”.

So…abandon hope all ye who enter here [at the multicloud party]? Maybe…not.

Open source to the rescue

I’m not sure multicloud makes sense for most workloads, but one area where companies seem able to pull it off is open source. Or, rather, wherever a suitable open source project exists.

If a company wants to maximize freedom/workload portability, as I’ve written, it can build with community-driven open source projects like PostgreSQL, Kubernetes, or any number of others. This won’t work for everything (while some of the building blocks of serverless compute have been open sourced, like AWS open sourcing Firecracker a year ago, most haven’t), but for some critical areas, it will.

An enterprise that builds with PostgreSQL, for example, has the option to self-manage that database within its data center, or self-manage it on any of the clouds, or use one of the cloud providers’ managed PostgreSQL services. Open source ensures a certain amount of portability for their workloads, as well as the ability to push vendors to compete for those workloads. A customer might determine that they want to use Microsoft’s Azure Database for PostgreSQL, which digs them into the Azure camp, making it harder to move that workload. But that’s a customer choice, and it would still be easier for them to move that PostgreSQL workload than, say, one built on a database service that is 100% exclusive to a particular cloud (e.g., Google’s Cloud Bigtable or AWS’ DynamoDB).
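To make that concrete, here is a minimal sketch (the endpoint names and the DATABASE_URL environment variable are illustrative assumptions, not anything from a real deployment). Because the application speaks plain PostgreSQL through a standard driver, the code is identical wherever the database runs; only the connection string changes as the database moves between a data center, self-managed cloud instances, or a managed service:

import os

import psycopg2  # standard PostgreSQL driver (pip install psycopg2-binary)

# The target database is chosen by configuration, not by code. For example:
#   on premises:      postgresql://app@db.internal/appdb
#   managed service:  postgresql://app@myserver.postgres.database.azure.com/appdb
conn = psycopg2.connect(os.environ["DATABASE_URL"])

# The transaction and cursor APIs are the same against every deployment.
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()

Repointing DATABASE_URL is the whole migration story for the code itself; what remains hard is moving the data, which is exactly the gravity problem Meinardi describes.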

Companies are already buying into these cloud-specific services and deriving real value from them. This is why multicloud will always be with us: not because of some grand corporate strategy (as I experienced at a previous employer, such strategies end up costing much and accomplishing little), but because individual developers will choose what works for them. Given that developers tend to like open source, using more open source may be the only truly smart multicloud strategy.

Disclosure: I work for AWS but my anti-multicloud views are mine and predate my employment at AWS by…a zillion years. 🙂
