Companies could save billions by ditching ‘Hotel California’ cloud for own infrastructure: VC
Companies could save billions by running IT infrastructure themselves rather than in the cloud, and need to be more aware of the long-term cost implications of moving workloads to IaaS before they are forced to consider repatriation to improve margins, warns venture capital firm Andreessen Horowitz (“a16z”).
“Across 50 of the top public software companies currently utilizing cloud infrastructure, an estimated $100B of market value is being lost among them due to cloud impact on margins relative to running the infrastructure themselves,” the VC firm — which has nearly $16.6 billion in assets under management — estimates.
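The arithmetic behind that headline figure can be sketched in a few lines. The numbers below are purely hypothetical (the function name, savings rate, and valuation multiple are our assumptions, not a16z's model): the idea is simply that recurring savings from running infrastructure in-house flow through to gross margin, and the market capitalizes recurring margin at a multiple.

```python
# Illustrative sketch of the margin-to-market-value reasoning.
# All figures are hypothetical; a16z's actual model is not reproduced here.

def market_value_impact(cloud_spend, savings_rate, valuation_multiple):
    """Estimate equity value forgone by keeping workloads in the cloud.

    cloud_spend        -- annual committed cloud spend, in dollars
    savings_rate       -- fraction of that spend recoverable by running
                          the infrastructure in-house (an assumption)
    valuation_multiple -- multiple the market applies to the recurring
                          savings that drop through to gross margin
    """
    annual_savings = cloud_spend * savings_rate
    return annual_savings * valuation_multiple

# Hypothetical company: $100M/yr cloud bill, half of it recoverable,
# recurring savings capitalized at 20x.
value_lost = market_value_impact(100e6, 0.5, 20)
print(f"${value_lost / 1e9:.1f}B of market value forgone")  # → $1.0B
```

Scaled across dozens of large public software companies with multi-hundred-million-dollar cloud bills, this is how single-digit margin points compound into the firm's $100B aggregate estimate.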
Detailing its case in a lengthy piece of analysis, the firm (which has invested in some of the world’s best-known software brands, from Box to Okta, Slack to Facebook) suggests that the high profit margins of the hyperscaler cloud “oligopoly” are driven in part by running their own infrastructure, enabling ever greater reinvestment into product and talent while buoying their own share prices: great for them, not so great for users.
“With hundreds of billions of dollars in the balance, this paradox will likely resolve one way or the other: either the public clouds will start to give up margin, or, they’ll start to give up workloads. Whatever the scenario, perhaps the largest opportunity in infrastructure right now is sitting somewhere between cloud hardware and the unoptimized code running on it,” a16z’s Sarah Wang and Martin Casado say.
Businesses should consider system design and implementation, re-architecture, third-party cloud efficiency solutions, or moving workloads to special-purpose hardware early on, they add.
Many companies realise this too late, they say, by which point workloads designed to run on hyperscaler IaaS and SaaS are hard to pry loose and repatriate: “For a new startup or a new project, the cloud is the obvious choice. And it is certainly worth paying even a moderate ‘flexibility tax’ for the nimbleness the cloud provides. The problem is, for large companies — including startups as they reach scale — that tax equates to hundreds of billions of dollars of equity value in many cases… and is levied well after the companies have already, deeply committed themselves to the cloud (and are often too entrenched to extricate themselves).”
The rise of OpEx-based IT consumption from traditional providers of hardware means infrastructure alternatives are widely available, they emphasise, in what will be music to the ears of the Dells, HPEs, Lenovos and Supermicros et al of the world: “Interestingly, one of the most commonly cited reasons to move to the cloud early on — a large up-front capital outlay (CapEx) — is no longer required for repatriation. Over the last few years, alternatives to public cloud infrastructures have evolved significantly and can be built, deployed, and managed entirely via operating expenses (OpEx) instead of capital expenditures,” they say.
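If repatriation no longer demands a large CapEx outlay, the decision reduces to a break-even question: how many months of the monthly saving does it take to recover the one-off migration effort? A minimal sketch, with entirely hypothetical figures (the function name and all dollar amounts are our assumptions for illustration):

```python
# Rough break-even sketch for cloud repatriation under an OpEx model.
# All figures are hypothetical and for illustration only.

def breakeven_months(cloud_monthly, self_hosted_monthly, migration_cost):
    """Months until cumulative savings cover the one-off migration cost.

    cloud_monthly       -- current monthly cloud bill, in dollars
    self_hosted_monthly -- monthly cost of leased hardware plus ops staff
    migration_cost      -- one-off engineering cost of moving workloads

    Returns None if self-hosting is not actually cheaper per month.
    """
    monthly_saving = cloud_monthly - self_hosted_monthly
    if monthly_saving <= 0:
        return None
    # Ceiling division: savings accrue at the end of each month.
    return -(-migration_cost // monthly_saving)

# Hypothetical: $400k/mo cloud bill, $250k/mo leased hardware + ops,
# $1.8M one-off migration effort.
print(breakeven_months(400_000, 250_000, 1_800_000))  # → 12
```

The model deliberately omits the harder-to-quantify terms — the “flexibility tax”, refactoring risk, and hiring for infrastructure skills — which is exactly why a16z argues the call should be made early, before entrenchment makes the migration cost term balloon.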
The rise of containers and Kubernetes has come partly as a response to these fears. Yet to what extent can they really be used to avoid lock-in and deploy “virtual data centres” that let you move workloads around for the most cost-effective compute, network, or storage? Which hyperscaler services and tools are the trickiest to extricate workloads from? The Stack would love to hear your views.