In IT, the word ‘convergence’ has long connoted a promised land that lures and excites. The idea of bringing disparate strands together in a single entity promises operational harmony, relief from administrative slog, better visibility, cost efficiencies, faster time to market and improved performance, writes Sammy Zoghlami, SVP EMEA, Nutanix. Executed correctly, there’s nothing not to like. That’s why today we need IT platform convergence.
In the 2010s, hyperconverged infrastructure (HCI) helped IT teams counter technology sprawl and collapse the creaking storage/compute/networking three-tier architectures of old. Organisations that had been weighed down by expensive, elevator-sized boxes gained a sleeker alternative in which those three elements were closely combined in one machine, resulting in faster datacentres, simpler operations via a single pane of glass, reduced floor space, lower carbon emissions, no cabling clutter and an open choice of hardware partner.
Today it’s time for a new form of IT platform convergence: this time for the various platforms on which we run IT today and which cause so much complexity, expense and chaos. In short, we need a way to make hybrid multi-cloud simple and to liberate customers to focus on hitting their business goals. And we need that platform to be open and complete with rich data services that will run and manage any application, anywhere.
This complexity arose, we should remember, for a good reason. A new architecture, cloud computing, offered a compelling alternative to the on-premises client/server status quo. Companies found that by using public cloud they could reduce deployment time and cost, easily try out ideas in a virtualised sandbox, access elastic IT resources and reclaim chunks of time previously spent on patching and updating. But public cloud isn’t for everybody.
Why? Because some companies fear that laws, regulations or security snafus could leave them exposed, or because some workloads are simply too hard and risky to move. So many organisations have decided to leave those workloads as they are, or have modernised their environments into private clouds with high levels of virtualisation to curb costs and increase server utilisation. Still others work with co-location or outsourcing partners to access state-of-the-art systems and highly skilled people. They may also be exploring further approaches such as edge computing, which brings compute and storage close to distributed applications to optimise performance.
The result today is the dominance of Hybrid IT, where workloads run on the systems that suit them and organisations must manage the resulting disparate services as best they can.
Of course, some zealots say organisations should escape this complexity and go cloud-first or cloud-exclusive, but few mature companies have the appetite for that, and many self-proclaimed cloud-first organisations have since retreated to a form of realpolitik, just as mainframes persisted in enterprises for decades after the emergence of client/server. IT is tough…
The “New Hyperconvergence”
That’s why today we need an approach that lets CIOs wrap their arms around this underlying complexity: a sort of supra-platform or blanket fabric that covers it all. It is, in effect, a new form of convergence that helps IT manage the proliferation of platforms and the data, operational architectures and security models they affect.
We need a way to manage all these platforms together: a remote control for IT through which workloads can easily be visualised, managed and moved via virtualisation, automation and management consoles.
We need a simple way to build, operate, consume and govern clouds. In short, what we need is the New Hyperconvergence: IT hubs that restore order and don’t allow our luxury of choice to become an administrative burden. The tools to do this are emerging today. They abstract control away from the individual platforms to give administrators a 360-degree view of operations, parsing underlying complexity and surfacing only what is needed.
Specifically, we need broader use of containers for granular service deployment and control beyond standard virtualisation. We need the ability to shift resources dynamically to the clouds that are the best environments for given workloads, so that, for example, certain AI workloads run on Azure.
We need capacity management so we know how to allocate resources and gain from balanced workloads that perform well without breaching capacity licensing. We need the ability to run an environment such as HPE GreenLake on a pay-as-you-go basis. In these ways, IT can be the high-level controller rather than the plumber: allocating resources where they are needed, planning capacity, moving workloads from platform to platform and so on. This will not only make our IT systems deliver better ROI to the business; it will also support fast-changing business models, go-to-market approaches and strategy changes.
Cloud orchestration is not easy, but successful implementations mask that underlying complexity, just as virtualisation and containers disguise the complexity of running multiple operating systems or workloads on the same hardware. Once upon a time people talked about ‘Martini computing’: having access to IT services “anytime, anyplace, anywhere”, as the song in the TV ad for the drink had it.
Today, that’s entirely achievable, and we can make our applications manageable, secure and performant regardless of what they sit on. If you are one of the majority of organisations that have accumulated multiple IT operating platforms over time, whether well managed or a bewildering, spaghetti-like morass, it’s time to look at the next wave of convergence and restore order and control.