Stuck with on-prem infrastructure? There are ways to bring in cloud benefits

On-prem infrastructure is here to stay for many orgs, writes Andrew Oliver.

While cloud computing is useful for many services and applications, not everything is moving. Most IT spend will continue to focus on on-prem infrastructure until 2025, according to Gartner, writes Andrew Oliver, senior director of product marketing at MariaDB Corporation.

Most IT shops have several cloud applications, but virtually all have their own data centre full of virtual machines and even some “bare metal.” Sometimes this data centre is like a well-organised, manicured neighbourhood of services and applications. Other times it is more of a sprawl.

Reasons for On-Prem Infrastructure Resilience

On-prem infrastructure can still benefit from some cloud-like approaches, says Andrew Oliver.

Some may ask: why not take this opportunity to leave it all behind and “lift and shift” your way to cloud nirvana? Not everything works that well on the utility model. Latency requirements, compliance issues, and cloud cost economics prevent some applications from migrating. And shipping an application to someone else’s servers does not necessarily make economic sense if you have recently paid for your own infrastructure.

Many applications will stay on-premises for years, if not forever, with regulations preventing some from migrating to public clouds at all. Even where regulations allow cloud migration, there can be significant compliance costs.

For example, Canadian banking regulations are more conservative than their US counterparts, particularly the rules that govern customer data. While AWS has published a whitepaper explaining how Canadian banks can navigate these regulatory requirements, in many cases it may be more cost-effective to keep infrastructure on-premises than to spend the money on lawyers and compliance.

See also: You can now use AWS to run GPU workloads — in containers, on-premises

Resilience and latency are crucial requirements for many use cases, such as manufacturing control systems. For most web applications, completing 95% of requests in under 3ms is excellent. However, the other 5% may mean far more to a shop-floor control system, and avoiding the longer network hop to a public cloud can be important to manufacturers.
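To make that tail concrete, here is a minimal sketch with made-up latency samples, showing how a p95 figure can look excellent while the p99 reveals the slow requests a control loop actually feels:

```python
import statistics

# Hypothetical latencies in milliseconds: roughly 96% of requests are
# served locally in ~2ms, while ~4% cross a congested WAN link.
fast = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 2.5, 2.2, 2.1, 2.0]
slow = [40.0, 55.0, 38.0, 61.0, 47.0]
samples_ms = fast * 96 + slow * 8  # 1,000 samples in total

# statistics.quantiles(n=100) returns the 99 cut points p1..p99.
cuts = statistics.quantiles(samples_ms, n=100)
p95, p99 = cuts[94], cuts[98]

print(f"p95 = {p95:.1f} ms")              # looks excellent
print(f"p99 = {p99:.1f} ms")              # the tail a control loop feels
print(f"max = {max(samples_ms):.1f} ms")  # the worst case on the shop floor
```

For a shop-floor control system, the p99 or even the maximum, not the p95, determines whether the line keeps moving.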

Reliability is another issue. It is unthinkable to stop a productive shop floor because a network pipe or cloud service is down. Manufacturers use hybrid on-prem infrastructure with public cloud services for disaster recovery systems and other less critical functions. Not every industry or use case can tolerate cloud latency or loss of control, and in these cases a strong hybrid cloud strategy is key.

Small, custom, industry- or company-specific, single-purpose applications are the bread and butter of many IT application developers. They create a sprawl of applications and application infrastructure spread across many smaller virtual machines, which is costly to maintain and difficult to evaluate for reliability and security.

Reconsidering On-Prem Infrastructure

When business units create applications and services, they usually consider only their own environment and purpose. This lets organisations develop applications quickly, but it results in a sprawl of infrastructure, databases, and applications. It also accumulates a large amount of technical debt outside the applications themselves, in the form of diverse and numerous database types and versions.

In short, the business unit that funded the project stuck IT with the cost of maintaining it – and it has no motivation to move it to the cloud. Strategically it makes more sense for some of these applications to migrate, but they must be evaluated outside their normal development cycle, because the department that funded them has no incentive to care.

Organisations can benefit from budgeting an application improvement and migration function that runs outside of the normal application cycle. This function looks at the needs of the larger organisation as well as the IT department, so it is able to migrate applications to common infrastructure and make general maintenance improvements. This function should include a regular budget that’s commensurate with the number of applications that IT manages.

See also: Down in the Goldman Sachs IT engine room, old school and open source rub shoulders

The mandate should be increased reliability, lower costs, and higher maintainability. By those criteria, some applications warrant their own database infrastructure, while the many smaller custom applications that most IT organisations run do not.

In the rush to microservices, organisations deployed discrete open source databases, with each application getting its own instance of MySQL, MariaDB, MongoDB, or PostgreSQL. While this practice often accelerates development, it hurts security, reliability, and maintenance: a MariaDB survey found that most developers and IT managers agree fewer databases improve security.

Organisations can improve overall reliability, security, and maintenance while lowering costs by creating a few central database services. These services should come with quality-of-service guarantees and a specific version and patch/upgrade policy.

The organisation should treat this on-prem infrastructure as an internal database-as-a-service. The drawback is that more applications will go down at once during a failure or outage – but tracking aggregate outages and reliability addresses this concern, and consolidating on a central database service lets the organisation concentrate its reliability and security investment in one place.
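As a back-of-the-envelope sketch – the availability figures here are assumptions for illustration, not measured data – aggregate app-hours of downtime is one way to compare sprawl against a central service:

```python
HOURS_PER_YEAR = 24 * 365

def aggregate_downtime(num_apps: int, availability: float) -> float:
    """Expected app-hours of downtime per year, summed across all apps."""
    return num_apps * HOURS_PER_YEAR * (1.0 - availability)

# Assumed figures: 40 small apps, each on its own lightly-maintained
# instance at 99.5% availability, versus one well-run central service
# at 99.95% that takes all 40 apps down together when it fails.
sprawl = aggregate_downtime(40, 0.995)    # ~1,752 app-hours/year, spread thinly
central = aggregate_downtime(40, 0.9995)  # ~175 app-hours/year, but correlated

print(f"sprawl:  {sprawl:.0f} app-hours of downtime per year")
print(f"central: {central:.0f} app-hours of downtime per year")
```

Central-service outages are more visible because they are correlated, which is exactly why the aggregate metric, rather than per-incident impressions, should drive the decision.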

Service/Support and Augmentation

Open source has been a significant driver in making IT cost-effective. Many organisations run hundreds of small open source database instances, with MySQL-compatible databases such as MySQL and MariaDB taking the lion’s share of this market.

Self-support looks reasonable for an individual business-unit-funded application with a few gigabytes of data and a handful of tables. But when the reliability of these services and applications is considered in aggregate, it may make sense to move to supported versions and bring in outside services.

For example, an organisation might move from up to 12 unsupported community database instances to a similar number of schemas on a multi-writer, replicated architecture with full support. This change moves some IT costs to a vendor, which reduces spending and increases overall reliability.
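As a minimal sketch of that consolidation – assuming the MariaDB Connector/Python package, and with hypothetical host and application names – each formerly stand-alone instance becomes a schema on the shared service with a per-app user scoped to it:

```python
import mariadb  # MariaDB Connector/Python (pip install mariadb)

# Hypothetical application names; each gets a schema on the shared service.
apps = ["inventory", "shipping", "hr_portal"]

# Hypothetical central host; credentials would come from a secrets store.
conn = mariadb.connect(host="db-central.example.internal",
                       user="dba", password="change-me")
cur = conn.cursor()

for app in apps:
    # One schema per formerly stand-alone instance. App names are
    # internal and trusted here, so simple string formatting is used.
    cur.execute(f"CREATE DATABASE IF NOT EXISTS {app}")
    # Per-app credentials confined to that schema, so one compromised
    # application cannot read a neighbour's data.
    cur.execute(f"CREATE USER IF NOT EXISTS '{app}_svc'@'%' "
                f"IDENTIFIED BY 'change-me'")
    cur.execute(f"GRANT SELECT, INSERT, UPDATE, DELETE "
                f"ON {app}.* TO '{app}_svc'@'%'")

conn.close()
```

A real migration would also dump and restore each application’s data and repoint its connection string, but the per-schema credentials are what preserve the isolation the separate instances used to provide.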

Tl;dr

Applications will continue to use on-prem infrastructure or a hybrid setup for years because of regulations, costs, or specific requirements. Organisations should create specific budgets and functions to evaluate applications and infrastructure for cloud and central services migration.

Organisations with application and service sprawl will likely benefit from centralised database infrastructure services, which can lower costs, boost security, and increase aggregate reliability across the organisation. Once services are centralised, it is also easier and more affordable to buy vendor-supported versions of popular open source databases and to outsource some functions.

Often the overall sprawl is a result of internally created externalities. Addressing them can improve not only maintenance but costs, security, and reliability as well.

Follow The Stack on LinkedIn
