Bede Gaming’s migration off Azure’s Cosmos DB and onto MongoDB’s multi-cloud platform Atlas involved moving over a million customer records per minute to a new platform. That was part of a major ongoing transformation, and CTO Dan Whiteley is just getting started.
First, though, that migration had to happen – and with data at this scale, that’s no mean feat: “We have an expression about being ‘positively paranoid’,” he tells The Stack. “Obviously you take all the measures and steps you need to get a high degree of confidence, you understand the data, do the dry runs. But until you get to actually doing it…”
Whiteley is an experienced CTO in a heavily regulated industry. He knows that any ambitious change project has potential pitfalls, and he took strong steps to mitigate migration risk. His team kept both the existing Cosmos DB environment and the Atlas one live simultaneously (“we were still sending the data both ways”), conducted extensive data integrity checks after the migration, and “obviously had the option of roll-backs.”
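A dual-write-plus-verification approach of the kind Whiteley describes can be sketched roughly as below. This is an illustrative sketch only, assuming the Cosmos DB side exposes a MongoDB-compatible endpoint; the connection strings, database and field names are hypothetical, not Bede Gaming’s actual setup.

```python
from pymongo import MongoClient

# Both environments stay live during the migration window (placeholder URIs, not real endpoints).
cosmos = MongoClient("mongodb://cosmos-account.example.azure.com:10255/?ssl=true")
atlas = MongoClient("mongodb+srv://cluster0.example.mongodb.net")

def record_transaction(doc: dict) -> None:
    """Send every new transaction to both stores – 'the data both ways'."""
    cosmos.gaming.transactions.insert_one(dict(doc))  # copy: insert_one adds _id in place
    atlas.gaming.transactions.insert_one(dict(doc))

def integrity_check() -> bool:
    """Post-migration spot check: compare per-day document counts across the two stores."""
    pipeline = [{"$group": {"_id": "$day", "n": {"$sum": 1}}}]  # 'day' is a hypothetical field
    old = {d["_id"]: d["n"] for d in cosmos.gaming.transactions.aggregate(pipeline)}
    new = {d["_id"]: d["n"] for d in atlas.gaming.transactions.aggregate(pipeline)}
    return old == new  # any mismatch: investigate, or fall back to the roll-back option
```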
He says cheerfully: “It went very, very well, no degradation!”
“Moving to MongoDB was a slam dunk. Great support. High performance. A very mature and enterprise-ready company” – CTO Dan Whiteley
By moving to MongoDB Atlas, he says, Bede Gaming has halved its data storage costs with zero performance loss – and gained data portability in the process.
Better yet, Whiteley thinks he’s only scratching the surface; he’s modelled cost savings of 65% to 75% after optimising some of the underlying disks used in Azure (where Atlas is running) and right-sizing instances further.
Rewind! What is Bede Gaming?
Bede Gaming, part of the MERKUR Group, is a leading supplier of software to the online gambling industry, powering some of the sector's biggest brands. It runs a live portfolio of five globally renowned customer operators and helps its partners achieve significant digital ambitions across lottery, casino, sports betting and bingo on modern infrastructure.
The award-winning gaming platform developed by Bede processes billions of transactions per year. It is scalable, modular and adaptable with open APIs, allowing operators to use its bespoke tools or to seamlessly integrate with any third-party software.
The platform also integrates into land-based Casino Management Systems and loyalty programs, offering what it describes as a "genuine omni-channel convergence solution." In addition to the platform, Bede provides native app and front-end development services as well as a robust reporting toolkit, all built on cutting-edge technology.
Behind the scenes, what that means for its IT environment, at a bare minimum, is capturing, retaining, and securing huge volumes of data.
As CTO Dan Whiteley put it, speaking at MongoDB’s .local developer event in London, where he shared his migration experience: “If you are making a deposit, or buying a lottery ticket, or hitting spin on an online game, every time you do that, there's a record of that transaction that we need to store. We’re an Azure shop and that was going into Cosmos DB.”
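For illustration, a stored record of the kind Whiteley describes might look something like the following in a document database; the schema, field names and connection string here are entirely hypothetical, not Bede’s actual data model.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster0.example.mongodb.net")  # placeholder URI

# One document per player action: a deposit, a lottery ticket, a spin...
client.gaming.transactions.insert_one({
    "player_id": "p-123456",
    "type": "spin",                    # or "deposit", "lottery_ticket", ...
    "game": "example-slots",
    "amount_minor_units": 50,          # store money in minor units (e.g. pence), not floats
    "currency": "GBP",
    "ts": datetime.now(timezone.utc),  # when the event happened
})
```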
He needed better performance, lower cost
But things had to change. “One, we needed it portable. Two, we needed to maintain or exceed the level of performance we had. And three, we wanted to take some cost out; these are expensive services to run!”
What does he mean by “portable”?
“We wanted the ability for us to deploy in any cloud environment. By default, we’re Microsoft Azure, so we run Atlas on Azure. But in theory, we could be cloud-agnostic; if we want to run private cloud, we could completely self-host. There's a high degree of flexibility there with a tool like MongoDB. It's a great database, with a load of other services that we’re yet to kind of even push on and explore,” he admits.
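In driver terms, much of that portability comes down to the connection string: the same application code can target Atlas on Azure, Atlas on another cloud, or a self-hosted replica set. A minimal sketch – all URIs below are placeholders:

```python
import os
from pymongo import MongoClient

# The deployment target is pure configuration; application code is unchanged.
uri = os.environ.get("MONGO_URI", "mongodb+srv://cluster0.example.mongodb.net")  # Atlas, any cloud
# A self-hosted deployment would just swap the URI, e.g.:
# uri = "mongodb://db1.internal:27017,db2.internal:27017/?replicaSet=rs0"
client = MongoClient(uri)
```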
"Part of being a CTO is driving a positive engineering culture" Dan Whiteley
He adds: “I’m personally a big fan of open source and open standards. That would be my default position to go to. I'm not necessarily a fan of getting locked into proprietary software by some of the big vendors.
“If you want to operate in the US, for example, you need to be able to deploy in-State,” he adds: “It's not good enough to just be in public cloud.
“So having a more portable solution, which means moving away from some of these expensive PaaS services and having a chance to reconsider the architecture and how you deploy was welcome,” concludes Whiteley.
Overcoming teething problems
Back to the migration: whilst carefully planned and delivered over several months (the next one will be quicker, he insists), it was not without hiccups. Early performance degradation, for example, was traced back to Azure infrastructure: “We had to upsize and almost over-compensate for the migration,” he says. “The indexing was becoming memory-intensive.
“We added in some NVMe SSDs and that solved that problem.”
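A generic way to spot that kind of index memory pressure – not necessarily the team’s actual tooling – is to compare a collection’s total index size against the configured WiredTiger cache; once indexes outgrow the cache, inserts start triggering disk reads, which is where faster NVMe storage helps. Collection and cluster names below are placeholders.

```python
from pymongo import MongoClient

db = MongoClient("mongodb+srv://cluster0.example.mongodb.net").gaming  # placeholder

# Total index size for the collection, in bytes.
index_bytes = db.command("collStats", "transactions")["totalIndexSize"]

# Configured WiredTiger cache size, in bytes (requires sufficient privileges).
cache_bytes = db.command("serverStatus")["wiredTiger"]["cache"]["maximum bytes configured"]

print(f"indexes use {index_bytes / cache_bytes:.0%} of the cache")
```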
The initial costs were not immediately lower, either, he says: “When we migrated, we basically over-specced the machines to do the migration as quickly as possible. If we'd stayed at those rates, I'd be like, ‘this is not a good business case, right? The ROI is not where we need it to be!’
“I had the guys gradually assess and tune down the clusters,” he explains. “Now we can see performance of the system under regular load; it’s low compute, low memory; we’re predominantly doing a lot of writes…”
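That read/write mix can be confirmed from the server’s own counters before committing to a smaller tier; a small sketch using MongoDB’s serverStatus command (the cluster URI is a placeholder):

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster0.example.mongodb.net")  # placeholder
ops = client.admin.command("serverStatus")["opcounters"]

writes = ops["insert"] + ops["update"] + ops["delete"]
reads = ops["query"] + ops["getmore"]

# A heavily write-skewed workload supports a lower-compute, lower-memory tier.
print(f"writes: {writes}, reads: {reads}, write share: {writes / (writes + reads):.0%}")
```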
The costs are now half of what they were on Cosmos DB for the same workloads, he says. And the migration has also opened up new opportunities: “Some of the challenges I'm looking at are around our agility and the ability to be more flexible in our deployment. How can we go to market quicker? How can we deliver change more effectively?
“The next step is like, ‘Okay, so we've got these great tools! Now how do we start to drive more value from the data?’ We already have a data strategy around everything from how we can protect riskier players, to personalisation of content,” he muses. “We've got this wealth of data around our end customers, and we probably don't do enough with it.”
“The other bit I am keen to look at is how we can make it easier for our end operator, so if you're a marketeer, to basically communicate with the underlying database. A big part of what we provide is tooling that marketers can use to send out campaigns. Currently, that can be cumbersome. My objective is trying to eliminate [the cohorts of data engineers some customers have needed to do this] and just make it easy.
“This is where some of the advances in generative AI and some of the LLM vector stores are coming in,” he says. “My team has embraced this.”
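Atlas does ship vector search of the kind he alludes to, via its $vectorSearch aggregation stage. A minimal sketch of how a marketer’s natural-language brief might be turned into a cohort lookup – the index name, field names and embedding stub are hypothetical:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster0.example.mongodb.net")  # placeholder

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding-model call (e.g. an LLM provider's API)."""
    return [0.0] * 1536  # replace with actual embeddings

# Natural-language campaign brief -> nearest player profiles by embedding similarity.
results = client.gaming.player_profiles.aggregate([
    {"$vectorSearch": {
        "index": "profile_embeddings",   # hypothetical Atlas Vector Search index
        "path": "embedding",             # field holding the stored vectors
        "queryVector": embed("lapsed players who enjoyed lottery games"),
        "numCandidates": 200,
        "limit": 10,
    }}
])
for profile in results:
    print(profile["player_id"])  # 'player_id' is a hypothetical field
```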
“Part of being a CTO is about how do you drive a positive engineering culture,” he reflects. “Now, they're solving difficult technical problems with MongoDB and they’ve really embraced it. We’ve done one migration. Now we have a good run-book. We’re going to do more…”
Delivered in partnership with MongoDB