“Fighting climate change is the greatest challenge of the 21st century”, said Enpal founder and CEO Mario Kohle, when the German clean energy startup landed €215 million in Series D funding in 2023: “We want to help tackle this global issue by putting solar panels on every roof, a battery into every home, and an electric vehicle with a charger in front of every door.”
Enpal has made huge strides since that round (led by TPG Rise Climate among other investors), with over 80,000 solar panel arrays, 4,000+ heat pumps and thousands of electric vehicle (EV) chargers now live across Germany – and the startup is also eyeing broader European expansion.
Enpal’s “Enpal One” box, meanwhile, combines power storage, an inverter and a wallbox, and provides energy planning capabilities. All customers also get a mobile application that lets them check how their generation and storage assets are performing, or remotely start charging their EV at an optimal time, among other rapidly evolving capabilities.
Enpal’s expansion means that it has a growing number of networked “Internet-of-Things” assets in the field. Photovoltaic panels, heat pumps, and EVs alike generate a steady stream of critical data (from serial numbers to power usage and electricity generation patterns) that keeps customers informed and also lets Enpal spot nascent hardware issues.
As a result, Enpal’s IT team needs to be highly switched on and mindful of “future-proofing” its technology environment as these data flows continue to expand and grow richer, more diverse and more complex.
Future-proofing for data expansion
The company’s Chief Architect Nils Lappe says the architecture and data plumbing used to handle this have recently evolved.
The company recently adopted MongoDB Atlas – a multi-cloud developer data platform – to handle time-series data coming in from these devices and act, as Lappe puts it, as a “hot storage” layer.
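The article does not describe Enpal’s actual schema, but for readers unfamiliar with the pattern, a time-series collection for device telemetry in MongoDB Atlas might be declared along the following lines. All database, collection and field names here are illustrative assumptions, not Enpal’s.

```python
# Illustrative sketch only: database, collection and field names are assumptions.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
db = client["energy"]

# A time-series collection buckets measurements by time and device metadata,
# which is what enables the aggressive compression discussed below.
db.create_collection(
    "device_readings",
    timeseries={
        "timeField": "ts",         # timestamp of each measurement
        "metaField": "device",     # per-device metadata (serial number, type, ...)
        "granularity": "minutes",  # hint for how densely readings arrive
    },
)

db.device_readings.insert_one({
    "ts": datetime.now(timezone.utc),
    "device": {"serial": "PV-0001", "type": "inverter"},
    "power_w": 3120,
    "energy_wh": 52,
})
```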
“We project that we can handle up to 100,000 devices with a single M30 cluster at something like €6,600 a year; for that much data and performance that’s pretty hilarious!” he grins, clearly pleased.
See also: An industrial "love language"? It starts with good communication
Sitting down to chat with The Stack at the MongoDB.local Berlin event, he emphasises how constructive the data platform provider has been in supporting his team’s modernisation efforts: “We were on the lookout for time series databases,” he says, explaining that Enpal reviewed multiple database providers and spoke with several of their consultancy arms.
“The MongoDB consultancy was the best [by] quite a distance,” says Lappe.
“We had some very weird technical questions – because storage is one of the cost-drivers that we see in this whole thing, because there's just so much data! We asked weird questions like:
- ‘How would we expect compression to work?’
- ‘How does the compression algorithm [work]?’
- ‘How can we project how much data we can save by compressing our schemas?’ and 'what should an [optimal] schema look like?’
- Or simply, ‘If we do this, how big would the index be, and does it fit into memory… because that needs to be fast!’
"We got good answers to all of that,, which actually impressed us a lot."
Processing 200+ data points
This data was previously stored less efficiently in blob storage. Enpal explored and tested InfluxDB, ScyllaDB, and TimescaleDB before settling on MongoDB Atlas for its ease of use, performance and flexibility, as well as cost. (Shifting to MongoDB is set to save Enpal almost 60%, he says.)
MongoDB's aggregation pipelines (a way of structuring data processing) streamline data querying, eliminating the complexities associated with managing and joining data across multiple tables, says Lappe; something that is particularly beneficial for Enpal, as it processes 200+ data points.
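The article doesn’t show Enpal’s actual pipelines, but a fleet-level roll-up over the illustrative collection above could look something like this: a $match on a time window followed by a $group per device type, executed server-side in a single request rather than joined across tables (field names remain assumptions):

```python
# Illustrative aggregation pipeline: average power per device type over the
# last 24 hours. Field names are assumptions carried over from the earlier sketch.
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

db = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")["energy"]
since = datetime.now(timezone.utc) - timedelta(hours=24)

pipeline = [
    {"$match": {"ts": {"$gte": since}}},
    {"$group": {
        "_id": "$device.type",
        "avg_power_w": {"$avg": "$power_w"},
        "readings": {"$sum": 1},
    }},
    {"$sort": {"avg_power_w": -1}},
]

for row in db.device_readings.aggregate(pipeline):
    print(row["_id"], round(row["avg_power_w"], 1), row["readings"])
```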
It can also run across any cloud – handy given Enpal’s heavy Azure footprint – and with the company, like most German firms, deeply mindful of data protection, a number of Atlas’s features shine here, he says.
"Sharding is very easy with MongoDB"
“Sharding in general is a pain. It's very easy with MongoDB,” says the chief architect. “And MongoDB is always very open to do design reviews, so if you have questions or if you are in doubt that your schema is fine, you can always hook them in, basically explain your use case, show the schema and get something out of it, like an improved schema…”
MongoDB's sharding capability also simplifies compliance by allowing Enpal to segregate and host data based on geographic location.
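In Atlas this is typically configured through Global Clusters; conceptually, zone (tag-aware) sharding ties a region field in the shard key to shards pinned to a geography. The sketch below is purely illustrative: the shard, zone, database and field names are made up, and a plain “devices” collection is used to keep the example simple.

```python
# Conceptual sketch of zone sharding for data residency; all names are made up.
# In Atlas this is normally configured via Global Clusters rather than raw
# admin commands.
from bson import MaxKey, MinKey
from pymongo import MongoClient

admin = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net").admin

admin.command("enableSharding", "energy")
admin.command("shardCollection", "energy.devices",
              key={"region": 1, "serial": 1})

# Pin German customer records to shards hosted in an EU region.
admin.command("addShardToZone", "shard-eu-01", zone="EU")
admin.command("updateZoneKeyRange", "energy.devices",
              min={"region": "DE", "serial": MinKey()},
              max={"region": "DE", "serial": MaxKey()},
              zone="EU")
```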
Enpal’s migration to this new approach is ongoing, he says, driven by its success: “Azure storage account [can only handle] so many connections per second, and we are about to hit that in the current tier…”
See also: How SEGA’s Felix Baker delivered a data transformation
Summing up, he adds: “If you want to do fleet aggregations, that's hard to do with blob files. We are trying to move in a direction where we get more stream processing to happen and also enable some form of permanent data lake where we basically capture all of these streams.
“We are using Azure Event Hub for streaming; that offers a very convenient event capture option to persist raw and transformed data.
“We then stream our data to hot storage in MongoDB, where we can access it very cheaply, and serve customers through this hot storage. The data lake we can access on a less regular basis to serve archive data.”
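As a rough sketch of that hot path, under the assumption of a Python consumer (connection strings, names and the payload shape below are placeholders, not Enpal’s implementation), events can be read from Event Hubs and written straight into the time-series collection:

```python
# Rough sketch of the hot path: consume telemetry events from Azure Event Hubs
# and land them in the MongoDB "hot storage" collection. Connection strings,
# names and payload shape are placeholders, not Enpal's implementation.
import json
from datetime import datetime

from azure.eventhub import EventHubConsumerClient
from pymongo import MongoClient

readings = MongoClient(
    "mongodb+srv://<user>:<password>@<cluster>.mongodb.net"
)["energy"]["device_readings"]

consumer = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>",
    consumer_group="$Default",
    eventhub_name="device-telemetry",
)

def on_event(partition_context, event):
    payload = json.loads(event.body_as_str())
    readings.insert_one({
        "ts": datetime.fromisoformat(payload["timestamp"]),
        "device": {"serial": payload["serial"], "type": payload["type"]},
        "power_w": payload.get("power_w"),
    })
    partition_context.update_checkpoint(event)  # mark the event as processed

with consumer:
    consumer.receive(on_event=on_event, starting_position="-1")  # "-1" = from earliest
```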
His team is busy enabling Enpal’s expansion, but with the right partners on board no challenge is insurmountable. As he adds, such growth is a great problem to have – keeping the planet cool has never been hotter.
Delivered in partnership with MongoDB