
Porsche Formula E team thinks small and local to slow down AI energy guzzling

Gen AI doesn't supplant old school analytics


AI has the potential to solve massive problems. But it takes a hell of a lot of energy to do so, with an NVIDIA GPU consuming energy at a similar rate to an average household. An American one at that.

This is a challenge for any organization. But when your raison d'être is demonstrating the potential of new technologies to contribute to sustainability (itself a massive problem), power-hungry AI presents a major dilemma.

The answer, suggests Friedemann Kurz, Head of IT at Porsche Motorsport, is to think locally. In all respects.

Porsche Motorsport is one of the leading teams in Formula E, the non-carbonated counterpart to Formula 1. As he explains, "The whole racing series is funded to showcase that racing can be done in a sustainable way."

The series uses a standardized chassis, battery, and other parts, so the winning margin comes down to how each team ekes out the energy available. And, of course, the driver's own skill. The series as a whole has a commitment to reducing emissions, including those arising from freight, travel, and compute time.

The firm relies on Cato Networks for its SASE security infrastructure, which has massively reduced the amount of kit it ships from track to track. It is also critical in protecting the team's valuable IP.

But planning race strategies and working out how to eke out battery energy is heavily analytics-based. That in itself burns energy. And Gen AI promises to be even more energy intensive.

Friedemann Kurz told The Stack during a virtual roundtable that the team's analytics have historically been based on data-driven techniques.

These include "machine learning mechanisms, with reinforcement learning to identify patterns, with different Monte Carlo approaches to forecasting situations, to making probabilities of what's potentially going to happen."
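To illustrate the kind of Monte Carlo forecasting Kurz describes, here is a minimal sketch, not Porsche's actual code: it repeatedly simulates the remaining laps of a race with randomly varying per-lap energy consumption, then estimates the probability that the remaining battery energy lasts to the flag. All numbers and parameter names are illustrative assumptions.

```python
import random

def finish_probability(energy_kwh, laps_left, mean_per_lap, sd_per_lap, runs=10_000):
    """Estimate the probability that the remaining battery energy covers
    the remaining laps, when per-lap consumption varies randomly
    (traffic, attack mode, driving style). Purely illustrative."""
    finishes = 0
    for _ in range(runs):
        remaining = energy_kwh
        for _ in range(laps_left):
            # Draw this lap's consumption from a normal distribution
            remaining -= random.gauss(mean_per_lap, sd_per_lap)
        if remaining >= 0:
            finishes += 1
    return finishes / runs

# Hypothetical scenario: 12 kWh left, 10 laps to go, ~1.1 kWh per lap
p = finish_probability(12.0, 10, mean_per_lap=1.1, sd_per_lap=0.15)
```

A real strategy model would be far richer (attack-mode windows, regeneration, rival behavior), but the principle is the same: run many randomized simulations and read probabilities off the outcomes.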

Nevertheless, the team is examining some potential use cases around Gen AI, he said. "There is some information in language that's interesting for us. So of course, we try to extract it, but that does not require a fully self-built solution."

"What we are more interested in is a specialized language model on a certain problem area with a certain use case," he said. These include identifying information "that is, for example, in the spoken word and then directly extracted and applied to a certain domain of optimization problem."

That sounds like the sort of natural language processing associated with LLMs. But he said this could be done in a very efficient way. "We don't do big trainings. We don't need to train big, large language models. That, I think, is where the energy is going, and we are not doing such kind of things."

Furthermore, he says, the team doesn't need to turn to the cloud. Rather, the answer is to have "small local deployments on optimized machines and then have a very efficient and performant way to work on smaller and limited problems."

The team is now in a position to do "efficient local deployments of small models, which is, from just the IT perspective, a bit of a game changer, because then it's small, it's efficient, it's good to handle, and it's capable of solving specific problems in a reliable way."

The key is data quality, Kurz said. In Porsche's case, that means the data it captures from its vehicles, its drivers, and the tracks. "That's the benefit. We have a higher level of quality and comparable data, and that makes it easier to use that data to train models, for sure."

And it means the team can place more trust in the answers or solutions it gets. This is in contrast to large language models, he suggested. "They will answer every question, but you're never sure whether you can rely on it. And that's the complete opposite of what we want."
