News
Kubecon comes to London, and NVIDIA comes to Kubecon
NVIDIA has open-sourced a Kubernetes native GPU scheduler, which it claims will help the cloud native world make more efficient use of its incredibly expensive and incredibly power-hungry hardware.
The announcement was made at Kubecon Europe in London, the Kubernetes and cloud native fest, which this year promises to be dominated by AI.
NVIDIA inherited the KAI Scheduler through its acquisition of the Run:AI orchestration suite at the close of 2024. The scheduler will still be available through the Run:AI platform, but it will also be available under the Apache 2 license.
The GPU giant said the scheduler delivers a range of benefits for users, such as dynamically managing fluctuating GPU demand, curbing GPU hogging, and reducing long wait times for compute access. It should also reduce the complexity of integrating with AI tools and frameworks.
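As a Kubernetes-native scheduler, KAI slots into the standard scheduling machinery, so workloads opt in through their pod spec rather than a proprietary API. A minimal sketch of what that might look like, assuming the scheduler is deployed under the name `kai-scheduler` and arbitrates demand through a queue label (both names are assumptions drawn from the open source project, not from NVIDIA's announcement; check your deployment for exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
  labels:
    # Assumed queue label: queues are how KAI divides fluctuating
    # GPU demand between teams and workloads.
    kai.scheduler/queue: team-a
spec:
  # Route this pod past the default kube-scheduler to KAI.
  schedulerName: kai-scheduler
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.03-py3
      resources:
        limits:
          nvidia.com/gpu: 1
```

Because the opt-in is just a field on the pod, existing clusters can trial KAI alongside the default scheduler without migrating every workload at once.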
More efficient use of GPU capacity is critical whatever the size of your rig. NVIDIA's current state-of-the-art H200 starts at around $35,000, assuming you can find one.
Moreover, NVIDIA's top-end GPUs use as much power as a typical household, so leaving them underutilized is a sustainability nightmare.
NVIDIA has been accused of locking in users to its broader ecosystem with its array of CUDA tools. However, it declares its support for an array of open source projects and entities, including Kubecon’s parent organization, the Cloud Native Computing Foundation.
It declares that while some of its projects may initially be developed in a closed manner, “our intention is always to release them in the open once the hardware or features are publicly released.”
At its recent GTC jamboree, CEO Jensen Huang announced that the company was open sourcing its NVIDIA Dynamo orchestration layer and its cuOpt optimization engine. It also open sourced its GR00T model for robotics and physical AI.