KubeCon 2026
Find us at KubeCon + CloudNativeCon Europe 2026, 23–26 March.
This year, it takes place in Amsterdam, the hometown of Nebius.
This premier vendor-neutral cloud native event brings together over 12,000 developers, IT professionals, and technology leaders from across the ecosystem to share insights, showcase innovations, and discuss the future of cloud native computing.
Do not miss our session and demo!
Let's meet in person at booth 260!
Join our session and demo

Gleb Evstropov
Production AI Runs on Open Source
23 March, at Cloud Native AI + Kubeflow Day
Behind today’s large-scale AI systems is a layered open-source ecosystem. Kubernetes provides the foundation, Slurm powers high-performance batch scheduling, Ray enables distributed compute, and frameworks like PyTorch and DeepSpeed drive model training.
At Nebius, we build our AI platform directly on top of these technologies — integrating cloud-native orchestration with HPC scheduling to support large-scale training and inference workloads. We’ve also contributed back to the ecosystem by open-sourcing Soperator, a project that bridges Slurm and Kubernetes to combine topology-aware scheduling with cloud-native elasticity and automation.
In this short talk, we’ll walk through how these open-source components fit together in real production environments and why building on open foundations enables better scalability, portability, and operational transparency for teams running AI at scale.

Boris Popov
Soperator Live: Production AI Without the Pain
24 March, 10:50, at our booth 260
As high-performance computing (HPC) workloads evolve toward cloud-native environments, the convergence of Slurm and Kubernetes unlocks new levels of elasticity, automation, and operational efficiency. This demo explores how running Slurm on top of Kubernetes combines the mature workload scheduling capabilities of HPC with the dynamic infrastructure management of modern container orchestration.
We will demonstrate how Soperator enhances Slurm clusters with built-in self-healing, automated scaling of compute nodes, and declarative Infrastructure-as-Code (IaC) workflows. The session will cover deep observability using cloud-native monitoring stacks, automated node lifecycle management, storage integration patterns for high-performance and distributed workloads, and policy-driven resource optimization.
You will gain practical insights into deploying a resilient, scalable, and fully automated HPC platform that bridges traditional batch scheduling with cloud-native paradigms — enabling large-scale compute workloads in hybrid and cloud environments.
RAI Amsterdam
Try the Nebius AI Cloud console today




