Managed Soperator: your quick access to Slurm training

Join us for a webinar introducing Managed Soperator, Nebius AI Cloud's fully managed Slurm-on-Kubernetes solution that streamlines the deployment of AI training infrastructure.

Learn how to provision a Slurm training cluster with NVIDIA GPUs and pre-installed libraries and drivers in just minutes, eliminating manual configuration and lengthy setup processes.

What you will learn

One-click AI training clusters: how to deploy powerful Slurm-based training environments instantly without DevOps expertise or manual configuration headaches.

Cloud-native Slurm architecture: understanding Soperator's Kubernetes operator technology, shared root filesystem capabilities and proven scalability for multi-GPU training up to thousands of GPUs.

Managed service advantages: leveraging integrated monitoring, automated security updates, enterprise-grade cloud platform features and advanced IAM without operational overhead.

Getting started & scaling options: step-by-step guidance on setting up your first cluster, scaling from 32 GPUs to enterprise solutions, and accessing professional support when needed.
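To give a flavor of the workflow covered in the webinar: a Soperator-provisioned cluster accepts standard Slurm batch scripts, so submitting a multi-GPU training job looks the same as on any Slurm installation. The script below is a minimal, hypothetical sketch — the GPU count, time limit, and `train.py` entry point are illustrative assumptions, not Soperator defaults:

```shell
#!/bin/bash
# Minimal multi-GPU training job sketch for a Slurm cluster.
# GPU count, time limit and script name are illustrative assumptions.
#SBATCH --job-name=train-demo
#SBATCH --nodes=1
#SBATCH --gpus-per-node=8        # request 8 NVIDIA GPUs on one node
#SBATCH --time=04:00:00          # wall-clock limit for the job
#SBATCH --output=train-%j.log    # %j expands to the Slurm job ID

# Launch the (assumed) training script across the allocated GPUs;
# torchrun is one common launcher, used here only as an example.
srun torchrun --nproc_per_node=8 train.py
```

You would submit this with `sbatch` and monitor it with `squeue` — both standard Slurm commands that work unchanged on a Soperator-managed cluster.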

Who should attend

This webinar is ideal for ML researchers, data scientists, ML developers and technical teams who want to accelerate their training workflows without infrastructure complexity.

Our hosts

Evgeny Arhipov

Head of Scheduler Services

René Schönfelder

Solutions Architect

Try Nebius AI Cloud console today

Get immediate access to NVIDIA® GPUs, along with CPU resources, storage and additional services through our user-friendly self-service console.