Multi-cloud AI infrastructure in action — with Nebius and dstack
Join us to explore how modern ML teams run large-scale training across multiple clouds — seamlessly and cost-efficiently.
This session shows how dstack orchestrates GPU workloads on Nebius infrastructure to accelerate experimentation, control spend, and stay vendor-neutral.
In this session, you’ll learn:
- Why GPU container orchestration matters, and what lies beyond Kubernetes and Slurm.
- How to deploy your first distributed training job on Nebius in a live walkthrough (see the configuration sketch after this list).
- Cost-optimization strategies for efficient GPU utilization (see the second sketch below).
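To give a taste of what the live walkthrough covers, here is a minimal sketch of a dstack task configuration for a multi-node training run. It assumes a Nebius backend is already configured on your dstack server; the run name, `train.py`, `requirements.txt`, and all resource values are hypothetical placeholders, and the `DSTACK_*` variables are injected by dstack on each node at runtime:

```yaml
type: task
name: train-distrib            # hypothetical run name
nodes: 2                       # number of instances in the cluster
python: "3.11"
commands:
  - pip install -r requirements.txt   # stand-in for your own dependencies
  # Launch one torchrun process per node; dstack sets the DSTACK_* variables
  - torchrun --nnodes=$DSTACK_NODES_NUM --node-rank=$DSTACK_NODE_RANK --nproc-per-node=$DSTACK_GPUS_PER_NODE --master-addr=$DSTACK_MASTER_NODE_IP --master-port=29500 train.py
resources:
  gpu: 80GB:8                  # e.g. 8 GPUs with 80 GB memory each
  shm_size: 16GB               # shared memory for NCCL and data loaders
```

Submitting the run is then a single CLI call such as `dstack apply -f .dstack.yml` (assuming a recent dstack version).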
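On the cost side, dstack exposes several knobs directly in the same configuration file. The sketch below shows the kind of settings involved; values are placeholders and exact key support may vary by dstack version:

```yaml
# Cost-control knobs in a dstack run configuration (placeholder values):
spot_policy: auto    # prefer spot capacity, fall back to on-demand
max_price: 20.0      # skip offers priced above this many $/hour
max_duration: 6h     # auto-stop the run after this long
```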
Who should attend:
ML/AI Engineers, MLOps Engineers, Solution Architects, CTOs, and anyone building or managing AI infrastructure at scale.
Try Nebius AI Cloud console today
Get immediate access to NVIDIA® GPUs, along with CPU resources, storage, and additional services through our user-friendly self-service console.
