Compute
Get AI-optimized clusters for training and inference based on the latest NVIDIA GPUs.
Flexible capacity planning
Get access to the latest NVIDIA GPU platforms or CPU-only servers, and balance reserved and on-demand pricing models to match your needs.
AI performance without penalty
Receive bare-metal-level performance from dedicated hosts — we do not virtualize or share GPUs and network cards.
InfiniBand-powered AI clusters
Create multi-host clusters for AI workloads with a non-blocking NVIDIA Quantum InfiniBand fabric. It delivers 3.2 Tbit/s of throughput per 8-GPU host and enables direct GPU-to-GPU communication.
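The fabric figure above is easy to sanity-check: 3.2 Tbit/s of non-blocking bandwidth shared across an 8-GPU host works out to 400 Gbit/s per GPU. A minimal sketch of that arithmetic:

```python
# Arithmetic behind the quoted InfiniBand figure: 3.2 Tbit/s of
# non-blocking fabric per 8-GPU host implies 400 Gbit/s per GPU.
host_fabric_tbit_s = 3.2   # per 8-GPU host, as stated above
gpus_per_host = 8

per_gpu_gbit_s = host_fabric_tbit_s * 1000 / gpus_per_host
print(f"{per_gpu_gbit_s:.0f} Gbit/s per GPU")  # 400 Gbit/s per GPU
```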
AI-ready operating system
Save time when creating instances or configuring a cluster for AI workloads: the AI/ML-ready image ships with pre-installed GPU and network drivers, so you can start a GPU-accelerated environment quickly.
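A quick way to confirm an image is AI-ready is to check that the usual tooling is on the PATH. This is an illustrative sketch, not part of the image itself — the tool names are the standard NVIDIA and InfiniBand utilities, and which of them are present depends on the image build:

```python
# Illustrative sanity check for an AI/ML-ready image: verify that the
# standard NVIDIA and InfiniBand command-line tools are installed.
import shutil

def check_tool(label: str, command: str) -> str:
    """Report whether a command-line tool is available on this host."""
    status = "ok" if shutil.which(command) else "missing"
    return f"{label}: {status}"

for label, cmd in [
    ("GPU driver (nvidia-smi)", "nvidia-smi"),
    ("InfiniBand tools (ibstat)", "ibstat"),
    ("CUDA compiler (nvcc)", "nvcc"),
]:
    print(check_tool(label, cmd))
```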
Network storage volumes
Reduce cluster recovery time by using network disks mounted to every virtual instance. This provides cloud-native elasticity and a quick VM restart when a failure occurs.
Integrated monitoring
Receive detailed information about your cluster and virtual machine performance by using our integrated Monitoring service. Our dashboards display AI-specific metrics alongside general system performance data.
GPU host configurations
NVIDIA GB200 NVL72
- 72x GB200 GPU 384GB
- 36x Grace CPU with 72 Arm® Neoverse™ V2 cores
- Up to 17 TB LPDDR5X
- 28.8 Tbit/s InfiniBand
- Ubuntu 24.04 LTS for NVIDIA® GPUs (CUDA® 12)
NVIDIA B200
- 1x or 8x B200 GPU 180GB SXM
- 20x or 160x vCPU Intel Emerald Rapids
- 224 or 1792 GB DDR5
- 3.2 Tbit/s InfiniBand
- Ubuntu 22.04 LTS for NVIDIA® GPUs (CUDA® 12)
NVIDIA H200
- 1x or 8x H200 GPU 141GB SXM
- 16x or 128x vCPU Intel Sapphire Rapids
- 200 or 1600 GB DDR5
- 3.2 Tbit/s InfiniBand
- Ubuntu 22.04 LTS for NVIDIA® GPUs (CUDA® 12)
NVIDIA H100
- 1x or 8x H100 GPU 80GB SXM
- 16x or 128x vCPU Intel Sapphire Rapids
- 200 or 1600 GB DDR5
- 3.2 Tbit/s InfiniBand
- Ubuntu 22.04 LTS for NVIDIA® GPUs (CUDA® 12)
NVIDIA L40S (Intel)
- 1x L40S GPU 48GB PCIe
- 8x or 40x vCPU Intel Xeon Gold
- 32 or 160 GB DDR5
- Ubuntu 22.04 LTS for NVIDIA® GPUs (CUDA® 12)
NVIDIA L40S (AMD)
- 1x L40S GPU 48GB PCIe
- 16x or 192x vCPU AMD EPYC
- 96 or 1152 GB DDR5
- Ubuntu 22.04 LTS for NVIDIA® GPUs (CUDA® 12)
CPU host configurations
Intel
- 2x or 48x vCPU Intel Xeon Gold 6338
- 8 or 192 GB DDR5
- Ubuntu 22.04 LTS
AMD
- 4x or 128x vCPU AMD EPYC 9654
- 16 or 512 GB DDR5
- Ubuntu 22.04 LTS
Try self-service console
Up to 32 NVIDIA GPUs are available immediately via the web console
Block network storage
Choose one of three network disk options, which differ in performance, reliability, and pricing:
- SSDs with no data replication,
- SSDs with erasure coding,
- SSDs with data mirroring.

Observability and monitoring
Track cluster state and detect performance issues early with our monitoring capabilities. We display a wide range of performance metrics, from GPU utilization to InfiniBand network parameters, on the web UI dashboards or as pre-assembled Grafana dashboards.
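For teams wiring these metrics into their own Grafana dashboards, the queries typically look like the following. This is a sketch that assumes the standard NVIDIA dcgm-exporter metric names; actual metric and label names depend on how monitoring is deployed in your cluster:

```promql
# Mean GPU utilization per host (standard dcgm-exporter metric name)
avg by (instance) (DCGM_FI_DEV_GPU_UTIL)

# Peak framebuffer memory in use, per GPU
max by (gpu) (DCGM_FI_DEV_FB_USED)
```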

Getting started
Create and manage GPU clusters on the cloud platform on your own, or contact us to learn more about working with one of our experts.
