Introducing self-service NVIDIA Blackwell GPUs on Nebius AI Cloud
NVIDIA HGX B200 instances are now publicly available as self-service AI clusters on Nebius AI Cloud. This means anyone can access NVIDIA Blackwell — the latest generation of NVIDIA’s accelerated computing platform — with just a few clicks and a credit card.
Nebius is eliminating barriers to cutting-edge AI compute, as part of our vision to democratize AI. No waitlists, no long-term commitments, no lengthy procurement cycles or sales conversations — just immediate access through our web console or via an API with pay-as-you-go pricing.
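For API-driven access, provisioning can be scripted end to end. The snippet below is only a minimal sketch in Python using the requests library: the endpoint URL, payload fields and token variable are hypothetical placeholders, not the actual Nebius AI Cloud API, which is described in our documentation.

```python
# Minimal sketch of provisioning a GPU instance over a REST-style API.
# NOTE: the endpoint, payload fields and token variable below are
# hypothetical placeholders, not the actual Nebius AI Cloud API.
import os
import requests

API_URL = "https://api.example.nebius.cloud/v1/instances"  # hypothetical endpoint
TOKEN = os.environ["NEBIUS_API_TOKEN"]                      # hypothetical token variable

payload = {
    "name": "b200-dev-01",
    "platform": "gpu-b200",   # hypothetical platform identifier for HGX B200
    "gpu_count": 8,           # one full HGX B200 baseboard
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Instance created:", resp.json())
```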
Access the latest AI compute with just a credit card
At GTC Paris, we announced one of the first general-availability offerings of NVIDIA GB200 NVL72 to customers in Europe. Today, we are making HGX B200 instances available to AI innovators of all sizes via our self-service portal. Whether you are an individual AI enthusiast, an ML engineer on a large research team or implementing AI in an enterprise context, access to NVIDIA B200 compute is now simpler than ever.
“Our early access to NVIDIA HGX B200 via Nebius AI Cloud has enabled us to explore new heights of inference optimization. Initial results showed promising performance improvements — about 3.5 times faster inference for diffusion models, crucial for meeting the AI industry’s growing demands.”
Kirill Solodskih, CEO and Cofounder of TheStage AI, an inference acceleration platform
AI-tailored clusters by Nebius
We deliver NVIDIA HGX B200 instances as part of Nebius AI Cloud, the full-stack AI infrastructure that we have built from the ground up for intensive and large-scale AI workloads. NVIDIA GPU clusters are interconnected by non-blocking NVIDIA Quantum-2 InfiniBand fabric and delivered with pre-installed GPU and network drivers and orchestration software (Kubernetes or Slurm).
NVIDIA HGX B200 is available on single baseboards with eight GPUs — the same form factor as the previous-generation NVIDIA Hopper SXM baseboards — enabling the HGX B200 to integrate seamlessly into Nebius’s custom-designed server racks.
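Because drivers, NCCL and the orchestration layer come pre-installed, you can go straight to validating the fabric. The sketch below is a minimal NCCL all-reduce check across the eight GPUs of a node (or across several nodes); it assumes PyTorch with CUDA is installed and that the script is launched with torchrun, for example from a Slurm batch job — none of it is Nebius-specific tooling.

```python
# Minimal NCCL all-reduce sanity check for a freshly provisioned GPU node or cluster.
# Assumes PyTorch with CUDA support, launched via:
#   torchrun --nproc_per_node=8 allreduce_check.py
# (or the multi-node equivalent from a Slurm batch script).
import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK for each process.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # NCCL exercises NVLink within a node and InfiniBand between nodes.
    dist.init_process_group(backend="nccl")

    # Each rank contributes a tensor of ones; after the all-reduce every
    # rank should hold world_size in every element.
    x = torch.ones(1024, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    expected = float(dist.get_world_size())
    assert torch.allclose(x, torch.full_like(x, expected)), "all-reduce mismatch"

    if dist.get_rank() == 0:
        print(f"all-reduce OK across {dist.get_world_size()} GPU processes")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```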
No-compromise performance
Whether it’s an on-demand single-host environment or a reserved thousand-GPU installation, all AI clusters at Nebius undergo three-stage acceptance testing.
We do on-site quality control at the contract manufacturer’s facility, inspect nodes before deploying them at our data centers and then run comprehensive cluster burn-in testing before handing them over to customers. This rigorous testing ensures that the performance results for NVIDIA HGX B200 on Nebius meet NVIDIA’s own benchmarks.
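Our acceptance pipeline is internal, but a much smaller version of the same idea is easy to reproduce once you have an instance: time a large BF16 matrix multiplication and compare the sustained throughput against your own baseline. The sketch below assumes PyTorch with CUDA and is an illustrative single-GPU check, not the actual acceptance or burn-in suite.

```python
# Illustrative single-GPU throughput check (not Nebius's acceptance suite).
# Times repeated BF16 matrix multiplications and reports achieved TFLOPS.
import time
import torch

def matmul_tflops(n: int = 8192, iters: int = 50) -> float:
    a = torch.randn(n, n, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(n, n, device="cuda", dtype=torch.bfloat16)

    # Warm-up so kernel selection and caching don't skew the timing.
    for _ in range(5):
        torch.matmul(a, b)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    # One n x n matmul costs roughly 2 * n^3 floating-point operations.
    flops = 2 * (n ** 3) * iters
    return flops / elapsed / 1e12

if __name__ == "__main__":
    print(f"Sustained BF16 matmul throughput: {matmul_tflops():.1f} TFLOPS")
```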
The future of AI is here. Accessible to everyone
Whether you’re an individual researcher or part of a large enterprise team, you get fully tested and optimized GPU clusters, custom-designed infrastructure and no-compromise performance that ensures your AI workloads run exactly as expected.
The future of AI development is here, and it’s available on demand. Access NVIDIA HGX B200.