Nebius November digest: A new $3B agreement with Meta, data center expansion and more

November at Nebius brought yet another multi-billion-dollar hyperscaler deal, further expansion of our data center footprint, leading new benchmark results and deeper platform ecosystem integrations.

Expanding at scale: Highlights of our Q3 results

  • We announced an agreement to deliver AI infrastructure to Meta, valued at ~$3B over five years. We plan to deploy the required capacity over the next three months. Together with the Microsoft deal, this agreement will help us evolve our AI Cloud in 2026.

  • We now expect to have over 2.5 GW of contracted power by the end of 2026, including 800 MW to 1 GW of connected power. This will help our customers grow their compute and workloads globally, at even greater scale.

MLPerf® Training v5.1: Leading results on NVIDIA Blackwell and Blackwell Ultra systems

We’re proud to share our results from the MLPerf® Training v5.1 benchmark, where Nebius showed strong performance across several configurations of the latest NVIDIA Blackwell systems. This round continues our commitment to transparency and collaboration with the MLCommons® community as we work to uphold the highest quality standards for training and fine-tuning next-generation GenAI models.

Ecosystem news

  • Nebius and Anyscale have partnered on a tighter platform integration that lets teams deploy and scale Ray-based workloads more easily, quickly and cost-efficiently. Read the blog for a deeper look at the technical details behind the integration, and see the sketch after this list for the kind of Ray workload it targets.

  • Below is a short, practical demo from our recent webinar featuring Andrey Cheptsov, Founder and CEO of dstack, an open-source control plane. Andrey shows how to set up distributed training with dstack on Nebius AI Cloud.
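As a quick illustration of the kind of Ray-based workload the Anyscale integration is meant to make easier to run, here is a minimal, self-contained Ray sketch. The task body and cluster setup are placeholders of our own, not part of the integration itself.

```python
# Minimal Ray sketch: fan a trivial task out across a cluster and gather results.
# Everything here is illustrative; a real workload would run inference, training
# or data-processing steps instead of the placeholder function below.
import ray

# Connects to an existing cluster if one is configured (e.g. via RAY_ADDRESS),
# otherwise starts a local Ray instance for experimentation.
ray.init()

@ray.remote
def score(batch_id: int) -> int:
    # Placeholder work item; a real task would process a data batch here.
    return batch_id * batch_id

# Schedule eight tasks across available workers and collect their results.
futures = [score.remote(i) for i in range(8)]
print(ray.get(futures))
```

This pattern of remote functions plus ray.get is the basic building block that larger Ray workloads build on.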

Video insights from key conferences

Nebius was featured prominently at several major events in November. Explore our AI Cloud demo from KubeCon + CloudNativeCon, TechArena’s Data Insights from SC25, and our live Pitch Off! at Slush.

Tech docs updates

  • We’ve added several cloud regions recently — you can explore the characteristics of all the regions in the updated overview.

  • Our Status Board is now structured by region. This enhances transparency during service outages and provides a clearer view of Nebius AI Cloud service availability.

  • Support experience gets clearer — we updated ticket topics and their priorities. We also added guidance on how a user without a tenant can create a support ticket, making the process more accessible.

  • IAM improves project management — learn how to delete a project and review the new Resource hierarchy concept that explains how organizations, tenants, projects and resources fit together.

  • Soperator adds fine-grained control — a new guide explains how to manage file access for datasets mounted into workloads.

  • Security foundations are laid out — the Security section focuses on encryption basics across the platform.

  • gRPC API gets hands-on — a new guide shows how to invoke a Nebius AI Cloud API method using grpcurl, helping you experiment with our gRPC interface quickly. A minimal sketch of such a call follows this list.

  • Object Storage and Terraform connect — there’s now a step-by-step guide on storing a Terraform state in a bucket. You can find it both in the Object Storage documentation and in the Terraform provider documentation.
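To give a feel for the grpcurl guide mentioned above, here is a minimal Python sketch that shells out to grpcurl to call a gRPC method. The endpoint, method name, request field and token variable are placeholders of our own, not values from the documentation; substitute the ones from the guide.

```python
# Minimal sketch: invoking a gRPC API method by shelling out to grpcurl.
# Requires grpcurl to be installed and available on PATH.
import os
import subprocess

ENDPOINT = "api.example.nebius.cloud:443"      # placeholder endpoint, see the guide
METHOD = "nebius.example.v1.Service/List"      # placeholder fully qualified method name
token = os.environ["NEBIUS_IAM_TOKEN"]         # placeholder env var holding an IAM token

result = subprocess.run(
    [
        "grpcurl",
        "-H", f"Authorization: Bearer {token}",  # auth passed as gRPC metadata
        "-d", '{"page_size": 10}',               # JSON-encoded request body (placeholder field)
        ENDPOINT,
        METHOD,
    ],
    capture_output=True,
    text=True,
    check=True,  # raise if grpcurl exits with a non-zero status
)
print(result.stdout)  # grpcurl prints the JSON-encoded response
```

grpcurl also supports list and describe against servers with reflection enabled, which is a handy way to discover available services before calling them.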

Explore Nebius AI Cloud

Explore Nebius Token Factory
