Innovative Nebius data centers and hardware

Nebius’ data centers and hardware reflect our dedication to green energy. With servers and racks designed in-house, as well as modern solutions for entire facilities, we continuously scale our GPU-equipped fleet.

Our own data center in Finland

We filmed this video 60 kilometers from Helsinki, the home of the first Nebius data center. The facility is packed with modern systems for compute, storage, and data processing. Here, we built our supercomputer and a supercluster of thousands of GPUs.

300 MW region in New Jersey

We’ve joined forces with DataOne to ensure that the first phase of the facility goes live in the summer of 2025. At its core is an innovative approach to power generation: we’ll leverage behind-the-meter electricity and advanced energy tech. The DC is expandable to a total of 300 MW. A simple render shows how the site, currently a greenfield, might look when completed.

Colocation in Missouri

Our colocation in the Kansas City data center owned by Patmos is due to go live in March. Patmos recently repurposed the facility, converting the iconic Kansas City Star printing plant into a modern AI DC. The site can be expanded from an initial 5 MW first phase up to 40 MW, or about 35,000 GPUs (NVIDIA Blackwell and Hopper H200) at full capacity.
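
As a rough sanity check on those numbers (an illustration, not Nebius’ sizing methodology), dividing the full 40 MW build-out by the quoted GPU count implies an average budget of a little over 1 kW per GPU, which has to cover the accelerator itself plus its share of host servers, networking, and cooling:

```python
# Back-of-envelope check of the "40 MW is about 35,000 GPUs" figure.
# The per-GPU value below is an implied average, not an official spec:
# it bundles the accelerator with host, networking, and cooling overhead.
site_power_w = 40_000_000   # 40 MW at full build-out
gpu_count = 35_000          # approximate GPU count quoted for full capacity

watts_per_gpu = site_power_w / gpu_count
print(f"Implied power budget: {watts_per_gpu:,.0f} W per GPU")  # ~1,143 W per GPU
```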

Colocation in Iceland

Meet our new colocation in the beautiful Icelandic town of Keflavik, where we will be deploying a 10 MW compute cluster. Thanks to our partner Verne’s efforts, the site runs entirely on Iceland’s 100% renewable hydroelectric and geothermal energy resources.

Physical deployment and software installation are underway, and the new capacity is expected to be fully operational and available by the end of March.

Colocation in France

Nebius’ Paris data center is a colocation based at Equinix’s PA10 campus in the Saint-Denis district of Paris. The facility is among the first in the world to adopt NVIDIA H200 GPUs.

In the photo, you can see an urban farm on the data center’s roof, warmed by waste heat from the servers.

Dedicated server nodes for training and inference, all designed in-house by Nebius

Training AI models is a data-intensive process with significant input and output, so speedy data transfer is key. Our dedicated server node features an SXM5 board with eight NVIDIA Hopper GPUs, delivering massive throughput. Its 3.2 Tbit/s of network bandwidth is achieved with eight NVIDIA Quantum InfiniBand network cards at 400 Gbit/s each.
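
That bandwidth figure is simply the sum across the node’s network cards; here is a quick check (illustrative only):

```python
# Aggregate InfiniBand bandwidth of one training node:
# eight NVIDIA Quantum NICs at 400 Gbit/s each.
nic_count = 8
gbits_per_nic = 400

total_gbits = nic_count * gbits_per_nic
print(f"{total_gbits} Gbit/s = {total_gbits / 1000} Tbit/s")  # 3200 Gbit/s = 3.2 Tbit/s
```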

After training comes inference. Once a model has been loaded into the GPU’s memory, it’s all set for smaller tasks like generating text or images based on prompts. Our inference-ready solution accommodates up to four dual-slot air-cooled PCI Express 5.0 GPUs.
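
For a concrete picture of that workflow, here is a minimal sketch using the open-source Hugging Face transformers library (our choice for illustration, not a statement about any particular Nebius service): the model’s weights are loaded into GPU memory once, and each prompt then triggers a comparatively lightweight generation pass.

```python
# Minimal text-generation sketch, assuming the Hugging Face transformers
# library and one CUDA GPU. The model name is illustrative only.
from transformers import pipeline

# Loading the pipeline places the model weights in GPU memory (device=0).
generator = pipeline("text-generation", model="gpt2", device=0)

# Each prompt now reuses the resident weights for a quick inference pass.
result = generator("Waste heat from GPU servers can be reused to", max_new_tokens=40)
print(result[0]["generated_text"])
```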

Using these solutions, we built a supercomputer

We weren’t aiming to create a supercomputer. Yet our R&D team decided to benchmark a part of the platform that was free of customer workloads at the time, using the benchmark behind the TOP500 list.

ISEG, the resulting supercomputer, now ranks 29th on that list.

Ready to run your workloads on high-end hardware?