Our greenfield data center in Finland
Our Mäntsälä data center was built from the ground up by our highly experienced team, with efficiency engineered into every detail, from layout to day-to-day operations. It also pioneered one of the world’s first heat recovery solutions for district heating, and the first in the region. When operating at full capacity, the facility achieves a world-class power usage effectiveness (PUE) of 1.13.
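For context, PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a PUE of 1.13 means only 13% of overhead beyond the IT load itself. A minimal sketch of the arithmetic, using illustrative kWh figures rather than measured Mäntsälä values:

```python
# Illustrative PUE arithmetic; the kWh figures are hypothetical.
# PUE = total facility energy / IT equipment energy.
it_energy_kwh = 10_000_000       # servers, network and storage
overhead_kwh = 1_300_000         # cooling, power distribution, lighting, etc.
total_facility_kwh = it_energy_kwh + overhead_kwh

pue = total_facility_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")        # -> PUE = 1.13
```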
The combination of hardware design, data center design and climate responsiveness helps keep the facility efficient across seasons. Our in-house hardware tolerates higher temperatures than many off-the-shelf server architectures, which makes it possible to keep chips performing optimally, including those with high thermal density like the NVIDIA B200, while cooling with filtered outdoor air, an approach known as free cooling.
Unlike conventional free cooling systems, our innovative system eliminates the need for chillers, water loops and refrigerants, reducing energy consumption and climate impact while also lowering system complexity, maintenance requirements and capital expenditures.
While cooler climates maximize its efficiency, the method is not exclusive to Nordic regions: any location where the outside air stays below 40°C for much of the year can support this model. As a result, we can recreate the same design principle across a wide range of environments rather than being locked into one geography.
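As a rough illustration of that site criterion, one could estimate what share of the year a candidate location stays below the 40°C ceiling from hourly temperature data. A minimal sketch; the synthetic climate data and the coverage function are hypothetical, not part of our actual siting process:

```python
import random

# Hypothetical site screen: what fraction of the year is free cooling viable?
FREE_COOLING_CEILING_C = 40.0  # per the design criterion above

def free_cooling_coverage(hourly_temps_c: list[float]) -> float:
    """Return the fraction of hours where outside air is below the ceiling."""
    viable = sum(1 for t in hourly_temps_c if t < FREE_COOLING_CEILING_C)
    return viable / len(hourly_temps_c)

# Example with made-up data: a cool-climate site rarely exceeds the ceiling.
random.seed(0)
year_of_temps = [random.gauss(8.0, 10.0) for _ in range(8760)]  # synthetic
print(f"Free-cooling coverage: {free_cooling_coverage(year_of_temps):.1%}")
```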
Equipped with sensors that monitor the temperature of hardware components and the pressure inside server halls in real time, the system dynamically adjusts air intake, circulation speed and exhaust using automated dampers and fans, avoiding unnecessary power spent on ventilation while keeping components under optimal conditions. In colder months, server heat is partially recirculated to prevent overcooling, while in warmer weather, the system increases the intake of outside air.
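The control behavior described above can be pictured as a simple sense-and-actuate loop. The sketch below is a deliberately simplified illustration with hypothetical setpoints, gains and interfaces, not our actual control software:

```python
from dataclasses import dataclass

# Hypothetical setpoints, for illustration only.
TARGET_INLET_C = 25.0   # desired server inlet air temperature
RECIRC_BELOW_C = 15.0   # below this, mix in exhaust heat to avoid overcooling

@dataclass
class SensorReadings:
    outside_c: float        # outdoor air temperature
    inlet_c: float          # air temperature at server intake
    hall_pressure_pa: float # hall overpressure relative to outside

def control_step(s: SensorReadings) -> dict:
    """One iteration of a simplified free-cooling control loop."""
    if s.outside_c < RECIRC_BELOW_C:
        # Cold weather: recirculate some server exhaust to warm the intake air.
        recirc_damper = min(1.0, (RECIRC_BELOW_C - s.outside_c) / RECIRC_BELOW_C)
    else:
        # Warm weather: close recirculation and draw more outside air instead.
        recirc_damper = 0.0
    # Fan speed rises proportionally as inlet drifts above target (clamped 0..1).
    error = s.inlet_c - TARGET_INLET_C
    fan_speed = max(0.0, min(1.0, 0.5 + 0.1 * error))
    return {"recirc_damper": recirc_damper, "fan_speed": fan_speed}

print(control_step(SensorReadings(outside_c=-5.0, inlet_c=24.0, hall_pressure_pa=15.0)))
```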
Unlike typical free cooling systems that simply release hot air into the environment, ours captures and repurposes it year-round, supplying heat to district heating networks in winter and supporting hot water systems during warmer months. Additionally, with no mechanical cooling required to operate the existing GPU fleet, water use is currently kept to a minimum, limited solely to sanitary purposes.
Looking ahead, we are preparing to introduce liquid cooling capable of dissipating 200 kW of heat per rack, a critical step to meet the requirements of next-generation, higher-density GPUs. While this marks a shift in our approach to cooling, our operational philosophy will remain the same: prioritizing efficiency at every layer.
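To give a sense of scale, the coolant flow needed to carry 200 kW away from a rack follows from Q = ṁ·c_p·ΔT. A back-of-the-envelope sketch, assuming water as the coolant and an illustrative 10°C rise across the rack (actual loop parameters may differ):

```python
# Back-of-the-envelope coolant flow for a 200 kW rack.
# Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
HEAT_LOAD_W = 200_000   # 200 kW per rack, from the design target above
CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0        # assumed supply-to-return rise; illustrative

mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s * 60  # ~1 kg of water per litre
print(f"{mass_flow_kg_s:.2f} kg/s ≈ {volume_flow_l_min:.0f} L/min per rack")
```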
To preserve the benefits of our existing setup, we’ll integrate air cooling into the hybrid system — continuing to cool components where air remains sufficient and repurposing it as a heat sink for liquid loops. In parallel, we’re engineering intelligent water flow management systems to maximize thermal performance, complete with built-in heat recovery to serve the municipality.
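One way to picture the flow management idea: modulate pump flow so return water stays hot enough to be useful for heat recovery while still carrying the full thermal load. A minimal proportional-control sketch; the setpoints, gain and flow limits are hypothetical:

```python
# Simplified flow modulation: hold return water at a temperature useful
# for heat recovery. All setpoints here are hypothetical.
TARGET_RETURN_C = 60.0  # assumed minimum useful temperature for district heating
GAIN = 0.05             # proportional gain, fraction of flow per degree of error

def adjust_flow(current_flow_l_min: float, return_temp_c: float) -> float:
    """If return water runs too cool, slow the flow so it absorbs more heat;
    if it runs too hot, speed the flow up to protect the hardware."""
    error = TARGET_RETURN_C - return_temp_c
    new_flow = current_flow_l_min * (1.0 - GAIN * error)
    return max(50.0, min(400.0, new_flow))  # clamp to a plausible pump range

print(adjust_flow(current_flow_l_min=287.0, return_temp_c=55.0))  # too cool -> reduce flow
```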