October has been so eventful that it’s hard to know where to begin. But the most important news is that you can now take advantage of the newly rebuilt Nebius platform! We also delivered a keynote at TechCrunch Disrupt, reduced prices dramatically, celebrated 1 year since our public launch — and more.
We have developed a new version of the Nebius platform that we believe will serve your needs even better. It has already been tested by our in-house LLM R&D team and a number of clients. In October, we rolled it out to everyone.
Our new platform features a faster storage backend, support for new GPUs and our latest ML services, better observability and a more intuitive UI. With a strong focus on AI needs, it provides enthusiasts and ML practitioners with robust, functional environments for their ambitious initiatives. By the way, we sometimes call it Newbius, the new Nebius, you know.
Our Chief Business Officer Roman Chernin delivered a keynote to the TechCrunch Disrupt audience and the world, sharing what sets us apart in the competitive race of cloud providers in AI. Among other things, Roman announced the opening of our first data center in the US.
To support your first steps in new AI projects, we introduced the Explorer Tier. With this special offer, you can enjoy NVIDIA® H100 Tensor Core SXM GPUs at just $1.50 per hour for your first 1,000 GPU hours each month.
GenAI builders no longer have to choose between quality and affordability, as Nebius AI Studio slashed prices by up to 50%. For Llama 3, Mixtral, Qwen and others, these are among the lowest prices on the market.
We’re tripling capacity at our own data center in Finland to 75 megawatts. The current expansion phase will enable us to place up to 60,000 GPUs at the Mäntsälä location alone.
Captured on video: our Business Development Director Rashid Ivaev gave a talk on how we gather insights into ML engineers’ needs and what drives our current development priorities.
Our recent blog post will guide you through the differences between large and small language models. In this cost-benefit analysis, we focused primarily on machine learning inference.
Integrating an LLM like Llama 3.1 405B, which requires major compute resources, is often painful for GenAI builders. Discover how Nebius AI Studio simplifies the process.
And here we are just a year later, delivering a keynote at TechCrunch Disrupt in San Francisco, announcing our first data center in the US, supporting customers globally, completing the full rewrite of our AI cloud from the ground up, trading on Nasdaq and so on.
There is plenty of hard work ahead, yet the future holds boundless promise, and we are filled with anticipation.