Workshop: Running OpenClaw on Nebius

OpenClaw is powerful, but getting it into a usable, always-on state takes more than just spinning it up.

In this workshop, we’ll walk through how to run OpenClaw on Nebius, connect it to Token Factory for fast inference, and extend it with your own models when you need more control.
We’ll focus on the decisions you’ll actually face: OpenClaw or NemoClaw, choosing an open-source model, local vs. cloud hosting, an inference provider vs. a dedicated GPU, how to connect to and chat with your agent, and what changes when you move from a prototype to something you can rely on.

What you’ll learn

  • How to build and run OpenClaw on Nebius
  • How to connect OpenClaw to Token Factory (OpenAI-compatible inference)
  • How to bring in custom models for specific use cases
  • When to use Token Factory vs Serverless vs AI Cloud
  • How to deploy and run your own model workloads
  • Practical patterns for moving from prototype to production
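Because Token Factory exposes an OpenAI-compatible API (second bullet above), any OpenAI-style client can talk to it. The sketch below builds such a chat-completions request using only the Python standard library; the base URL, API key, and model name are placeholders, not real Token Factory values — substitute the ones from your Nebius console.

```python
import json
from urllib import request

# Placeholder values for illustration only — replace with the base URL,
# API key, and model ID shown in your Token Factory console.
BASE_URL = "https://example.tokenfactory.invalid/v1"
API_KEY = "YOUR_API_KEY"

def chat_request(messages, model="example-model"):
    """Build (but do not send) an OpenAI-compatible /chat/completions request."""
    payload = {"model": model, "messages": messages}
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request([{"role": "user", "content": "Hello from OpenClaw"}])
print(req.full_url)
```

To actually send the request you would pass `req` to `urllib.request.urlopen` (or use an OpenAI SDK with `base_url` pointed at the same endpoint); the payload shape is the same either way, which is what makes swapping providers a one-line config change.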

Who this is for
Engineers and teams working with AI models who want a clearer, faster path from experimentation to real workloads, without overcomplicating their infrastructure.

Our speakers

Mikhail Rozhkov

Technical Product Manager

Colin Lowenberg

Developer Advocate

Try Nebius AI Cloud console today

Get immediate access to NVIDIA® GPUs, along with CPU resources, storage, and additional services, through our user-friendly self-service console.