Practical Serverless AI for developers: fine-tuning, batch pipelines, and dev endpoints
Many AI workflows start simple but quickly become hard to manage. Training runs require GPU setup, batch pipelines need orchestration, and exposing models for testing often requires building infrastructure first.
In this hands-on workshop, we’ll show how developers can run common AI workloads using Serverless Jobs and Endpoints on Nebius Cloud, without managing clusters.
Using real examples, you’ll see how serverless compute can simplify everyday ML workflows.
What you’ll learn
- How to run LLM fine-tuning with Axolotl as a Serverless Job
- How to orchestrate batch pipelines with Airflow or Prefect
- How to deploy dev and evaluation endpoints
- How to inspect logs, debug failures, and manage outputs
We’ll walk through three real workloads step by step so you can see how they run in practice.
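To make the batch-pipeline idea concrete, here is a minimal, dependency-free Python sketch of the shape such a pipeline takes; in the workshop the same structure would be expressed as a Prefect flow or an Airflow DAG, and the step names and logic here are purely illustrative, not Nebius or workshop APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(record: str) -> str:
    # Illustrative step: clean a raw input before inference
    return record.strip().lower()

def run_model(record: str) -> str:
    # Illustrative stand-in for a model call that would run
    # inside a Serverless Job in the real pipeline
    return f"label:{len(record)}"

def batch_pipeline(records: list[str]) -> list[str]:
    # Fan out over the batch, preserving input order
    with ThreadPoolExecutor(max_workers=4) as pool:
        cleaned = list(pool.map(preprocess, records))
        return list(pool.map(run_model, cleaned))

if __name__ == "__main__":
    print(batch_pipeline(["  Hello ", "serverless  "]))
```

An orchestrator like Prefect or Airflow adds what this sketch lacks: retries, scheduling, logging, and visibility into each step, which is exactly what the workshop walkthrough covers.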
Who should attend
Built for AI developers and ML engineers who package their workloads as containers and want a simpler way to run training jobs, batch pipelines, and temporary model APIs.
Fill out the form to register and get the recording
Try Nebius AI Cloud console today
Get immediate access to NVIDIA® GPUs, along with CPU resources, storage and additional services through our user-friendly self-service console.