H2O LLM Studio

Updated December 4, 2023

H2O LLM Studio is an open-source, no-code graphical user interface (GUI) that allows natural language processing specialists to fine-tune state-of-the-art large language models (LLMs). It makes customizing the tuning process more flexible and provides capabilities to try out the tuned models.

While easy to use for users at any level of technical competence, H2O LLM Studio is a powerful tool that supports a wide range of hyperparameters and incorporates cutting-edge techniques like low-rank adaptation (LoRA) and 8-bit model training with low memory usage. After fine-tuning a model, you can instantly export and share it.

This image is based on Ubuntu and is compatible with the NVIDIA® GPUs available in Nebius Israel, enabling acceleration of machine learning and other compute-intensive applications.

Deployment instructions
  1. Create an SSH key pair.

  2. Click the button in this card to go to VM creation. The image will be automatically selected under Image/boot disk selection.

  3. Under Network settings, enable a public IP address for the VM (Public IP: Auto for a random address or List if you have a reserved static address).

  4. Under Access, paste the public key from the pair into the SSH key field.

  5. Create the VM.

  6. Connect to the VM via SSH using local forwarding for TCP port 10101. For example:

    ssh -i <path_to_private_SSH_key> -L 10101:localhost:10101 <username>@<VM's_public_IP_address>

    The ufw firewall in this product allows incoming traffic only on TCP port 22 (SSH), which is why local port forwarding is required to reach the web interface.

  7. To access the user interface, go to http://localhost:10101 in your web browser.
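Steps 1 and 6 above can be sketched as a pair of shell commands. The key path, username, and IP address below are placeholders, not values defined by this product; substitute your own.

```shell
# Step 1: create an SSH key pair (placeholder path; choose your own).
ssh-keygen -t ed25519 -f "$HOME/.ssh/llmstudio_key" -N "" -q

# Step 6: connect with local forwarding for TCP port 10101.
# USER_NAME and VM_IP are placeholders; substitute your VM's actual values.
USER_NAME="ubuntu"
VM_IP="203.0.113.10"
SSH_CMD="ssh -i $HOME/.ssh/llmstudio_key -L 10101:localhost:10101 ${USER_NAME}@${VM_IP}"
echo "${SSH_CMD}"   # run this command to open the tunnel
```

With the tunnel open, requests to http://localhost:10101 on your workstation are forwarded through the SSH connection to port 10101 on the VM.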

H2O LLM Studio is started as a Docker container, as described in its README. The container’s port 10101 is published to the same port on your VM.

The directories /usr/local/h2o/data/ and /usr/local/h2o/output/ are mounted to the container as volumes, meaning that data used and created by H2O LLM Studio is persistent between VM restarts and shutdowns.
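The container launch described above can be sketched roughly as follows. The image name and container-side mount paths here are assumptions for illustration, not the exact command this product runs; the authoritative invocation is in the H2O LLM Studio README.

```shell
# Rough sketch of the container launch (illustrative, not the literal product command):
# port 10101 is published to the host, and the data/output directories are mounted as volumes.
DOCKER_CMD="docker run -d --gpus all -p 10101:10101 -v /usr/local/h2o/data:/workspace/data -v /usr/local/h2o/output:/workspace/output h2oai/h2o-llmstudio"
echo "${DOCKER_CMD}"
```

Because the data and output directories live on the VM's disk rather than inside the container's writable layer, they survive container restarts as well as VM restarts and shutdowns.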

Billing type
Virtual Machine
Machine Learning & AI
Use cases
  • Fine-tuning large language models with a high degree of customization, using recent techniques.
  • Running experiments on models, and monitoring and evaluating them.
  • Importing and merging datasets for models.
Technical support

Nebius Israel does not provide technical support for the product. If you have any issues, please refer to the developer’s information resources.

Product IDs
Product composition
Ubuntu 22.04 LTS
NVIDIA CUDA Toolkit 12.1.1
NVIDIA Container Toolkit 1.13.3-1
NVIDIA Data Center Driver 535.54.03
By using this product you agree to the Nebius Marketplace Terms of Service and the terms and conditions of the following software: H2O.ai, Docker, Apache 2.0, NVIDIA EULA, Ubuntu.