Building a VM image with a set of infrastructure tools using Packer
You can use Compute Cloud to create a VM disk image with a set of additional infrastructure tools using Packer.
Use Packer to build a VM image based on Ubuntu Linux 20.04 LTS with the parameters specified in a configuration file. Add the following tools frequently used with Nebius Israel to the image:
- Nebius Israel CLI 0.91.0 or higher.
- Terraform 1.1.9.
- kubectl 1.23.
- Docker 20.10.16 or higher.
- Git 2.25.1 or higher.
- Helm 3.9.0.
- jq 1.6 or higher.
- tree 1.8.0 or higher.
- gRPCurl 1.8.6.
- Pulumi 3.33.2.
- tmux 3.0a or higher.
Packer creates and runs an auxiliary VM, installs the required software on it, and then builds an image from its boot disk. After that, the auxiliary VM and its boot disk are deleted.
Follow the same steps to create your own image with the necessary software suite.
To build an image and create a VM from it:
- Prepare your cloud.
- Set up a working environment.
- Prepare the image configuration.
- Build the image.
- Create a VM from the image.
If you no longer need the resources you created, delete them.
Prepare your cloud
Sign up for Nebius Israel and create a billing account:
- Go to the management console and log in to Nebius Israel, or create an account if you do not have one yet.
- On the Billing page in the management console, make sure you have a billing account linked and that it has the ACTIVE or TRIAL_ACTIVE status. If you do not have a billing account, create one.
If you have an active billing account, you can go to the cloud page.
Learn more about clouds and folders.
Required paid resources
The cost of building a VM image and creating a VM from it includes:
- Fee for storing built images (see Compute Cloud pricing).
- Fee for VM computing resources (see Compute Cloud pricing).
Set up a working environment
- Install Packer:
  - Download a Packer distribution and install it by following the instructions on the official website.
  - When the download is complete, add the path to the folder with the executable to the PATH variable. To do this, run the following command:

      export PATH=$PATH:<path_to_directory_with_executable_Packer_file>

    Note: Nebius Israel requires Packer 1.5 or higher.
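    You can check the installed version right away to make sure it meets this requirement:

      packer version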
- Configure the Compute Cloud Builder plugin:
  - Create a config.pkr.hcl file with the following contents:

      packer {
        required_plugins {
          yandex = {
            version = ">= 1.1.2"
            source  = "github.com/hashicorp/yandex"
          }
        }
      }

  - Install the plugin:

      packer init <config.pkr.hcl_file_path>

    Result:

      Installed plugin github.com/hashicorp/yandex v1.1.2 in ...
- Get information about available subnets and availability zones. If you do not have any subnets, create one (a minimal sketch follows after this step).

  CLI

  - Run this command:

      yc vpc subnet list

    Result:

      +----------------------+--------------+----------------------+----------------+-------+-----------------+
      | ID                   | NAME         | NETWORK ID           | ROUTE TABLE ID | ZONE  | RANGE           |
      +----------------------+--------------+----------------------+----------------+-------+-----------------+
      | b0c29k6anelk******** | intro2-il1-c | enp45glgitd6******** |                | il1-c | [10.130.0.0/24] |
      | e2ltcj4urgpb******** | intro2-il1-b | enp45glgitd6******** |                | il1-b | [10.129.0.0/24] |
      | e9bn57jvjnbu******** | intro2-il1-a | enp45glgitd6******** |                | il1-a | [10.128.0.0/24] |
      +----------------------+--------------+----------------------+----------------+-------+-----------------+

  - Save the ID of the subnet (the ID column) to host the auxiliary VM used to create the image, and the corresponding availability zone (the ZONE column). You will need these parameters in the next steps.

  API

  Use the list REST API method for the Subnet resource or the SubnetService/List gRPC API call.
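  If the list is empty, create a network and a subnet first. Below is a minimal sketch using the yc CLI; the network name, subnet name, zone, and CIDR range are placeholders, so replace them with values that suit your folder:

    # create a network (placeholder name)
    yc vpc network create --name my-network

    # create a subnet in one of the il1-* availability zones (placeholder name and range)
    yc vpc subnet create \
      --name my-subnet-il1-a \
      --zone il1-a \
      --network-name my-network \
      --range 10.128.0.0/24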
- Specify the values of the variables used when building the image in the command line:

    export YC_FOLDER_ID=$(yc config get folder-id)
    export YC_ZONE="<availability_zone>"
    export YC_SUBNET_ID="<subnet_ID>"
    export YC_TOKEN=$(yc iam create-token)

  Where:
  - YC_FOLDER_ID: ID of the folder to host the auxiliary VM used for creating the image. Provided automatically.
  - YC_ZONE: ID of the availability zone to host the auxiliary VM used for creating the image. Obtained previously.
  - YC_SUBNET_ID: ID of the subnet to host the auxiliary VM used for creating the image. Obtained previously.
  - YC_TOKEN: IAM token required for creating VM images. Provided automatically.
- Generate an SSH key pair. You will need it to create the VM and connect to it. A minimal example follows.
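  If you do not have a key pair yet, a standard ssh-keygen call is enough; the file path and comment below are only examples:

    # creates ~/.ssh/yc-toolbox (private key) and ~/.ssh/yc-toolbox.pub (public key)
    ssh-keygen -t ed25519 -f ~/.ssh/yc-toolbox -C "yc-toolbox"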
Prepare the image configuration
- Create an HCL configuration file, such as yc-toolbox.pkr.hcl.
- In the configuration file, describe the parameters of the image to create:

    # Nebius Israel Toolbox VM Image based on Ubuntu 20.04 LTS
    #
    # Provisioner docs:
    # https://www.packer.io/docs/builders/yandex
    #

    variable "YC_FOLDER_ID" {
      type    = string
      default = env("YC_FOLDER_ID")
    }

    variable "YC_ZONE" {
      type    = string
      default = env("YC_ZONE")
    }

    variable "YC_SUBNET_ID" {
      type    = string
      default = env("YC_SUBNET_ID")
    }

    variable "TF_VER" {
      type    = string
      default = "1.1.9"
    }

    variable "KCTL_VER" {
      type    = string
      default = "1.23.0"
    }

    variable "HELM_VER" {
      type    = string
      default = "3.9.0"
    }

    variable "GRPCURL_VER" {
      type    = string
      default = "1.8.6"
    }

    variable "GOLANG_VER" {
      type    = string
      default = "1.17.2"
    }

    variable "PULUMI_VER" {
      type    = string
      default = "3.33.2"
    }

    source "yandex" "yc-toolbox" {
      folder_id           = "${var.YC_FOLDER_ID}"
      source_image_family = "ubuntu-2004-lts"
      ssh_username        = "ubuntu"
      use_ipv4_nat        = "true"
      image_description   = "Nebius Israel Ubuntu Toolbox image"
      image_family        = "my-images"
      image_name          = "yc-toolbox"
      subnet_id           = "${var.YC_SUBNET_ID}"
      disk_type           = "network-hdd"
      zone                = "${var.YC_ZONE}"
    }

    build {
      sources = ["source.yandex.yc-toolbox"]

      provisioner "shell" {
        inline = [
          # Global Ubuntu things
          "sudo apt-get update",
          "echo 'debconf debconf/frontend select Noninteractive' | sudo debconf-set-selections",
          "sudo apt-get install -y unzip python3-pip python3.8-venv",

          # Nebius Israel CLI tool
          "curl -s -O https://storage.il.nebius.cloud/cli/install.sh",
          "chmod u+x install.sh",
          "sudo ./install.sh -a -i /usr/local/ 2>/dev/null",
          "rm -rf install.sh",
          "sudo sed -i '$ a source /usr/local/completion.bash.inc' /etc/profile",

          # Docker
          "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-keyring.gpg",
          "echo \"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null",
          "sudo apt-get update",
          "sudo apt-get install -y docker-ce containerd.io",
          "sudo usermod -aG docker $USER",
          "sudo chmod 666 /var/run/docker.sock",
          "sudo useradd -m -s /bin/bash -G docker yc-user",

          # Docker Artifacts
          "docker pull hello-world",
          "docker pull -q amazon/aws-cli",
          "docker pull -q golang:${var.GOLANG_VER}",

          # Terraform
          "curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-keyring.gpg",
          "echo \"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main\" | sudo tee /etc/apt/sources.list.d/hashicorp.list > /dev/null",
          "sudo apt-get update",
          "sudo apt-get install -y terraform",

          # kubectl
          "curl -s -LO https://dl.k8s.io/release/v${var.KCTL_VER}/bin/linux/amd64/kubectl",
          "sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl",
          "rm -rf kubectl",

          # Helm
          "curl -sSLO https://get.helm.sh/helm-v${var.HELM_VER}-linux-amd64.tar.gz",
          "tar zxf helm-v${var.HELM_VER}-linux-amd64.tar.gz",
          "sudo install -o root -g root -m 0755 linux-amd64/helm /usr/local/bin/helm",
          "rm -rf helm-v${var.HELM_VER}-linux-amd64.tar.gz",
          "rm -rf linux-amd64",
          # User can add own repo after login like this:
          # helm repo add stable https://charts.helm.sh/stable

          # grpcurl
          "curl -sSLO https://github.com/fullstorydev/grpcurl/releases/download/v${var.GRPCURL_VER}/grpcurl_${var.GRPCURL_VER}_linux_x86_64.tar.gz",
          "tar zxf grpcurl_${var.GRPCURL_VER}_linux_x86_64.tar.gz",
          "sudo install -o root -g root -m 0755 grpcurl /usr/local/bin/grpcurl",
          "rm -rf grpcurl_${var.GRPCURL_VER}_linux_x86_64.tar.gz",
          "rm -rf grpcurl",

          # Pulumi
          "curl -sSLO https://get.pulumi.com/releases/sdk/pulumi-v${var.PULUMI_VER}-linux-x64.tar.gz",
          "tar zxf pulumi-v${var.PULUMI_VER}-linux-x64.tar.gz",
          "sudo cp pulumi/* /usr/local/bin/",
          "rm -rf pulumi-v${var.PULUMI_VER}-linux-x64.tar.gz",
          "rm -rf pulumi",

          # Other packages
          "sudo apt-get install -y git jq tree tmux",

          # Clean
          "rm -rf .sudo_as_admin_successful",

          # Test - Check versions for installed components
          "echo '=== Tests Start ==='",
          "yc version",
          "terraform version",
          "docker version",
          "kubectl version --client=true",
          "helm version",
          "grpcurl --version",
          "git --version",
          "jq --version",
          "tree --version",
          "pulumi version",
          "echo '=== Tests End ==='"
        ]
      }
    }
Build the image
- In the command line, go to the directory with the image configuration file:

    cd <path_to_configuration_file_directory>

- Make sure the image configuration file is correct using this command:

    packer validate yc-toolbox.pkr.hcl

  Where yc-toolbox.pkr.hcl is the configuration file name. If the configuration is correct, you will get this message:

    The configuration is valid.
- Build the image with this command:

    packer build yc-toolbox.pkr.hcl

  Where yc-toolbox.pkr.hcl is the configuration file name.

- Once the image is built, you will see a message confirming it:

    ...
    ==> Builds finished. The artifacts of successful builds are:
    --> yandex.yc-toolbox: A disk image was created: yc-toolbox (id: fd83j475posv********) with family name my-images

  Save the ID of the built image (the id parameter). You will use this ID to create a VM later; a CLI sketch for looking it up follows at the end of this section.
- Check that the built image is present in Nebius Israel.

  CLI

  Run this command:

    yc compute image list

  Result:

    +----------------------+------------+-----------+----------------------+--------+
    | ID                   | NAME       | FAMILY    | PRODUCT IDS          | STATUS |
    +----------------------+------------+-----------+----------------------+--------+
    | fd83j475posv******** | yc-toolbox | my-images | f2ek1vhoppg2******** | READY  |
    +----------------------+------------+-----------+----------------------+--------+

  API

  Use the list REST API method for the Image resource or the ImageService/List gRPC API call.
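Instead of copying the image ID from the build output by hand, you can look it up by the image name set in the configuration; this is an optional convenience and assumes jq is installed locally:

    # store the ID of the yc-toolbox image in a variable for the next section
    export YC_IMAGE_ID=$(yc compute image get yc-toolbox --format json | jq -r .id)
    echo $YC_IMAGE_ID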
Create a VM from the image
- Specify the values of the variables used when creating your VM. To do this, run the following commands:

    export VM_NAME="<VM_name>"
    export YC_IMAGE_ID="<image_ID>"
    export YC_SUBNET_ID="<subnet_ID>"
    export YC_ZONE="<availability_zone>"

  Where:
  - VM_NAME: Name of the new VM.
  - YC_IMAGE_ID: ID of the image used to create the VM. Obtained previously.
  - YC_SUBNET_ID: ID of the subnet to host the VM. Obtained previously.
  - YC_ZONE: Availability zone for the VM. Obtained previously.
- Create your VM from the built image.

  CLI

  Run this command:

    yc compute instance create \
      --name $VM_NAME \
      --hostname $VM_NAME \
      --zone=$YC_ZONE \
      --create-boot-disk size=20GB,image-id=$YC_IMAGE_ID \
      --cores=2 \
      --memory=8G \
      --core-fraction=100 \
      --network-interface subnet-id=$YC_SUBNET_ID,ipv4-address=auto,nat-ip-version=ipv4 \
      --ssh-key <path_to_public_portion_of_SSH_key>

  Where:
  - name: Name of the new VM.
  - hostname: Host name of the VM.
  - zone: Availability zone.
  - create-boot-disk: Boot disk parameters; size is the disk size and image-id is the ID of the image to use.
  - cores: Number of vCPUs.
  - memory: Amount of RAM.
  - core-fraction: Basic vCPU performance, in %.
  - network-interface: Network interface parameters; subnet-id is the subnet ID, ipv4-address is the internal IPv4 address, and nat-ip-version is the IP specification for egress NAT.
  - ssh-key: Public part of the SSH key.

  The command outputs information about the created VM. Save the VM's public IP address:

    ...
    one_to_one_nat:
      address: 62.84.122.151
    ...

  Learn more about creating a VM from a custom image.

  API

  Use the create REST API method for the Instance resource or the InstanceService/Create gRPC API call.
- Connect to the VM via SSH:

    ssh -i <path_to_SSH_key_private_part> yc-user@<VM_public_IP_address>
How to delete the resources you created
To stop paying for the resources you created:
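A minimal cleanup sketch with the yc CLI, assuming the VM name variable and the image name used above:

    # delete the VM created from the image
    yc compute instance delete $VM_NAME

    # delete the built image
    yc compute image delete yc-toolbox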