Kolla Ansible provides production-ready containers (here, Docker) and deployment tools for operating OpenStack clouds. This guide explains how to install a single-host (all-in-one) OpenStack cloud on an Ubuntu 24.04 server using a private network. We specify values and variables that can easily be adapted to other networks. We do not address encryption for the different OpenStack services, but we will use an HTTPS reverse proxy to access the dashboard. We will use SPICE to connect to desktop VMs. This setup requires two physical NICs in the computer you will use.
Among the specifics of this guide:

- SECURE_PROXY_SSL_HEADER, as detailed at https://docs.openstack.org/security-guide/dashboard/https-hsts-xss-ssrf.html.
- the /openstack folder for creating the disk images and volumes.
- /etc/kolla/config/nfs_shares for the Cinder NFS configuration.

Some of the files listed below are available here.
Once you have obtained the source markdown file, open it in an editor and perform a find and replace for the different values you will need to customize for your setup. This will allow you to copy/paste directly from the source file.
Values to adjust (in no particular order):
- eno1 is the host's primary NIC.
- 10.30.0.20 is the DHCP (or manual) IP of that primary NIC.
- enp1s0 is the secondary NIC of the host; it should not have an IP and will be used for Neutron.
- kaosu is the user we are using for installation.
- /openstack is the location where we prepare the installation (in a kaos directory) and store Cinder's NFS disks.
- 10.30.0.1 is your network's gateway.
- 10.30.0.100 is the start IP of the OpenStack floating IP range.
- 10.30.0.199 is the end IP of the OpenStack floating IP range.
- 10.30.0.254 is the OpenStack internal VIP address.
- os.example.com is the URL for OpenStack behind our HTTPS upgrading reverse proxy.

We are not addressing user choices like Cinder or values for disk size/memory/number of cores/quotas in the my-init-runonce.sh script or later command lines.
Most steps in the “Post-installation” section require you to select your preferred user/project/IPs; adapt as needed in those steps.
Edit /etc/netplan/50-cloud-init.yaml. Here:

- eno1 is the primary NIC, with IP 10.30.0.20; set dhcp6: false in the netplan for that section.
- enp1s0 is the secondary NIC, which should not have an IP assigned; set dhcp4: false and dhcp6: false for enp1s0.
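For reference, a minimal sketch of the relevant netplan sections, assuming this guide's example NIC names and that eno1 receives 10.30.0.20 via DHCP (adapt to your hardware):

network:
  version: 2
  ethernets:
    eno1:
      dhcp4: true
      dhcp6: false
    enp1s0:
      dhcp4: false
      dhcp6: false

Then apply the configuration: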
sudo netplan apply
Requirements before starting:

- ssh set up.
- A sudo-capable kaosu user for our OpenStack Kolla Ansible installation: sudo adduser kaosu; sudo usermod -aG sudo kaosu
- An /openstack directory for installing the different components: sudo mkdir /openstack
As a reminder for our network:

- 10.30.0.1 is the gateway.
- eno1 is the host's primary NIC (on 10.30.0.20).
- 10.30.0.100 – 10.30.0.199 is the OpenStack floating IP range.
- 10.30.0.254 is the OpenStack internal VIP address.

To enable the later 6.x kernel:
sudo apt-get install -y linux-generic-hwe-24.04
sudo reboot
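After the reboot, an optional check (not part of the original steps) to confirm a 6.x kernel is running:

# Print the running kernel release; expect a 6.x version after the HWE install
uname -r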
As the kaosu
user (latest instructions from https://docs.docker.com/engine/install/ubuntu/):
# Remove potential older versions
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo usermod -aG docker $USER

# logout from ssh and log back in, test that a sudo-less docker is available to your user
docker run hello-world
To make our kaosu
user use the sudo
command without being prompted for a password:
sudo visudo -f /etc/sudoers.d/kaosu-Overrides
# Add and adapt kaosu as needed
kaosu ALL=(ALL) NOPASSWD:ALL
# save the file and test in a new terminal or login
sudo echo works
Additional details available here and here.
We want to use NFS on /openstack/nfs
to store Cinder-created volumes:
# Install nfs server
sudo apt-get install -y nfs-kernel-server
# Create the destination directory and make it nfs-permissions ready
sudo mkdir -p /openstack/nfs
sudo chown nobody:nogroup /openstack/nfs
# edit the `exports` configuration file
sudo nano /etc/exports
# Within this file: add the directory and the access host (ourselves, ie, our 10. IP) to the authorized list
/openstack/nfs 10.30.0.20(rw,sync,no_subtree_check)
# After saving, restart the nfs server
sudo systemctl restart nfs-kernel-server
# Prepare the cinder configuration to enable the NFS mount
sudo mkdir -p /etc/kolla/config
sudo nano /etc/kolla/config/nfs_shares
# Add the "remote" to mount in the file and save
10.30.0.20:/openstack/nfs
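Optionally (not part of the original steps), you can confirm the export is visible to our host:

# List the exports offered by the NFS server at our primary IP
sudo showmount -e 10.30.0.20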
Latest instructions available here.
We will work from /openstack/kaos for this install as the kaosu user (we recommend the use of a tmux session).
cd /openstack
sudo mkdir kaos
sudo chown $USER:$USER kaos
cd kaos
# Install a few things that might otherwise fail during ansible prechecks
sudo apt-get install -y git python3-dev libffi-dev gcc libssl-dev build-essential libdbus-glib-1-dev libpython3-dev cmake libglib2.0-dev python3-venv python3-pip
# Activate a venv
python3 -m venv venv
source venv/bin/activate
pip install -U pip
# Install extra python packages
pip install docker pkgconfig dbus-python
# Install Kolla Ansible from git
pip install git+https://opendev.org/openstack/kolla-ansible@master
# Create the /etc/kolla directory, and populate it
sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
# we are going to do an all-in-one (single host) install, copy it in the current folder for easy edits
cp venv/share/kolla-ansible/ansible/inventory/all-in-one .
# Install Ansible Galaxy requirements
kolla-ansible install-deps
# generate random passwords (stored into /etc/kolla/passwords.yml)
kolla-genpwd
Edit and adapt the /etc/kolla/globals.yml file (sudo nano /etc/kolla/globals.yml) as follows (search for matching keys):
kolla_base_distro: "ubuntu"
kolla_internal_vip_address: "10.30.0.254"
network_interface: "eno1"
neutron_external_interface: "enp1s0"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
Before we try the deployment, let’s ensure the Python interpreter is the venv
one: at the top of the /openstack/kaos/all-in-one
file, add:
localhost ansible_python_interpreter=/openstack/kaos/venv/bin/python
The proposed files are available here and here.
As the kaosu
user in /openstack/kaos
with the venv
activated:
kolla-ansible bootstrap-servers -i ./all-in-one
kolla-ansible prechecks -i ./all-in-one
kolla-ansible deploy -i ./all-in-one
If all goes well, you will have a PLAY RECAP at the end of a successful install, which might look similar to the following:
PLAY RECAP ****...
localhost : ok=425 changed=280 unreachable=0 failed=0 skipped=249 rescued=0 ignored=1
The Dashboard will be on our host’s port 80 at http://10.30.0.20/. The admin user password can be found using:
fgrep keystone_admin_password /etc/kolla/passwords.yml
(still using the venv)
Install the python openstack
command:
pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/master
Create multiple post-deployment scripts, including the admin-openrc.sh
and cloud.yml
files:
kolla-ansible post-deploy -i ./all-in-one
The generated cloud.yml file should be added to your default OpenStack client configuration.
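One way to do this, assuming post-deploy wrote the cloud definition under /etc/kolla (the filename may be clouds.yaml depending on the Kolla Ansible release), is to copy it where the openstack client looks for it by default:

# Make the generated cloud definition available to the openstack CLI
mkdir -p ~/.config/openstack
cp /etc/kolla/clouds.yaml ~/.config/openstack/clouds.yaml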
(requires the venv, the openstack command line, the cloud.yml file, and the generated /etc/kolla/admin-openrc.sh script)
In /openstack/kaos
, there is a venv/share/kolla-ansible/init-runonce
script to create some of the basic configurations for your cloud. Most end users will modify their EXT_NET_CIDR
, EXT_NET_RANGE
, and EXT_NET_GATEWAY
variables.
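For this guide's network, these variables might look as follows (a sketch using the example values from the beginning of this guide; adapt to your network):

# Floating IP pool carved out of our 10.30.0.0/24 private network
EXT_NET_CIDR='10.30.0.0/24'
EXT_NET_RANGE='start=10.30.0.100,end=10.30.0.199'
EXT_NET_GATEWAY='10.30.0.1'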
The proposed my-init-runonce.sh script (chmod +x it so it is executable) uses larger tiny flavors (5GB disks, as an Ubuntu server image is over 2GB), while the other flavors use a 20GB base disk (since you can specify your preferred disk image size during the instance creation process). Its flavor names follow the m<number of VCPUs> naming convention, and it adds xxlarge and xxxlarge memory flavors.
Adapt the USER CONF section based on your system and preferences.
% ./my-init-runonce.sh
[...] -- Attempt to add external-net (if not already present)
[...] -- Attempt to configure Security Groups: ssh and ICMP (ping)
[...] -- Attempt to create and add a default id_ecdsa key to nova (if not already present)
[...] -- Setting quota defaults following user values
[...] -- Creating defaults flavors (instance type) (if not already present)
[...]
Done
Once run, we should have:
- external-net: the pool from which your floating IPs will be obtained.
- ssh and ICMP rules added to the admin project's default security group.
- A default id_ecdsa key (mykey) created and added to the admin user.
- Quota defaults set following user values (per project_id).
- Default flavors (instance types) created:

% source /etc/kolla/admin-openrc.sh
% openstack flavor list
+----+--------------+-------+------+-----------+-------+-----------+
| ID | Name         |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+--------------+-------+------+-----------+-------+-----------+
|  1 | m1.tiny      |   512 |    5 |         0 |     1 | True      |
|  2 | m2.tiny      |   512 |    5 |         0 |     2 | True      |
|  3 | m2.small     |  2048 |   20 |         0 |     2 | True      |
|  4 | m2.medium    |  4096 |   20 |         0 |     2 | True      |
|  5 | m4.large     |  8192 |   20 |         0 |     4 | True      |
|  6 | m8.xlarge    | 16384 |   20 |         0 |     8 | True      |
|  7 | m16.xxlarge  | 32768 |   20 |         0 |    16 | True      |
|  8 | m32.xxxlarge | 65536 |   20 |         0 |    32 | True      |
+----+--------------+-------+------+-----------+-------+-----------+
FYSA: From the UI, it is possible to add new flavors from Admin -> Compute -> Flavors
Note: kolla-ansible
or openstack
requires the venv
to be activated and source /etc/kolla/admin-openrc.sh
to be performed for the commands to have the correct configuration information. As kaosu
:
cd /openstack/kaos
source /etc/kolla/admin-openrc.sh
source venv/bin/activate
Log in to your OpenStack installation by going to the web dashboard (horizon, available on port 80) at http://10.30.0.20
The default admin
user’s password can be obtained using:
fgrep keystone_admin_password /etc/kolla/passwords.yml
Using Project -> Compute -> Overview
gives you a list of used and available resources.
Create a new project and another admin user for your account. As the admin
user:
- Identity -> Projects (left column), Create Project, and choose a name. For this example, we will use newprojectname. That new project does not inherit the existing one's default values; we will update the quotas in the next section.
- Identity -> Users (left column), Create User. Provide its User Name and Password (Confirm), assign that user the Primary Project created above, and give it the Admin Role. Enable the account. For this example, we will use newadminuser.

The following steps use the CLI to re-add our ssh key, security groups, and quotas to the new user and its project.
Add a public ssh key (here id_ecdsa.pub) to your new user (adapting newadminuser):
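The exact command is not reproduced here; a possible sketch using the OpenStack CLI (the --user option requires a recent python-openstackclient and compute API 2.10 or later; mykey2 is a hypothetical key name):

# Register the public key under the new user
openstack keypair create --public-key ~/.ssh/id_ecdsa.pub --user newadminuser mykey2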
Add the security groups and quotas to your new project (adapting newprojectname):
# Adapt newprojectname
MY_PROJECT_ID=$(openstack project list | awk '/ newprojectname / {print $2}')
MY_SEC_GROUP=$(openstack security group list --project ${MY_PROJECT_ID} | awk '/ default / {print $2}')
# check values are assigned
echo $MY_PROJECT_ID
echo $MY_SEC_GROUP
openstack security group rule create --ingress --ethertype IPv4 --protocol icmp ${MY_SEC_GROUP}
openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22 ${MY_SEC_GROUP}
openstack quota set --force --instances 10 ${MY_PROJECT_ID}
openstack quota set --force --cores 32 ${MY_PROJECT_ID}
openstack quota set --force --ram 96000 ${MY_PROJECT_ID}
openstack quota set --force --floating-ips 10 ${MY_PROJECT_ID}
A slightly modified version of this newproject.sh file is available.
Go to https://cloud-images.ubuntu.com/ and select the distro you want (here, we will use Noble Numbat/Ubuntu 24.04’s most current
image). Copy the URL of the QCow2 UEFI/GPT Bootable disk image
of your choice.
cd /openstack
sudo mkdir cloudimg
sudo chown $USER:$USER cloudimg
cd cloudimg
# Name it with the OS information and the date shown in the "Last modified" column
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img -O ubuntu2404-20250403.img
Use the openstack command line to add the image to the list of available images for all users of our OpenStack cloud, giving it a name that indicates its content:
openstack image create --disk-format qcow2 --container-format bare --public --property os_type=linux --file ubuntu2404-20250403.img ubuntu2404server-20250403
Once completed, a table with details of the new image added to our OpenStack installation will appear. From our new admin user’s UI, select Project -> Compute -> Images
, and we will see the added image listed.
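Optionally, you can confirm the registration from the CLI as well:

# List the images known to Glance; the new image should appear
openstack image list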
From our new admin user’s UI (which should start in our recently added project), select Project -> Network -> Network Topology
. This should show a graph with only the external-net
.
We need a network
and a router
added for VMs to communicate.
Select Create Network:

- Network tab:
  - Network Name: project-net, with project reflecting our project's name.
  - Check Enable Admin State to make sure it is active.
  - Leave Shared unchecked; this network is only for this project.
  - Check Create Subnet; we need to configure the IP details for this subnet.
  - Leave Availability Zone Hints and MTU as is.
  - Click Next.
- Subnet tab:
  - Subnet Name: project-subnet.
  - For the Network Address, use a private IP range not currently used in our network, such as 10.56.78.0/24; subnets must be independent and not currently in use.
  - IP Version: IPv4.
  - Use 10.56.78.1 for the Gateway IP; it must be in the same IP range as your subnet.
  - Leave Disable Gateway unchecked.
  - Click Next.
- Subnet Details tab:
  - Check Enable DHCP. We want our VM instances to get IPs automatically when they start.
  - For the Allocation Pool, use something unused within the subnet range, for example, 10.56.78.100,10.56.78.200.
  - For the DNS Name Servers (one entry per line), use Google (8.8.8.8, 8.8.4.4) or CloudFlare (1.1.1.1).
  - Leave Host Routes empty.
  - Click Create.

You now have a new network ready to be used with VMs. We still need a router.
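For reference, a CLI sketch of the same network and subnet creation (names, ranges, and DNS servers follow this guide's dashboard choices):

# Create the project network and its subnet with DHCP enabled
openstack network create project-net
openstack subnet create --network project-net --subnet-range 10.56.78.0/24 --gateway 10.56.78.1 --allocation-pool start=10.56.78.100,end=10.56.78.200 --dns-nameserver 8.8.8.8 --dns-nameserver 8.8.4.4 --dhcp project-subnet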
Select Create Router:

- Router Name: project-router.
- Check Enable Admin State to make sure it will be active.
- Select external-net as the External Network.
- Check Enable SNAT since we do have an external network.
- Leave Availability Zone Hints as is.

We now have a router connected to the external network. The IP for the router on the external network is automatically selected from the pool.
The router has yet to be connected to the "project network." Hover over the router and select Add Interface. Select the project-subnet Subnet and leave the IP Address unspecified; it will use the configured gateway.
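For reference, a CLI sketch of the same router setup (names follow this guide's dashboard choices):

# Create the router, attach it to the external network, then to our subnet
openstack router create project-router
openstack router set --external-gateway external-net project-router
openstack router add subnet project-router project-subnet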
When we return to the Network Topology
page, we will see an external-net
connected to our project-net
by our project-router
.
From our new admin user's UI, select Project -> Compute -> Instances and choose Launch Instance.

- Details tab: name the instance u24test. Click Next.
- Source tab:
  - Select Image as the Select Boot Source.
  - Select Yes for Create New Volume; this will force the creation of the VM disk image onto the Cinder location.
  - If the requested Volume Size is less than the flavor's disk size, the larger of the two will be selected.
  - Delete Volume on Instance Delete is a user choice. We often select Yes.
  - Select the ubuntu server image to have it become the Allocated image.
  - Click Next.
- Flavor tab:
  - Select the m2.tiny flavor (2x VCPUs, 512MB RAM, 5GB disk).
  - If you had set the Volume Size to 1GB in the Source tab, looking back at the Source tab, you will see it now shows 5GB, the size of our Flavor.
  - Click Next.
- Networks tab: project-net and project-subnet should be automatically allocated. Click Next.
- We are not using Network Ports, so click Next.
- The Allocated Security Groups' default will have ssh and icmp listed (feel free to verify by clicking the toggle arrow), so click Next.
- Our mykey will show in Key Pair.

Feel free to investigate the other available tabs. We will Launch Instance.
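For reference, a roughly equivalent CLI sketch (names follow this guide's choices; unlike the dashboard flow above, this boots from an ephemeral disk unless you also request a new volume):

# Boot a VM from the uploaded image on the project network
openstack server create --flavor m2.tiny --image ubuntu2404server-20250403 --network project-net --key-name mykey u24test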
After a few seconds, the instance should appear to be Running.
ls -alh /openstack/nfs
will show the file for our newly created disk volume.
From the instance list, our running instance has an Actions submenu (right); we can View Logs or use the interactive Console. We cannot log in using the Console terminal; the ubuntu user has no known password set. The instance is designed to be remotely accessed using SSH. We need to assign our instance a "Floating IP": a public IP address that can be dynamically associated with a private instance, allowing it to be accessible from outside the private cloud.
With our instance Running
, its IP Address
is within our project’s subnet range.
We need to obtain a Floating IP
to access the instance via SSH.
In the Actions
(right) submenu for our instance row, select Associate Floating IP
:
- Click + and Allocate IP from our external-net pool.
- Select the new IP in the IP Address dropdown. Make sure the Port to be associated matches our u24test instance, and Associate them.

The IP Address column will now show two IPs: one from the project-subnet DHCP range and one from the external-net pool.
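For reference, the same association can be done from the CLI (a sketch; the second command uses whatever address the first one allocated):

# Allocate a floating IP from the external pool, then attach it to the instance
openstack floating ip create external-net
openstack server add floating ip u24test 10.30.0.190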
From your kaosu user, we can ssh into the instance's floating IP using the authorized ssh key and the default cloud image user, ubuntu.
For example:
# Adapt the 10. IP to match your floating IP
ssh -i ~/.ssh/id_ecdsa ubuntu@10.30.0.190
[...]
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-57-generic x86_64)
[...]
ubuntu@u24test:~$
From there, you can confirm that your instance can connect to the Internet by running sudo apt update && sudo apt -y upgrade
.
If you have a reverse proxy set up on another host and want to benefit from https on horizon (the dashboard):

- Set up the Proxy Host as you would typically; here, we will use os.example.com.
- Edit the horizon settings override (sudo nano /etc/kolla/horizon/_9999-custom-settings.py) and add to it:

SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
CSRF_TRUSTED_ORIGINS = [ 'https://os.example.com' ]

- Restart horizon using docker kill horizon; the OpenStack services run as Docker containers, horizon is one of them, and it will be restarted automatically with the new settings. Wait a few seconds, and your access via https://os.example.com should be functional. Without the CSRF_TRUSTED_ORIGINS entry, you would get a csrf_failure=Origin checking failed - https%3A//os.example.com does not match any trusted origin error (in the address bar).
is one of themIf you modify a globals.yml
configuration option,
cd /openstack/kaos source venv/bin/activate kolla-ansible reconfigure -i ./all-in-one
More kolla-ansible
CLI options at https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html.
I experienced this in a previous installation. Luckily, it is just a matter of re-running the reconfigure
step to make it functional again.
Log in as the kaosu user:
cd /openstack/kaos
source venv/bin/activate
pip3 install -U pip
kolla-ansible -i ./all-in-one --yes-i-really-really-mean-it stop
kolla-ansible -i ./all-in-one install-deps
kolla-ansible -i ./all-in-one prechecks
kolla-ansible -i ./all-in-one reconfigure
sudo docker ps -a
Please refer to the Ubuntu 22.04 post for additional content, and to the original version of this tutorial on Martial Michel's blog.