Set up an Ubuntu-based workstation | Step 2: KVM/QEMU Installation and Configuration
This article is the second tutorial in a series on how to set up a multi-purpose Ubuntu-based workstation. The general idea is to isolate different tasks in different virtual machines (VMs), so that the user can try out different software stacks while keeping the host machine clean and stable.
Introduction to KVM/QEMU
Kernel-based Virtual Machine (**KVM**) is a virtualization technology built directly into the Linux kernel.
In software development, it is very common for each project to require a different environment: different Ubuntu versions, different toolchains, different system libraries. In my own case, for example, one project requires Ubuntu 24.04, while another requires Ubuntu 20.04. Maintaining multiple physical machines, or constantly switching environments on one machine, is extremely painful: credentials, shortcuts, libraries, environment variables, and so on all need to be reconfigured every time. I have repeatedly found myself doing work just to prepare the environment for the real work.
One effective solution is virtualization using KVM. With KVM, you can run multiple virtual machines on a single physical host, each with a fully isolated development environment. Snapshots and cloning make it easy to experiment while keeping systems stable.
In this article, I use:

- Host OS: Ubuntu 24.04 LTS
- Guest OS: Ubuntu 20.04 LTS
- Hypervisor: KVM/QEMU

and demonstrate how to build a clean development workflow.
Method
1. Install KVM/QEMU on the host machine
Install the required packages:
```
sudo apt-get update
sudo apt-get install -y \
    qemu-kvm \
    libvirt-daemon-system \
    libvirt-clients \
    bridge-utils \
    virt-manager \
    ovmf
```
And here is a brief introduction to the installed packages:

- `qemu-kvm`: the hypervisor
- `libvirt-*`: VM management services and CLI tools
- `bridge-utils`: networking utilities (NAT / bridge)
- `virt-manager`: GUI management tool
- `ovmf`: UEFI firmware support
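Before creating any VMs, it is worth a quick sanity check that the CPU actually exposes hardware virtualization and that the libvirt daemon is up. A minimal sketch (the `kvm-ok` tool from the `cpu-checker` package performs a more thorough check):

```shell
# Check for the Intel (vmx) or AMD (svm) virtualization flag
if grep -E -q '(vmx|svm)' /proc/cpuinfo; then
  virt_status="present"
else
  virt_status="missing (enable VT-x/AMD-V in the BIOS/UEFI firmware)"
fi
echo "CPU virtualization extensions: $virt_status"

# Check that the libvirt daemon is running
systemctl is-active libvirtd 2>/dev/null || echo "libvirtd not active yet"
```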
2. Configure user permissions
To manage VMs without root privileges, add your user to the required groups:
```
sudo usermod -aG kvm,libvirt ${USER}
```
After running the above command, log out and log back in for the changes to take effect.
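A new login session picks up the new groups; you can confirm they are active with a quick check:

```shell
# List the groups active in the current session, one per line,
# and look for kvm and libvirt
current_groups=$(id -nG)
echo "$current_groups" | tr ' ' '\n' | grep -E -x 'kvm|libvirt' \
  || echo "kvm/libvirt not active in this session yet; log out and back in"
```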
3. Create and manage VMs with `virt-manager`
Prepare an installation ISO for the guest OS (e.g. Ubuntu 20.04 LTS), and launch `virt-manager`:
```
virt-manager
```
Then, in the GUI, follow these steps to create a new VM:

1. Click "File" > "New Virtual Machine" to start the VM creation wizard
2. Select "Local install media (ISO image or CDROM)" and click "Forward"
3. Click "Browse…" to locate the ISO media, select the downloaded Ubuntu 20.04 ISO, and click "Forward"
4. Allocate appropriate resources for the VM (e.g. 8 CPU cores, 16 GB RAM, 64 GB disk space), and click "Forward"
5. Start the guest OS installation, which proceeds the same way as installing Ubuntu on a physical machine
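If you prefer the command line, the same wizard steps can be expressed with `virt-install` (from the `virtinst` package). This is a sketch: the VM name, ISO path, and resource sizes mirror the example values above and should be adjusted to your setup.

```shell
# Create a VM from the CLI; equivalent to the virt-manager wizard above.
# Requires a working libvirt setup; all names and paths are examples.
virt-install \
  --name pd-vm-u20 \
  --memory 16384 \
  --vcpus 8 \
  --disk size=64 \
  --cdrom ~/Downloads/ubuntu-20.04.6-desktop-amd64.iso \
  --os-variant ubuntu20.04 \
  --graphics spice
```

`--os-variant` lets libvirt pick sensible defaults (virtio disk and network) for the guest OS; `osinfo-query os` lists the accepted values.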
4. Launch and configure the VM
After the installation is complete, you can launch the VM from `virt-manager` GUI or use CLI commands:
```
# list all VMs
virsh list --all

# start a VM (replace "pd-vm-u20" with your VM name)
virsh start pd-vm-u20
```
Inside the VM, you can install necessary development tools, build and execute your projects as you would on a physical machine.
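The snapshots and cloning mentioned in the introduction are what make this workflow safe to experiment in. A typical checkpoint/rollback cycle from the host looks like the sketch below; the snapshot name is an example, and note that libvirt's internal snapshots may be unavailable for UEFI guests.

```shell
# Save a known-good state before a risky change
virsh snapshot-create-as pd-vm-u20 clean-base --description "fresh toolchain"

# List snapshots, and roll back if the experiment breaks something
virsh snapshot-list pd-vm-u20
virsh snapshot-revert pd-vm-u20 clean-base

# Clone a whole VM (the original must be shut off)
virt-clone --original pd-vm-u20 --name pd-vm-u20-clone --auto-clone
```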
Extra Notes
GPU passthrough
Beginners often assume that VMs come with a serious performance penalty, but this is not necessarily true. A properly configured VM can handle demanding workloads, for example 3D simulation or CUDA-accelerated machine learning.
For a single-developer workstation, a practical approach is to pass the dedicated GPU to the VM, which allows the VM to directly access the GPU hardware for native performance. Moreover, with GPU passthrough, you can connect a monitor to the GPU and use the VM as a full desktop environment. If your host machine's CPU has integrated graphics, you can use it for the graphic output of the host OS while dedicating the discrete GPU to the VM, achieving a seamless experience without performance compromise on either side.
In the following tutorial, I assume the host machine has the following hardware configuration:

- CPU: Intel Core i9-13900K (with integrated graphics)
- GPU: NVIDIA GeForce GTX TITAN X
To perform GPU passthrough, first find the PCI IDs of the GPU:
```
> lspci -nn | grep -i nvidia
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:17c2] (rev a1)
07:00.1 Audio device [0403]: NVIDIA Corporation GM200 High Definition Audio [10de:0fb0] (rev a1)
```
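Rather than copying the IDs by hand, the bracketed `vendor:device` pairs can be extracted with a small pipeline. This sketch assumes the `[10de:17c2]`-style formatting that `lspci -nn` prints, as shown above:

```shell
# Read lspci -nn style lines on stdin and print the vendor:device IDs
# as the comma-separated list that vfio-pci expects, e.g. "10de:17c2,10de:0fb0"
extract_vfio_ids() {
  grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]' | paste -sd, -
}

# Example: collect the IDs of all NVIDIA functions on this host
lspci -nn 2>/dev/null | grep -i nvidia | extract_vfio_ids
```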
Second, enable IOMMU in the host's GRUB configuration:
```
> sudo nano /etc/default/grub

# Find the line that starts with 'GRUB_CMDLINE_LINUX_DEFAULT', add
# 'intel_iommu=on' and 'iommu=pt' to the existing parameters; it may look like:

|----------------- text buffer of /etc/default/grub -------------|
...
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt"
...
|----------------------------------------------------------------|

# Save the file and update GRUB
> sudo update-grub
```
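After the reboot at the end of this setup, it is worth confirming that the IOMMU is actually active, because a device can only be passed through cleanly when its IOMMU group contains no other in-use devices. A sketch that lists the groups and their members:

```shell
# Print each IOMMU group and the PCI devices it contains.
# An optional argument overrides the sysfs path (useful for testing).
list_iommu_groups() {
  dir=${1:-/sys/kernel/iommu_groups}
  if [ ! -d "$dir" ] || [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "No IOMMU groups found; check that intel_iommu=on took effect"
    return 1
  fi
  for g in "$dir"/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
      lspci -nns "${d##*/}" 2>/dev/null | sed 's/^/  /'
    done
  done
}

list_iommu_groups || true
```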
Third, bind the GPU to the VFIO driver:
```
> sudo nano /etc/modprobe.d/vfio.conf

# Add the following line to bind the GPU to VFIO
# (replace with your GPU's PCI IDs):

|---------- text buffer of /etc/modprobe.d/vfio.conf ------------|
options vfio-pci ids=10de:17c2,10de:0fb0
|----------------------------------------------------------------|
```
Fourth, ensure VFIO modules are loaded at initramfs stage:
```
> sudo nano /etc/initramfs-tools/modules

# Add the following lines to include VFIO modules in initramfs
# (on kernels 6.2 and newer, vfio_virqfd has been merged into vfio
# and can be omitted):

|----------- text buffer of /etc/initramfs-tools/modules --------|
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
|----------------------------------------------------------------|

# Rebuild initramfs
> sudo update-initramfs -u
```
Finally, reboot the host machine for the changes to take effect. You can verify that the GPU is bound to VFIO (look at the "Kernel driver in use" field):
```
> lspci -nnk -d 10de:
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:17c2] (rev a1)
        Subsystem: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:1132]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau
07:00.1 Audio device [0403]: NVIDIA Corporation GM200 High Definition Audio [10de:0fb0] (rev a1)
        Subsystem: NVIDIA Corporation GM200 High Definition Audio [10de:1132]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
```
In summary, the steps above detach the GPU from the host by binding it to `vfio-pci`. Without this setup, the host machine usually loads the corresponding drivers and *claims* the GPU; you can check with the following command (*before* setting up GPU passthrough):
```
> lspci -nnk -d 10de:
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:17c2] (rev a1)
        Subsystem: NVIDIA Corporation GM200 [GeForce GTX TITAN X] [10de:1132]
        Kernel driver in use: nouveau
        Kernel modules: nvidiafb, nouveau
07:00.1 Audio device [0403]: NVIDIA Corporation GM200 High Definition Audio [10de:0fb0] (rev a1)
        Subsystem: NVIDIA Corporation GM200 High Definition Audio [10de:1132]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
```
To attach the GPU to the VM, you can use the `virt-manager` GUI:

1. Open the VM's settings in `virt-manager`
2. Go to "Add Hardware" > "PCI Host Device"
3. Select the GPU (both the VGA and audio devices) from the list
4. Start the VM, and install the appropriate GPU drivers inside the VM for optimal performance
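The GUI steps can also be scripted. The sketch below uses `virsh attach-device` with a libvirt `<hostdev>` fragment; the PCI address matches the `07:00.0` VGA function from earlier (repeat with `function='0x1'` for the audio device), and the file name is arbitrary.

```shell
# Write a libvirt hostdev fragment for PCI device 0000:07:00.0
cat > gpu-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Attach it to the VM's persistent configuration (applies on next boot)
virsh attach-device pd-vm-u20 gpu-hostdev.xml --config
```

With `managed='yes'`, libvirt handles rebinding the device between the host driver and `vfio-pci` when the VM starts and stops.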
USB device passthrough
You can also pass USB devices through to the VM, e.g. a mouse, keyboard, or joystick. This is especially useful when using the VM as your main work environment.
USB passthrough can be easily configured in the `virt-manager` GUI:

1. Open the VM's settings in `virt-manager`
2. Go to "Add Hardware" > "USB Host Device"
3. Select the desired USB device from the list and add it to the VM

This gives the VM direct access to the USB device, providing a seamless experience as if the device were connected to a physical machine.
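The CLI equivalent uses a USB `<hostdev>` fragment keyed by the device's vendor and product IDs, which you can read from `lsusb`. The `046d:c52b` ID below is a hypothetical example (a Logitech receiver); substitute your own device's ID.

```shell
# Find the target device's vendor:product ID
lsusb

# Write a libvirt hostdev fragment for that USB device
cat > usb-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc52b'/>
  </source>
</hostdev>
EOF

# Attach it to the VM's persistent configuration
virsh attach-device pd-vm-u20 usb-hostdev.xml --config
```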
Commonly used CLI commands
To manage VMs from the command line, you can use `virsh` commands:
```
# list all VMs
virsh list --all

# start a VM (replace "pd-vm-u20" with your VM name)
virsh start pd-vm-u20

# shut down a VM
virsh shutdown pd-vm-u20
```
References
1. [KVM hypervisor: a beginners’ guide](https://ubuntu.com/blog/kvm-hyphervisor)