libvirt Full Guide
Prerequisite
Create a folder at /opt/images to store the OS .iso files. Set up X11 auth so that we can see the GUI later via virt-viewer.
RAND_HEX=$(head -c 16 /dev/urandom | xxd -p)
DISPLAY=13
xauth add localhost/unix:$DISPLAY MIT-MAGIC-COOKIE-1 $RAND_HEX
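The cookie passed to xauth must be exactly 16 random bytes, i.e. 32 hex characters. A quick sanity check, using od as a portable fallback in case xxd is not installed:

```shell
# Generate a 16-byte cookie; `od -An -tx1 | tr -d ' \n'` is a portable
# equivalent of `xxd -p` for a single line of input.
RAND_HEX=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "${#RAND_HEX}"   # prints 32
```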
Install virt-viewer
sudo apt install -y virt-viewer
export DISPLAY=":0"
xhost +
Create VM
sudo virt-install --name=ubuntu-18-04-vm-test-1 \
--vcpus=4 \
--cpuset=4-8 \
--memory=8192 \
--cdrom /opt/images/ubuntu-18.04.6-live-server-amd64.iso \
--disk size=64 \
--network bridge=virbr0
List all VMs. By default, virsh list only shows active, powered-on VMs; pass the --inactive flag to list VMs that have been shut down (or --all to show both):
virsh list --inactive
To get a console on a VM, first SSH into the VM and enable the serial getty:
systemctl enable [email protected]
systemctl start [email protected]
Then, from virsh on the host:
virsh console <VM-ID>
To exit the VM console and return to virsh (or the host), press:
CTRL+]
List storage pools
virsh # pool-list --details --all
Name State Autostart Persistent Capacity Allocation Available
-----------------------------------------------------------------------------
default running yes yes 467.96 GiB 21.12 GiB 446.84 GiB
images running yes yes 467.96 GiB 21.00 GiB 446.96 GiB
Show VM Network Info
virsh # net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
virsh # net-dumpxml default
<network>
<name>default</name>
<uuid>9e0bb112-82ee-4fd2-b66e-23856f925574</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:68:09:c9'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
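To look up the IPv4 address a guest leased from this DHCP range, the output of virsh net-dhcp-leases default can be parsed. The sketch below runs against captured sample output; the lease time, MAC, and address are illustrative:

```shell
# Sample output of: virsh net-dhcp-leases default
# (captured here as a string so the parsing can be shown standalone)
LEASES=' Expiry Time           MAC address         Protocol   IP address           Hostname
-------------------------------------------------------------------------------------------
 2024-01-01 12:00:00   52:54:00:11:22:33   ipv4       192.168.122.111/24   ubuntu-18-04-vm-test-1
'
# Field 5 is the address with its prefix length; strip the /NN suffix.
IP=$(printf '%s\n' "$LEASES" | awk '/ubuntu-18-04-vm-test-1/ {sub(/\/.*/, "", $5); print $5}')
echo "$IP"   # prints 192.168.122.111
```

In practice, replace the LEASES string with the real command: virsh net-dhcp-leases default.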
Backup image
To back up a VM's disk, first locate its backing file:
virsh -c qemu:///system domblklist MACHINE_NAME
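domblklist prints a Target/Source table; the Source column is the disk file to copy while the VM is shut down. A sketch of extracting that path, run here against sample output (the path shown is illustrative):

```shell
# Sample output of: virsh -c qemu:///system domblklist ubuntu-18-04-vm-test-1
OUTPUT=' Target   Source
--------------------------------------------------------
 vda      /var/lib/libvirt/images/ubuntu-18-04-vm-test-1.qcow2
 sda      -
'
# Skip the header and separator lines, take the first Source that is a path.
DISK=$(printf '%s\n' "$OUTPUT" | awk 'NR>2 && $2 ~ /^\// {print $2; exit}')
echo "$DISK"
# With the VM shut down, back up the file with a plain copy:
# cp "$DISK" /opt/images/backup/
```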
Port Forwarding from Host to VM
Go to /etc/libvirt/hooks and create an executable file named qemu (chmod +x /etc/libvirt/hooks/qemu).
NOTE
The file MUST be named qemu, not qemu-vmname; libvirtd will only invoke a hook file with that exact name!
Example qemu hook that forwards host ports 10022 and 10023 to two VMs, ubuntu-18-04-vm-test-1 and ubuntu-18-04-vm-test-2:
#!/bin/bash
# IMPORTANT: Change the "VM NAME" string to match your actual VM Name.
# In order to create rules to other VMs, just duplicate the below block and configure
# it accordingly.
# --- vm1: ubuntu-18-04-vm-test-1
if [ "${1}" = "ubuntu-18-04-vm-test-1" ]; then
# Update the following variables to fit your setup
GUEST_IP=192.168.122.111
GUEST_PORT=22
HOST_PORT=10022
if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
/sbin/iptables -D FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
/sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
fi
if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
/sbin/iptables -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
/sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
fi
fi
# --- vm2: ubuntu-18-04-vm-test-2
if [ "${1}" = "ubuntu-18-04-vm-test-2" ]; then
# Update the following variables to fit your setup
GUEST_IP=192.168.122.6
GUEST_PORT=22
HOST_PORT=10023
if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
/sbin/iptables -D FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
/sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
fi
if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
/sbin/iptables -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
/sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
fi
fi
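To sanity-check the rules the hook will generate before deploying it, the insert branch can be dry-run with iptables stubbed out by echo, so no root or running libvirtd is needed. This is a simplified sketch of the script above (vm1 only, "start" branch only); the name and IP match vm1:

```shell
# Stub: print the iptables invocations instead of executing them.
IPTABLES="echo /sbin/iptables"

hook() {
  # $1 = VM name, $2 = operation (start/stopped/reconnect), as libvirtd passes them
  if [ "$1" = "ubuntu-18-04-vm-test-1" ]; then
    GUEST_IP=192.168.122.111
    GUEST_PORT=22
    HOST_PORT=10022
    if [ "$2" = "start" ] || [ "$2" = "reconnect" ]; then
      $IPTABLES -I FORWARD -o virbr0 -p tcp -d $GUEST_IP --dport $GUEST_PORT -j ACCEPT
      $IPTABLES -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
    fi
  fi
}

hook ubuntu-18-04-vm-test-1 start
```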
Restart the libvirtd service:
systemctl restart libvirtd
Reboot the VM, then verify that the iptables rules were added:
iptables -S FORWARD | grep virbr0
Check the NAT rules:
iptables -L -n -t nat --line-numbers | grep 192
CPU Affinity
Check the NUMA topology:
virsh capabilities
Example output from an NVIDIA Jetson NX:
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>7957840</memory>
<cpus num='6'>
<cpu id='0' socket_id='0' core_id='0' siblings='0'/>
<cpu id='1' socket_id='0' core_id='1' siblings='1'/>
<cpu id='2' socket_id='1' core_id='0' siblings='2'/>
<cpu id='3' socket_id='1' core_id='1' siblings='3'/>
<cpu id='4' socket_id='2' core_id='0' siblings='4'/>
<cpu id='5' socket_id='2' core_id='1' siblings='5'/>
</cpus>
</cell>
</cells>
</topology>
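The id attributes in the <cpu> elements are the host CPU numbers usable in --cpuset or vcpupin. A small sketch that extracts them into a comma-separated list, run here against an inline two-CPU excerpt; in practice pipe virsh capabilities into the same sed:

```shell
# Illustrative excerpt of the <cpus> section from `virsh capabilities`.
XML="<cpu id='0' socket_id='0' core_id='0' siblings='0'/>
<cpu id='5' socket_id='2' core_id='1' siblings='5'/>"
# Pull out each id='N' value, then join them with commas.
CPUSET=$(printf '%s\n' "$XML" | sed -n "s/.*<cpu id='\([0-9]*\)'.*/\1/p" | paste -sd, -)
echo "$CPUSET"   # prints 0,5
```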
Check vCPU placement:
desktop@desktop:~$ sudo virsh vcpuinfo debian10-vm1
VCPU: 0
CPU: 5
State: running
CPU time: 32.8s
CPU Affinity: ----yy
VCPU: 1
CPU: 4
State: running
CPU time: 21.7s
CPU Affinity: ----yy
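In the CPU Affinity string, position i is 'y' when the vCPU may run on host CPU i, so ----yy means CPUs 4 and 5. A small hypothetical helper to decode such a mask into a CPU list:

```shell
mask_to_cpus() {
  # Convert an affinity mask like "----yy" into "4,5":
  # character i is 'y' if the vCPU may run on host CPU i.
  MASK=$1; i=0; OUT=
  while [ -n "$MASK" ]; do
    C=${MASK%"${MASK#?}"}          # first character of the remaining mask
    if [ "$C" = y ]; then OUT=${OUT:+$OUT,}$i; fi
    MASK=${MASK#?}; i=$((i+1))     # drop the first character, advance index
  done
  printf '%s\n' "$OUT"
}

mask_to_cpus ----yy   # prints 4,5
```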