The Xen hypervisor (developed under the Xen Project umbrella) is the first free Open Source virtualization for Linux. It is the only Xen product that is really open source, both formally and in spirit, and it is included in several Linux distributions - for example in Debian 12. Homepage: https://xenproject.org/
On the other hand, the project itself includes only a simple CLI interface (mostly the xl command). If you need central management and/or advanced features you need an additional layer:
- Ganeti - open sourced by Google but now in "maintenance" mode. There was also an unfortunate decision to write new tools in Haskell while the core is written in Python, which makes future maintenance and development quite difficult. Otherwise it has interesting features - for example using DRBD to replicate Guest disks for High-Availability. Please note that Ganeti originally supported Xen only but later switched its focus to KVM.
- XenServer 7.4 - proprietary license. Was replaced by paid-only "Citrix Hypervisor" (which is DIFFERENT from "Xen Hypervisor"!).
- XCP-ng - the server part is free, but the management software is also under a proprietary license
I will use bare-metal Debian 12 and follow https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide#What_is_this_Xen_Project_software_all_about.3F
However my guide will be a bit minimalist - I expect that you already have good knowledge of Debian Linux and experience with networks and virtualization in general.
Basic terms:
- Host - bare-metal (real) machine that will run Xen Hypervisor
- Guest or Domain or VM - Virtual Machine run under Xen
Host Hardware:
- CPU: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+
- MB: MS-7250
- 8GB RAM
- 480GB SSD disk KINGSTON SA400S37480G (/dev/sdb in this guide)
Software:
- Host OS: Debian GNU/Linux 12 (bookworm)
- main Xen packages (stock Debian 12 repository)
xen-hypervisor-4.17-amd64 4.17.2+76-ge1f9cb16e2-1~deb12u1
xen-system-amd64 4.17.2+76-ge1f9cb16e2-1~deb12u1
xen-tools 4.9.2-1
I used these initial partitions:
- scheme MBR
- /dev/sdb1, 50GB, ext4, root / - intentionally larger than in wiki, to hold install ISO images for HVM guests
- /dev/sdb2, 8GB, swap - same as RAM size
After the install I installed sudo using apt-get install sudo
- using visudo I made these changes:
# avoid DNS FQDN request on each sudo invocation
Defaults !fqdn
# become root without password
%sudo ALL=(ALL:ALL) NOPASSWD:ALL
- and added my ordinary user to the group sudo using:
/usr/sbin/usermod -G sudo -a MY_USERNAME
Installed some essential stuff:
# gdisk - contains sgdisk that can wipe MBR & GPT signatures
sudo apt-get install lvm2 man vim curl mc lynx wget tmux bash-completion gdisk
# logout and login to get working bash-completion
exit # or Ctrl-D and login again
# make vim default editor:
sudo update-alternatives --list editor
sudo update-alternatives --set editor /usr/bin/vim.basic
Fixed the terrible console font - change this in /etc/default/console-setup:
FONTFACE="VGA"
Optionally run setupcon on the local console to apply this change now.
After the install, use fdisk to fill the remaining space with a partition for LVM:
sudo fdisk /dev/sdb
n # new partition
p # primary
3 # number
First Sector # ENTER to confirm
Last Sector # ENTER to confirm
t # change partition ID
3 # partition number
lvm # type
v # verify
p # print new partition table
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 97656831 97654784 46.6G 83 Linux
/dev/sdb2 97656832 113281023 15624192 7.5G 82 Linux swap / Solaris
/dev/sdb3 113281024 937703087 824422064 393.1G 8e Linux LVM
w # write and quit
Then we will create PV (Physical volume) for LVM using:
sudo pvcreate /dev/sdb3
sudo pvs
PV VG Fmt Attr PSize PFree
/dev/sdb3 lvm2 --- <393.12g <393.12g
Now we create an empty VG (Volume Group) - it will be used by Xen guests (VMs):
sudo vgcreate xen /dev/sdb3
sudo vgs
VG #PV #LV #SN Attr VSize VFree
xen 1 0 0 wz--n- 393.11g 393.11g
Now we need to prepare a Bridge interface so both Host and VMs can use the same LAN card to access the network (similar to a Proxmox setup):
- reconfigure GRUB to stop the insane renaming of interfaces. Here are my changes in /etc/default/grub (the net.ifnames=0 will stop renaming of LAN cards):
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`" Test Xen"
GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 mitigations=off"
GRUB_TERMINAL=console
GRUB_DISABLE_OS_PROBER=true
- and run:
sudo update-grub
Before reboot prepare Bridge network configuration:
- following: https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide#Setup_Linux_Bridge_for_guest_networking
- ensure that brctl is installed:
sudo apt-get install bridge-utils
- first backup:
sudo cp /etc/network/interfaces /root/
- and edit /etc/network/interfaces this way:
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
- as a bonus, extend /etc/issue to show the IP address:
echo 'IP: xenbr0: \4{xenbr0}' | sudo tee -a /etc/issue
- now we need to reboot to stop the renaming of network interfaces - ensure that you have working local console access, in case the new network configuration does not work!
- WARNING! In my case I got a different IP address because the Bridge xenbr0 uses its own (different!) MAC address - so the DHCP server will assign a different IP... (see the workaround sketch below)
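If you want to keep the original IP, one option (my assumption - check bridge-utils-interfaces(5) on your system for the bridge_hw option) is to force the bridge to reuse the MAC address of the physical NIC, so the DHCP server sees the same client as before:
# hypothetical variant of the xenbr0 stanza in /etc/network/interfaces
# replace 00:11:22:33:44:55 with the real MAC of eth0 (see: ip link show eth0)
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
    bridge_hw 00:11:22:33:44:55
Alternatively, just add a DHCP reservation for the bridge's new MAC on your DHCP server.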
Finally we can proceed to install Xen:
- following https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide#Installing_the_Xen_Project_Software
sudo apt-get install xen-system-amd64
- now we have to update the GRUB configuration (our Debian 12 host has to be booted as a privileged VM called Domain-0):
sudo update-grub
Including Xen overrides from /etc/default/grub.d/xen.cfg
Warning: GRUB_DEFAULT changed to boot into Xen by default! Edit /etc/default/grub.d/xen.cfg to avoid this warning.
Generating grub configuration file ...
...
- also notice that all kernels are printed twice - this is OK, because there is a standard (non-Xen) entry and a Xen-enabled entry for each kernel found
- now reboot again - we have to boot Debian 12 Linux as a Xen task...
- ensure that you have selected the GRUB entry that ends with Xen Hypervisor
- also notice that on loading the first line is Loading Xen, followed by the usual Loading linux-X.Y.Z
- after boot you will find that our Linux is loaded as a privileged VM (called Domain-0 in Xen jargon). The Domain-0 is the only VM that has direct access to hardware (with some exceptions like SR-IOV, but let's keep things simple).
Most Xen tasks use the CLI command xl.
- to list VMs (called Domains) use:
sudo xl list
Name                ID   Mem VCPUs      State   Time(s)
Domain-0             0  7977     2     r-----      17.7
- and that's OK - we are actually inside that VM called Domain-0 (!)
To create Debian VM easily we will follow:
- https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide#Creating_a_Debian_PV_.28Paravirtualized.29_Guest
- in this case Para-virtualized (PV) - the Guest is aware that it is running under Xen and calls the Xen hypervisor when needed. It has the advantage that no hardware virtualization support is required. But you can't use this mode for Xen-ignorant systems (Windows, ...)
- first install:
sudo apt-get install xen-tools
- now try to create the first PV VM with Debian using our Volume Group (VG) xen:
sudo xen-create-image --hostname=deb12-pv --memory=512mb --vcpus=1 \
--lvm=xen --size=8gb --dhcp --pygrub --dist=bookworm
TIP: If you prefer Ubuntu 22.04 you can try:
sudo xen-create-image --hostname=ubu22-pv --memory=1024mb --vcpus=1 \
--lvm=xen --size=8gb --dhcp --pygrub --dist=jammy
NOTE: As root you can watch detailed progress using:
vm=deb12-pv
tail -f /var/log/xen-tools/$vm.log
What is PV Grub or PyGrub?
In the case of PV guests there is no BIOS (or UEFI) to load GRUB and then the kernel and initrd into memory. For PV both must be loaded by the Host (our bare-metal Debian 12). So a neat trick called PV Grub was implemented - it is able to extract both kernel and initrd directly from the Guest image(!)
But it was always troublesome - it has to support every imaginable guest filesystem that contains /boot/grub/* and to properly parse entries from its /boot/grub/menu.lst. To simplify these things, PyGrub was later introduced - basically a Xen-aware GRUB. The Xen Host no longer needs to extract the kernel and initrd, but can directly boot this special GRUB, which then carries out all the kernel-booting work itself.
At the end the root password will be printed; you can find it later by looking into /var/log/xen-tools/deb12-pv.log (the log is named after the --hostname switch).
Also notice that a Domain (=VM) configuration file was created as /etc/xen/deb12-pv.cfg
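The --pygrub switch shows up there as a bootloader line. A minimal sketch of the relevant part (exact layout is my assumption - check your actual generated file):
# excerpt of /etc/xen/deb12-pv.cfg (assumed shape of the xen-tools output)
bootloader = 'pygrub'
root       = '/dev/xvda2 ro'
disk       = [
    'phy:/dev/xen/deb12-pv-disk,xvda2,w',
    'phy:/dev/xen/deb12-pv-swap,xvda1,w',
]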
To see all supported distributions, please look at /usr/share/xen-tools/
- both Debian and Ubuntu seem to be up-to-date (and use a quick debootstrap installation in a chroot) - but the last CentOS version is 6, which is not current; similarly the last Fedora is 17 (current is 38 or so)
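A quick way to check what is shipped (assuming the per-distribution hook directories are named DIST.d - my assumption about the xen-tools layout):
# list the per-distribution hook directories shipped by xen-tools
ls -d /usr/share/xen-tools/*.d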
To start your first VM use xl create ("create" means "Start VM" in Xen (and also LibVirt) vocabulary):
vm=deb12-pv
sudo xl create /etc/xen/$vm.cfg
Parsing config from /etc/xen/deb12-pv.cfg
Now verify that your VM (called "Domain") is really running:
sudo xl list
Name ID Mem VCPUs State Time(s)
Domain-0 0 7589 2 r----- 415.0
deb12-pv 1 512 1 -b---- 5.9
And finally you can connect to console using command:
sudo xl console $vm
Tip: if you are quick enough you may see the PyGrub menu in action.
Tip from the guide: use Ctrl-] to escape the console and sudo xl console $vm to attach again.
Please note that xl console VM sometimes has small issues (not expanding to the size of the parent terminal or not detecting color support). However, it is good enough for emergency administration; see a possible workaround below.
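A common workaround for the terminal-size issue (my assumption, not from the guide - adjust the numbers to your real terminal and run it inside the guest console):
# inside the guest console: force the terminal size to match your real terminal
stty rows 50 cols 160
export TERM=xterm-256color   # optional, may help programs detect color support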
Network interface problem in case of Debian 12 guest (Ubuntu 22.04 Jammy VM seems to work fine):
There was an error printed on the deb12-pv console:
[FAILED] Failed to start networking…ce - Raise network interfaces. See 'systemctl status networking.service' for details.
Although there is a valid network interface:
# run in VM!
ip -br l
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
enX0             DOWN           00:16:3e:7e:93:4c <BROADCAST,MULTICAST>
And guess what the problem is - by inspecting /etc/network/interfaces:
# VM /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
# post-up ethtool -K eth0 tx off
That SILLY interface rename!!! But how to fix it? Remember my talk about PV Grub and PyGrub? We have to fix it inside the guest.
- edit /boot/grub/menu.lst this way:
diff -u /boot/grub/menu.lst{.orig,}
--- /boot/grub/menu.lst.orig	2023-12-11 07:33:29.000368207 +0000
+++ /boot/grub/menu.lst	2023-12-11 07:34:30.376368207 +0000
@@ -15,7 +15,7 @@
 ## e.g. kopt=root=/dev/hda1 ro
 ##      kopt_2_6_8=root=/dev/hdc1 ro
 ##      kopt_2_6_8_2_686=root=/dev/hdc2 ro
-# kopt=root=/dev/xvda2 ro elevator=noop
+# kopt=root=/dev/xvda2 ro elevator=noop net.ifnames=0
 ## default grub root device
 ## e.g. groot=(hd0,0)
- to apply our net.ifnames=0 to all menu entries run inside the VM:
update-grub
less /boot/grub/menu.lst # verify that net.ifnames=0 was applied to each menu entry
- WARNING! In the case of an Ubuntu 22.04 LTS PV VM you have to install grub-legacy-ec2 (apt-get install grub-legacy-ec2) and run update-grub-legacy-ec2 to update all entries in /boot/grub/menu.lst
- and type reboot to reboot the VM
- after reboot it looks much better:
# run inside VM
$ ip -br l
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
eth0             UP             00:16:3e:7e:93:4c <BROADCAST,MULTICAST,UP,LOWER_UP>
$ ip -br -4 a
lo               UNKNOWN        127.0.0.1/8
eth0             UP             192.168.0.101/24
- so finally we can use that VM!
Where are the data stored? In the Volume Group xen we can see these:
$ sudo lvs xen
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
deb12-pv-disk xen -wi-ao---- 8.00g
deb12-pv-swap xen -wi-ao---- 512.00m
So a Logical Volume (LV) was created with the VM name as prefix (deb12-pv) plus a -disk suffix for the root filesystem and a -swap suffix for the VM swap.
Please note that Xen disk devices are used directly, without partitions - so the LV itself contains the filesystem and/or swap. This is again mainly because there is no BIOS or UEFI (in the case of PV), so there is no reason to introduce legacy partitioning. Here are a few commands inside the VM to illustrate (and see the dom0 mount sketch after the lsblk output):
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
xvda1 202:1 0 512M 0 disk [SWAP]
xvda2 202:2 0 8G 0 disk /
$ lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
xvda1
swap 1 281e0b0e-1441-4633-9de6-c0ba0c6f671b [SWAP]
xvda2
ext4 1.0 ada56ead-07c7-468f-9f66-0a032354e7e1 6.5G 11% /
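A practical consequence (my note, not from the guide - only do this while the guest is shut down, otherwise you risk filesystem corruption) is that you can mount a guest's root LV directly from the Host for inspection or rescue:
# on the Host, with the guest STOPPED (sudo xl shutdown deb12-pv)
sudo mount /dev/xen/deb12-pv-disk /mnt   # the LV holds an ext4 filesystem directly
ls /mnt/etc/network/                     # inspect or fix guest files
sudo umount /mnt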
Windows does not support Xen PV, so HVM is the only choice (you can verify that your CPU supports hardware virtualization - see the check below). I will try the "Windows Server 2008 R2 Trial" image
7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
that is still available:
- https://archive.org/download/wserver2008r2/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
- parent webpage: https://archive.org/details/wserver2008r2
- WITHOUT ANY WARRANTY!
I stored it on Host as:
/opt/isos/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso
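HVM guests need hardware virtualization support (AMD-V/SVM or Intel VT-x). A quick check on the Host - just a sketch; look for "hvm" in the virt_caps line:
# count CPU virtualization flags (svm = AMD-V, vmx = Intel VT-x); 0 means no HVM support
grep -c -E 'svm|vmx' /proc/cpuinfo
# ask Xen itself what it can do
sudo xl info | grep -E 'virt_caps|xen_caps'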
Now we have to follow the Windows HVM part of the Beginners Guide (see the link in the config comment below). First create an LV (Logical Volume) - unlike with a PV guest it will represent the whole virtual disk, including a partition table completely managed by the guest (using the xen VG - Volume Group):
$ sudo lvcreate -n win2008r2 -L 40G xen
Logical volume "win2008r2" created.
$ sudo lvs xen
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
deb12-pv-disk xen -wi-a----- 8.00g
deb12-pv-swap xen -wi-a----- 512.00m
ubu22-pv-disk xen -wi-a----- 8.00g
ubu22-pv-swap xen -wi-a----- 512.00m
win2008r2 xen -wi-a----- 40.00g
Additionally we have to MANUALLY prepare an ISO with PV drivers (some of them - notably the boot disk driver - are nearly impossible to change later). To prepare the PV drivers ISO do this:
sudo apt-get install mkisofs
mkdir -p ~/pvdrivers/downloads
cd ~/pvdrivers/downloads
for i in bus cons hid iface net vbd vif vkbd;do curl -fLO https://xenbits.xen.org/pvdrivers/win/xen$i.tar;done
mkdir ../image
for i in *.tar;do tar xf $i -C ../image/;done
cd ../image
curl -fLO https://web.archive.org/web/20160612060810if_/http://updates.software-univention.de/download/addons/gplpv-drivers/gplpv_Vista2008x64_signed_0.11.0.373.msi
cd ..
mkisofs -o xen-drivers.iso -V XEN_DRIVERS -r -J image/
cp xen-drivers.iso /opt/isos/
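To double-check what ended up on the ISO before attaching it to the guest, you can loop-mount it on the Host (a quick sanity check, nothing Xen-specific):
# verify the ISO contents via a read-only loop mount
sudo mount -o loop,ro /opt/isos/xen-drivers.iso /mnt
ls /mnt        # should list the unpacked xen*.tar contents and the gplpv .msi
sudo umount /mnt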
Now we have to create the "Domain" (or VM) definition file /etc/xen/win2008r2.cfg with these contents:
# /etc/xen/win2008r2.cfg
# from: https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide#Creating_a_Windows_HVM_.28Hardware_Virtualized.29_Guest
type = 'hvm'
memory = 4096
vcpus = 1
name = "win2008r2-hv"
vif = ['bridge=xenbr0,type=vif']
disk = [
'phy:/dev/xen/win2008r2,hda,w',
'file:/opt/isos/7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso,hdc:cdrom,r',
'file:/opt/isos/xen-drivers.iso,hdd:cdrom,r'
]
xen_platform_pci = 1
acpi = 1
device_model_version = 'qemu-xen'
boot="d"
sdl=0
# two lines below help to avoid lagging mouse pointer
usb=1
usbdevice=['tablet']
serial='pty'
vnc=1
vnclisten=""
vncpasswd=""
Please note that the key to using PV drivers later is this:
xen_platform_pci = 1
It will expose the Xen platform device on PCI, which lets the Xen PV drivers hook into Windows boot. See https://wiki.xenproject.org/wiki/Xen_Linux_PV_on_HVM_drivers#Enable_or_disable_Xen_Project_PVHVM_drivers_from_dom0
Start guest using:
sudo xl create /etc/xen/win2008r2.cfg
On your Client Machine (tested on openSUSE LEAP 15.5 running Xfce) run a VNC client:
sudo zypper in tigervnc
vncviewer IP_OF_XEN_HOST:5900
TIP: Use F8 to display TigerVNC menu where you can send Ctrl-Alt-Del and other special keys.
And install Windows server as usual. Thanks to IDE emulation there should be no problem with supported hardware - both emulated target disk and installation DVD should be working.
NOTE: Any time Windows blanks the screen or reboots, the VNC connection will drop. In such a case just run vncviewer again (or use a reconnect loop like the one below).
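A lazy reconnect loop I find handy (just a sketch, not part of the guide - replace IP_OF_XEN_HOST, stop it with Ctrl-C):
# keep re-launching the viewer whenever the guest drops the VNC connection
while true; do vncviewer IP_OF_XEN_HOST:5900; sleep 2; done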
The Windows Guest works fine, but under IDE emulation the disk speed is limited to 3MB/s (basic PIO mode on emulated IDE), which is not much fun. So we need to install PV Drivers. Such an HVM guest with installed PV drivers is sometimes called PVHVM.
WARNING! In the case of Windows Server 2008 R2 only this driver worked without problems:
If you followed my instructions you should have gplpv_Vista2008x64_signed_0.11.0.373.msi included on /opt/isos/xen-drivers.iso, which is mounted as the second IDE DVD drive under the Windows guest.
Strongly recommended:
- shutdown Windows guest
- Create LVM snapshot:
sudo lvcreate --snapshot --size 20g --name snap-win2008r2 xen/win2008r2
- Start Windows VM
- Now simply run gplpv_Vista2008x64_signed_0.11.0.373.msi from the second DVD - it will ask for a restart - confirm
- Finally verify that both the LAN driver and the HDD driver are PV:
  - run Device Manager (devmgmt.msc)
  - expand Disk Drives - you should see a name like XEN PV DISK SCSI disk device
  - expand Network Adapters - you should see Xen Net Device Driver
- Also, if usb=1 and usbdevice=['tablet'] were specified in the .cfg, it should fix the problem with a "second lagging mouse pointer".
If something goes wrong you can merge the snapshot (reverting the disk) and create it again using:
# Stop VM before merging snapshot!!!
# How to revert and again create snapshot
sudo lvconvert --merge /dev/xen/snap-win2008r2
sudo lvcreate --snapshot --size 20g --name snap-win2008r2 xen/win2008r2
When done you may edit /etc/xen/win2008r2.cfg and replace boot="d" with boot="c" to boot directly from the HDD.
Using official drivers:
- I had no luck with https://xenbits.xen.org/pvdrivers/win/
- Windows 2008 R2 complains that there is no signature or an invalid signature
- some resources
Additionally there are also XCP-ng drivers:
- https://github.com/xcp-ng/win-pv-drivers/releases
- not yet tested.
And finally there are XenServer (or XenCenter) drivers (which can even be enabled via Windows Update).
Here is an example /etc/xen/ubu22-pvhvm.cfg for a PVHVM Ubuntu Server 22.04 LTS:
type = 'hvm'
memory = 1024
vcpus = 1
name = "ubu22-pvhvm"
vif = ['bridge=xenbr0,type=vif']
disk = [
'phy:/dev/xen/ubu22-pvhm,hda,w',
'file:/opt/isos/ubuntu-22.04.3-live-server-amd64.iso,hdc:cdrom,r',
]
xen_platform_pci = 1
acpi = 1
device_model_version = 'qemu-xen'
boot="c"
sdl=0
serial='pty'
vnc=1
vnclisten=""
vncpasswd=""
Please note that the boot may sometimes look stuck; however, it should resume in a few minutes...
- xl top or xentop - to see CPU, Memory, Disk and Network usage. Please be aware that even the Host runs as a VM, so plain top running on the Host will not reveal resource usage (for example CPU) of guests (!)
- xl info - basic Xen system information
- xl uptime - just shows the Uptime for all VMs including Domain-0 (your Xen Host)
CPU related commands:
- xl vcpu-list
- xl claims
- xl vcpu-pin xxx
- please see https://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance for details.
While running my favorite MySQL benchmark test-ATIS (see Simple MySQL benchmarks and BTRFS for details) I noticed a strange thing:
- while the mysql benchmark eats 99% CPU in the guest
- there is no comparable CPU usage in the Host (CPU is just around 3%)
- there was nearly zero I/O on the Guest or Host.
Why does the Host not see the Guest's CPU usage?
Because our Host also runs as a VM (Domain-0), so it sees only its own CPU usage. To reveal how all VMs are using the CPU one has to use xl top (or the equivalent xentop) or a similar command which queries the Xen Hypervisor (which sits below even our main running Linux kernel).
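If you want a non-interactive snapshot (for example for logging), xentop has a batch mode - a sketch, verify the flags on your xentop version:
# one batch-mode iteration of xentop - prints per-domain CPU, memory and network stats
sudo xentop -b -i 1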
Additionally, I'm curious why an Ubuntu 22.04 LTS PV guest running test-ATIS seems to be 2x slower than under Proxmox/KVM (total time 30s as PV under Xen, using ext4 on LVM, while a similar setup under Proxmox takes only 15s).
I also tried PV-HVM Ubuntu and got even worse results - 51s (!)
I also did a CPU Pinning experiment:
- from https://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance
- I have only 2 physical cores (AMD X2 - Opteron, without nested paging NPT, only SVM is available) so I will:
  - reserve Core 0 for Xen (Domain-0)
  - reserve Core 1 for VMs (other Domains)
- To dedicate Core 0 to Xen (Domain-0) we have to append dom0_max_vcpus=1 dom0_vcpus_pin to the hypervisor parameters in /etc/default/grub. IMPORTANT! You must use the dedicated variable with the _XEN_ keyword, for example:
# These parameters are for Xen, not for the Linux kernel!
GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=1 dom0_vcpus_pin"
- after reboot try sudo xl vcpu-list - if these arguments worked, there should be only a single line with Domain-0 and a single CPU=0
- in the VM config (/etc/xen/NAME.cfg) disallow use of CPU 0:
vcpus = 1
cpus = "all,^0"
- once you boot the VM, sudo xl vcpu-list should always show CPU=0 assigned to Domain-0 and CPU=1 to the VM (see the runtime pinning sketch below).
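You can also experiment with pinning at runtime, without editing the config - a sketch using xl vcpu-pin (the domain name and CPU numbers are just examples):
# pin vCPU 0 of guest deb12-pv to physical CPU 1 (hard affinity)
sudo xl vcpu-pin deb12-pv 0 1
# verify current pinning of all domains
sudo xl vcpu-list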
However there is no measurable improvement regarding test-ATIS under Ubuntu 22.04 LTS with mariadb-server: around 32s for the PV VM and around 51s for PV-HVM.
There are several task schedulers under Xen. To tune them, you first need to find out which one is in use. In my case:
$ sudo xl info | fgrep xen_scheduler
xen_scheduler : credit2
So we can use only options that are valid for xl sched-credit2 (try man xl for details); we can query them using:
$ sudo xl sched-credit2
Cpupool Pool-0: ratelimit=1000us
Name ID Weight Cap
Domain-0 0 256 0
ubu22-atis 3 256 0
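Tuning is then mostly a matter of adjusting the per-domain weight (default 256, as seen above) - a sketch, assuming the -d/-w options of xl sched-credit2 behave the same on your Xen version:
# give the benchmark VM twice the default weight
sudo xl sched-credit2 -d ubu22-atis -w 512
# show the new settings
sudo xl sched-credit2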
Research LVM-thin support. Should be there since xen-tools 4.8+:
- https://github.com/xen-tools/xen-tools/issues/47
- https://github.com/xen-tools/xen-tools/commit/5587dc796a24d6b7c7d71b63b9944568c57fc5fd
Add support for LVM thin provisioning
This adds a parameter '--lvm_thin' to xen-create-image that allows you to specify the thin pool where the thin logical volumes will be created.
If '--lvm_thin' is not specified, nothing will change and thick provisioned volumes will be created.
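Based on that commit message, usage would presumably look like this (untested on my side - the thin pool name and the exact --lvm_thin syntax are assumptions taken from the commit):
# create a thin pool inside the existing "xen" VG (here 100G, adjust to taste)
sudo lvcreate -L 100G --thinpool thinpool xen
# let xen-create-image allocate thin volumes from that pool
sudo xen-create-image --hostname=deb12-thin --memory=512mb --vcpus=1 \
  --lvm=xen --lvm_thin=thinpool --size=8gb --dhcp --pygrub --dist=bookworm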
- https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html - mentions usbdevice=['tablet'] that helps coordinate the mouse pointer.
- https://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance
- https://xenbits.xen.org/people/royger/talks/performance.pdf
Copyright © Henryk Paluch. All rights reserved.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License