- setting up Xen on debian
- xen tips
- upgrading kernels on Xen instances
- odd xen interactions
Xen is a popular free virtualization environment. The hardware boots a "hypervisor", which in turn loads a kernel as the first virtualized machine (known as the dom0). The dom0 has privileged access to the hardware, and can instruct the hypervisor to allocate access to the hardware to the other virtualized machines (known as domUs).
setting up Xen on debian
On Debian etch (4.0r1), the standard kernels don't have the capability to run under Xen virtualization yet, so you need specialized xen-specific kernels.
The simplest way to set up a Xen environment after installation is:
aptitude install xen-linux-system-2.6.18-5-xen-686 linux-image-2.6-xen-686 bridge-utils libc6-xen
If you're using amd64 as your architecture, there is no libc6-xen package:
aptitude install xen-linux-system-2.6.18-5-xen-amd64 linux-image-2.6-xen-amd64 bridge-utils
Currently, 686 and amd64 are the only two architectures supported by xen in debian etch.
Once these packages are installed, you'll want to make a handful of modifications:
limit memory consumed by dom0
To do this, modify /boot/grub/menu.lst: change the # xenhopt= line to include a dom0_mem value (specified in kilobytes). Here's an example giving 128MiB to the dom0:
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=131072
Do not uncomment these lines! The # xenhopt= line is used by update-grub to configure the "automagic" stanzas. To make sure the change propagates to those stanzas, run update-grub as root.
You should see the changes reflected in the automagic stanzas that include the Xen hypervisor.
sending the hypervisor output to the serial line
Use the com1=115200,8n1 option to get the xen hypervisor to talk to the first serial line. Pass it to the hypervisor the same way as the dom0_mem argument above. If you're doing this, you'll probably also want to make sure that your dom0 kernel boots with a console= argument sending its output to the physical serial line.
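Combining this with the dom0_mem setting above, the relevant menu.lst lines might look like the following (the exact console arguments are a sketch, not taken verbatim from a working config):

```
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=131072 com1=115200,8n1 console=com1
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=console=ttyS0,115200n8
```

As before, these lines stay commented, and you run update-grub as root afterwards to propagate them into the automagic stanzas.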
While on the serial console, you can use Ctrl-O to send a magic sysrq to the dom0; this is the same combination you'd use to send a sysrq to a domU from its console.
ask for bridged networking
There are a lot of different virtualized networking environments you might want to use with Xen. By far the simplest is the "bridged" environment. This emulates having each of your domUs on a network hub. Each machine can see the traffic entering and leaving each other machine. This makes it more likely that individual machines could spoof each other or eavesdrop on each other on the network, so if you need more network isolation of the machines, you should consider using a fancier virtualized networking arrangement.
To set up a bridged virtualized network, modify /etc/xen/xend-config.sxp on the dom0. Comment out the line that says:

(network-script network-dummy)

and uncomment the line that says:

(network-script network-bridge)
make sure that /var/lib/xen is big enough
On shutting down the dom0, debian's initscripts save memory images of all currently-running domUs to /var/lib/xen. However, if the partition holding /var/lib/xen isn't big enough to hold the fully allocated RAM of the physical hardware, the initscripts fail to actually shut down, and appear to hang indefinitely. If you use LVM, it's a good idea to allocate a logical volume the size of your physical RAM plus a decent safety margin (+25% works for me), put a reasonable filesystem on it, and mount it at /var/lib/xen.
If you get this set up early on, it might save you a few panic-ridden hours on a remote reboot.
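As a hedged sketch of the LVM setup described above (the volume group name vg0 and volume name xensave are hypothetical, and ext3 is just one reasonable filesystem choice):

```shell
# size the volume at physical RAM + 25% (all figures in MiB)
RAM_MIB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
LV_MIB=$(( RAM_MIB + RAM_MIB / 4 ))
# carve out the logical volume and put a filesystem on it
lvcreate -L "${LV_MIB}M" -n xensave vg0
mkfs -t ext3 /dev/vg0/xensave
# mount it at /var/lib/xen, now and at every boot
echo '/dev/vg0/xensave /var/lib/xen ext3 defaults 0 2' >> /etc/fstab
mount /var/lib/xen
```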
install xen-tools
xen-tools is a very convenient package for managing a Xen server. Install it and read through /etc/xen-tools/xen-tools.conf, changing values to match how you plan to run the machine. We at CMRG recommend using LVM wherever possible, so consider that.
With all of the above changes made, you should now be able to reboot your machine, and it will come up as the dom0.
granting access to serial hardware to a domU
You might have some physical serial ports that you want to give control over to a domU. One reason to do this would be if that serial port was connected to the console of a physical machine which you wanted to give the domU owner access to.
You need to identify the interrupts and I/O ports associated with the specific serial port, and offer them to the domU via its config file (/etc/xen/domUname.cfg on the dom0). An example config snippet for passing ttyS1 (aka COM2):
## pass ttyS1 into the domU
irq = [ 3 ]
ioports = [ "2f8-2ff" ]
Note that with a stock etch image in the domU, I needed to install udev to coax the domU's device nodes (/dev/ttyS1, etc.) into existence.
escaping from a xen console
If you're on the dom0 and you've attached to the virtual console of a domU with xm console domUname, you can detach from the console just as if you were detaching from an old-school telnet session: use Ctrl+].
passing pci devices to a domU
Debian xen has built-in support for passing pci devices to a domU using the "pciback" module. The following xen kernel config options are enabled by default:
CONFIG_XEN_PCIDEV_FRONTEND=y   # for domU kernels
CONFIG_XEN_PCIDEV_BACKEND=y    # for dom0 kernels
You first have to find the pci device id of the devices you want to pass to the domU. This is most easily done with lspci. The id is given in the first column, usually in the form "bus:slot.func" (e.g. "00:0a.0"), referred to as "PCIDEV?" below. Sometimes the device needs to be referred to by its full address form "domain:bus:slot.func" (e.g. "0000:00:0a.0"), referred to as "DPCIDEV?" below.
Before passing a pci device to a domU, it must first be hidden from dom0. You can remove the pci devices manually from the dom0 kernel:
# remove the device from the kernel module controlling it
DRIVER=$(dirname $(find /sys/bus/pci/drivers -iname "$DPCIDEV0"))
echo -n $DPCIDEV0 > "$DRIVER"/unbind
and then add the devices to the pciback list:
echo -n $DPCIDEV0 > /sys/bus/pci/drivers/pciback/new_slot
echo -n $DPCIDEV0 > /sys/bus/pci/drivers/pciback/bind
To have this happen automatically at boot, you just need to pass some pciback kernel parameters to the dom0. With grub, this is best done by adding the needed parameters to the xenkopt config line in the grub menu.lst:
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=pciback.hide=(PCIDEV0)(PCIDEV1)... pciback.verbose_request=1
Don't forget to run update-grub. Reboot the machine and make sure the changes take effect.
Now add the pci devices to the domU config with a pci=[...] line:
pci=['PCIDEV0', 'PCIDEV1', ...]
After creating the domU, lspci on the domU should show the passed through devices.
upgrading kernels on Xen instances
When a new kernel comes in, you'll want to make sure the new kernel is available to all your virtual machines (dom0 and all the domUs). This can be tricky to get right. The kernel for the dom0 is properly handled by grub. But the kernels for the domUs need to be updated in their config files in /etc/xen/domUname.cfg, and the new kernel package also needs to be installed inside each domU (for its modules, etc.).
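For example, after an upgrade from 2.6.18-5 to 2.6.18-6 (version numbers hypothetical), the relevant lines of /etc/xen/domUname.cfg would change to something like:

```
kernel  = '/boot/vmlinuz-2.6.18-6-xen-686'
ramdisk = '/boot/initrd.img-2.6.18-6-xen-686'
```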
I've also found, in etch at least, that you'll need to restart the domU via the dom0 to pick up the new kernel. That is, simply running
root@domU:~# shutdown -r now
is not enough to pick up changes in the configuration file. Instead, you should do:
root@dom0:~# xm shutdown domU
root@dom0:~# xm create domU.cfg
And remember, you probably want to make sure the new kernel package (or at least the associated modules) is properly installed on the domU before the reboot, since /lib/modules won't necessarily be properly populated inside the domU otherwise.
odd xen interactions
These aren't bugs exactly, but weird hiccups related to how the system interoperates with non-virtualized debian systems:
linux-image-2.6 packages pull in initramfs-tools, busybox, etc unnecessarily within the domU
On a normal debian system, this is a good idea: the system needs to be capable of managing its own kernel and initramfs, and passing them off to the bootloader. But since a Xen domU doesn't handle its own bootloader (afaik), all of these extra packages aren't necessary. In fact, the only files needed inside the system are the modules associated with the kernel, so that new modules can be loaded after boot. The kernel and the initramfs itself are stored outside the domU and are fed to the Xen instance by the dom0.
The best way to work around this appears to be to just install the linux-modules-2.6.X-xen-arch package directly, since that package doesn't pull in the other dependencies, and it provides the modules needed. However, there is no metapackage linux-modules-2.6-xen-686 (or linux-modules-2.6-xen-amd64), so you'll need to install each ABI-distinct version independently.
vif random allocation of MAC address causes trouble with udev's persistent-net-generator.rules
If you don't specify an explicit MAC address in your domU's configuration file (/etc/xen/domUname.cfg), the Xen daemon will allocate a random MAC address in the `00:16:3e:xx:xx:xx` range. But if you run stock debian udev, it has a set of rules (/etc/udev/persistent-net-generator.rules) to make sure that network interfaces are persistently named, by recording MAC addresses and assigning numerically-increasing ethN names to interfaces as they're encountered.
Since a domU gets a novel MAC address at each boot, it will be assigned a new interface name each time. This means that all your scripts that want to do something with eth0 will fail, because the new (and only) interface is named eth1 (or eth2, etc.).
The way around this? After your domU boots for the first time, go back and specify the MAC address explicitly in the relevant config file on the dom0. An example line would look like this (though you may not want to configure the IP address statically as well):
vif = [ 'ip=10.10.10.10, mac=00:16:3E:5D:27:83' ]
You can sort all of this out exclusively from the dom0 if you like, just do:
root@dom0:~# xm network-list domUname
Idx BE MAC Addr.         handle state evt-ch tx-/rx-ring-ref BE-path
0   0  00:16:3E:5D:27:83 0      4     8      523 /522        /local/domain/0/backend/vif/1/0
root@dom0:~#
You can pull the info from the MAC Addr. column, and put it in /etc/xen/domUname.cfg.
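If you want to script that, here's a hedged one-liner, assuming the two-line output format shown above (header line, then one data line with the MAC in the third column):

```shell
xm network-list domUname | awk 'NR==2 {print $3}'
```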
xen doesn't like symlinks as kernel/ramdisk options
I tried to specify a standard symlink-to-latest-kernel arrangement in the kernel and ramdisk options within the xen configs. The xen instances would not boot, though (I don't have the error message handy). I had to go back to specifying the underlying files.
If you were able to specify symlinks, it would have (at least) the following advantages:
- by referring to a centralized symlink, you could conveniently mass-upgrade many different xen instances
- by using several independent symlinks, you could make it easier to automate specific upgrades, since changing a symlink can be done cleanly from simple shell commands. Editing a file is more complicated.
- by using independent symlinks controlled by different users, you could let a user change the kernel/initrd used in their own xen instance without being able to change other parameters (memory? network?) in the config file, which would be uneditable by them.
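For the record, the arrangement I tried (which failed to boot; the symlink names and kernel versions here are hypothetical) looked like:

```shell
# shared symlinks pointing at the current kernel and initrd
ln -sfn /boot/vmlinuz-2.6.18-5-xen-686    /boot/vmlinuz-xen
ln -sfn /boot/initrd.img-2.6.18-5-xen-686 /boot/initrd-xen
# referenced from each domU config:
#   kernel  = '/boot/vmlinuz-xen'
#   ramdisk = '/boot/initrd-xen'
```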