Xen 4.1 On Linux Kernel 3.2
March 26, 2012: Original development and publication date.
April 12, 2013: Minor revisions to give credit to the referenced wiki and to fix spelling errors.
This document describes how to install and configure Xen 4.1 virtualization on a Linux 3.2 kernel, with a headless dom0 and paravirtualized Linux kernels for the domUs.
Table of Contents
- 1. Purpose of This Document
- 2. Assumptions about the Audience and the Content
- 3. Specific Configurations, Notes, and Caveats
- 4. More Information
- 5. Background
- 6. Installation and Configuration
- 6.1. Linux Installation
- 6.2. Installing Xen Components
- 6.3. Configure dom0 Kernel
- 6.4. Configure GRUB
- 6.5. Configure dom0 Networking
- 6.6. Install domU File System (/root) and Configure
- 6.7. Configure domU Kernel
- 6.8. Create Xen domU Config File
- 6.9. Start domU and Connect to Console
- 6.10. Final Thoughts
- 7. Further References
This document describes how to install Xen 4.1 on a Linux 3.2.x kernel.
This document intends to provide both general and specific guidance on installing the Xen hypervisor with a Linux dom0 (host) and Linux domUs (guests). It will not describe in detail how to set up and install a Linux system/server: there is very good documentation on the web for that, the procedure varies from distribution to distribution, and this document will in general try to stay away from distribution-specific nuances (please see the section titled "Specific Configurations, Notes, and Caveats" for the specifics of the system used while writing this document).
However, it is not entirely possible to stay away from distribution-specific items, as this document intends to provide fairly specific information about the network setup required by Xen 4.1.1 for guests.
This document will not go into specifics of building the Xen hypervisor and tools as there is also good documentation on this available and most modern Linux distributions come with some sort of package management system (such as portage for Gentoo or yum for others) that automates the acquisition, building (if needed), and installation of the Xen hypervisor and associated tools.
While I have tried to be as detailed as possible (most would probably say verbose, or overly so), some of the tasks below require additional knowledge. For instance, I assume that the reader either knows some of the minutiae of building a customized kernel or that other sources on the web will be sufficient to get the reader through this document. I have provided the essential pieces of the custom kernel configuration for dom0 and domU; however, if you are not familiar with "make menuconfig" or one of its alternatives, and further lack an understanding of the drivers and configuration needed to successfully build a kernel (without Xen support) for your specific hardware, then this is likely to be a frustrating exercise. There are also some specific items around building the root file system for the domUs that might trip some people up. Again, I have found good sources for this across the internet, and now that I am finishing the bulk of the content for this article (the heavy lifting for me), I might go back and add sources for the specific aspects where I assume the reader is already familiar.
I also assume that the reader is familiar with general Linux administration and configuration.
One additional note: there are many disparate ways of configuring and building a Linux system. I have not attempted to enumerate them or in any way provide a comprehensive solution; however, I have tried to call out instances where I know I am using distribution-specific (Gentoo) items. If you are using a different distribution, a different boot loader, or a different kernel bootstrap process (for instance, an initramfs), you should still be able to get some value out of this document, but the specifics will have to be changed for your needs, and I assume that in general the reader will understand these differences. This is but one way of many to "skin the cat"...
To be a little more specific than the title, this document describes the use of Xen 4.1.1 with kernel version 3.2.1 (Gentoo R2). I am currently booting with GRUB 0.97-r10. My current domUs are built with Linux kernel version 3.2.12 (Gentoo). I don't use "genkernel" (the Gentoo tool for configuring kernels), so I pretty much always custom-configure my kernels for the target machine. I also don't boot with an initial RAM disk or anything too specialized (just straight into the kernel bzImage).
I prefer to use Gentoo and have been using Gentoo for many years after moving away from Slackware sometime in the Linux kernel 1.0 era. Some of the information here may not directly apply to other flavors of Linux, but if you are familiar with setting up your specific distribution and are primarily looking for general information on kernel configuration and basic setup and operations, then you should be good. If you are using Gentoo (and portage), then this document will be particularly relevant assuming you are using the most recent versions of the software.
Further configuration information may be in order: I am utilizing an x86_64 architecture for both the dom0 and domU configurations. My domUs are configured as paravirtualized (PV) guests rather than utilizing full virtualization (HVM).
Additionally, I opted to use bridged routing for my network configuration rather than routed or other networking options.
For installation of the domUs (guests), since I knew my specific configuration and had just purchased a 1TB hard drive specifically for this server, I utilized physical disk partitions for the virtual machines rather than a file on the host OS. Now that I am going back and thinking about how I set this up, I might rather have used LVM for managing the guests' disks, as it might have provided a more flexible and manageable solution than physical partitions, so I will leave it to you to determine whether you want to manage physical partitions or use LVM. I will say that I have used LVM in the past, and it provides some very nice features for managing disk volumes (logically, of course) that you may wish to take advantage of.
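For anyone who goes the LVM route instead, the setup is roughly the following sketch; the device /dev/sdb1 and the volume group and logical volume names are hypothetical and need to match your own disk layout:

```
# pvcreate /dev/sdb1
# vgcreate vg_domu /dev/sdb1
# lvcreate -L 20G -n domu1-root vg_domu
# lvcreate -L 2G -n domu1-swap vg_domu
```

The resulting /dev/vg_domu/domu1-root can then be handed to a domU in its config file just as a physical partition would be.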
I would also like to offer, here and now, very upfront, that I am in no way an expert. My only claim here is that I managed to get it to work and this is how I did it. I am sure there are optimizations that can be made, and that there are far superior experts out there. If you are looking for the most tweaked, absolute-minimal-overhead, greatest-performance setup, you might want to look elsewhere. Or you could start here and then tweak and munge as you get more into the details.
If you are looking for more information on Xen, I would suggest the Xen.org main site for high-level information on Xen and what is now offered in the Xen "family". For specific information on the Xen hypervisor, you should visit the Xen Hypervisor Project.
For more information on Gentoo, you should visit the Gentoo main page.
I would also like to thank Brandon Lis (KnightAR) and Chris Alfano (Themightychris) for the wiki article http://en.gentoo-wiki.com/wiki/Xen4.1, as this is where I took most of my information. Though it is quite stripped down (compared to this article), it contains most of the meat of what needs to be done; if you find this article too verbose, you might want to use that article instead and come back here when you require more details on a section.
This section of the document provides mostly background and anecdotal information regarding some of the whys and wherefores of what I was attempting to do. This section can generally be skipped for anyone not interested in more narrative information that does not directly apply to the actual installation and configuration of the system.
I originally started using Xen some years ago and had not done any work on Linux or Xen in several years. Having decided to begin doing some systems and development work on my own again, I started looking at my current infrastructure (much reduced in recent years) and initially played some with VirtualBox on my Macs. This was OK, but left me a little unsatisfied, as I have never really enjoyed working with "servers" on a desktop machine...
I found an older Core2 Duo machine at work that had been "disposed" and decided it would do for a start. I shoved 16GB of memory and a shiny new 1TB disk into the machine and decided this would be a good platform for bringing up more than a few virtual machines, which would be more than sufficient for my needs. However, once I started working through the actual installation of the base OS and then thinking about using Xen, I realized that quite a bit had changed. I had not used Xen in the 4.1 era, and the last kernels I had worked with were in the 2.6 and occasionally 2.4 lineage, so I was interested in getting a system up and running with the current state of the technology.
Of course, in getting started, I first turned to the trusty internet for information on how to set up my system. I remembered there being quite a bit of good information; while it took some starts and stops, I had previously gotten some very good results with both Linux and Windows on Xen using HVM. I had never done PV (paravirtualized) guests at that time. Unfortunately, all the information I found was still based on Linux kernels that required applying the Xen patches, rather than more recent kernels where Xen support is part of the mainline. Further, much of the documentation I found was still using the pre-4.x Xen releases. Some of the information left gaping holes, and it was only by piecing together quite a bit of information from numerous sources that I was able to get a working configuration for the way I wanted the system set up. When I threw out a quick note to my Facebook, I quickly got responses indicating that I should publish the work I had done, as others might find it useful.
This section describes the installation and configuration of the system. I will start with some basic information regarding installing the OS prior to installing Xen, as this is how I typically do it and how it was done for this installation. I will then talk about installation of Xen, which is also rather trivial. Next, I will discuss configuration of the kernel that we will use for our dom0 (host, or privileged domain), followed by how to get the Xen hypervisor installed and the GRUB configuration that I used to boot the Xen HV with the configured dom0 kernel. We will then need to do some configuration of the networking on our new dom0. Finally, I will detail the configuration of the domU kernel, the VM (guest) configuration file, and how to start and work with the guests.
One interesting aspect of the installation is that once you have your kernel configured, you can either boot the kernel directly or boot it under the Xen HV. I found this very convenient. One item that also confused me at first was that it is no longer necessary to do anything special to the kernel sources to get Xen support. Everything you need has been built into recent 2.6 kernels, and if you are using a version 3.x kernel, there is no need to patch your kernel sources anymore. Most of the information I found on Xen with Gentoo still mentions emerging the xen-sources, and this is no longer necessary.
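To make that concrete, here is the shape of the GRUB legacy (0.97, menu.lst) entries involved. This is a sketch only: the kernel file name, drive/partition (hd0,0), and root device are illustrative and must match your own layout.

```
title Gentoo Linux 3.2.1 (bare metal)
root (hd0,0)
kernel /boot/kernel-3.2.1-gentoo root=/dev/sda3

title Xen 4.1 / Gentoo Linux 3.2.1 dom0
root (hd0,0)
kernel /boot/xen.gz
module /boot/kernel-3.2.1-gentoo root=/dev/sda3
```

Note that under Xen, xen.gz is the "kernel" GRUB loads, and the same dom0 kernel image you can boot directly is passed as the first module.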
While it is not required to initially install a working system, I prefer this approach for several reasons. First, if I have a working system, then I have something to fall back to if I screw up the configuration of the kernel or the kernel won't boot into the Xen HV for some reason. Also, unless you have a system already installed and configured (or you use a "live CD"), you are going to have a hard time getting the initial file system set up for Xen.
You can do an initial installation, if your distribution provides the capability, and instead of installing the stock kernel, build the customized Xen kernel and configure your system to boot into the Xen HV out of the box. You might also find that the stock kernel from your distribution is already set up to boot into the Xen HV. However, if it is not, or if you are unsure, I would still suggest you install your base distribution and ensure you have a working system with all the device drivers and components that you need.
For me, that involved installing Gentoo and following the Gentoo Handbook. I specifically used the live CD installation for amd64 (this confused me at first too if you have an Intel processor, but amd64 is the 64-bit installation you generally want for both AMD and Intel processors; the separate ia64 build is only for Itanium). Install the system and get it up and running. Poke around and make sure you can access everything you need.
If you plan to use LVM, you should probably also make sure your kernel config is working. You can also install any packages you need for other components of the system. This will also ensure you have GRUB and everything else set up before moving on to installing the Xen HV and the Xen tools.
Installing Xen on a recent portage/Gentoo installation will result in a failed ebuild due to Xen's (apparent) dependency on Python 2.7.x. I have noted the details below, but I wanted to call this out here; I have also documented below how I worked around this issue for my own installation.
Once you have a working Linux installation (or if you are still working from a live CD), we should now be able to install the needed Xen components. I installed both the Xen hypervisor and the Xen tools. If you are following along at home with Gentoo, you will notice that we are not emerging the xen-sources, as this is no longer necessary now that all the code is in the kernel mainline source tree.
A couple of Gentoo notes... First, on my installation, I had Python 3.2 installed by default. The Xen tools apparently don't like versions of Python newer than 2.x. As a result, when I emerged xen, it failed. If you emerge --pretend xen, you will see that the dependency chain pulls in a Python 2.7.x (exactly which depends on where portage is today). You can do one of two things: either directly emerge a Python 2.7.x yourself, or do what I did, which is emerge xen, let it fail, then eselect the Python 2.7 (I'll show you the syntax below) and resume the emerge (or start it over). Yeah, I can be lazy; you should probably just emerge the Python 2.7.x prior to emerging Xen.
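The non-lazy route sketched as commands; the slot atom dev-lang/python:2.7 is my assumption of the right target here, so verify it against the output of emerge --pretend first:

```
# emerge --pretend xen
# emerge dev-lang/python:2.7
# eselect python set python2.7
# emerge xen
```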
I guess you might be saying that Gentoo sucks right now because you can't just emerge Xen. It has to do with several components, and while I am no expert, I recollect having seen this before: the Gentoo powers that be won't monkey around with your config just to get a package to install. I kind of understand, and it is just one of the lovely quirks of living with a system that is good but that doesn't do everything for you.
I don't know if portage does this for you, but you should probably make sure that you have /boot mounted (if it is on a separate partition, which is what I do, and you don't have it auto-mounted in your fstab). Xen will install the hypervisor files to /boot, which, if your (real) /boot isn't mounted, might cause you problems down the road...
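A quick check before you emerge, assuming /boot is a separate partition with an fstab entry (the first half prints nothing and fails if /boot isn't mounted, in which case the second half mounts it):

```
# mount | grep /boot || mount /boot
```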
On Gentoo, we do the following:
# emerge xen
If you were following above and didn't emerge a recent Python, the emerge will fail when it tries to emerge xen-tools-4.1.1. This is OK. As a part of the prior dependency installations, you should have gotten a Python 2.x install. Now we just need to select the older version of Python and all will be well. The syntax, with example output showing the state your system is probably in (you will notice I am running as root (I know, bad monkey); use sudo as appropriate):
# eselect python list
Available Python interpreters:
  python2.7
  python3.2 *
# eselect python set 1
# eselect python list
Available Python interpreters:
  python2.7 *
  python3.2
# emerge xen
As you can see above, I was lazy and just did the "emerge xen" again. You could probably do an "emerge --resume" and be fine. Obviously, if you are using something other than Gentoo (or portage), you will need to do whatever is appropriate to install Xen on your system. One other point to make here: when I emerged xen, it also installed xen-tools (which is what created the problem with Python), so you don't need to make a separate request for installation of xen-tools with Gentoo/portage. If you are using something else (yum, for instance), you might want to ensure that xen-tools gets installed as well (I have almost no experience with yum or other package managers).
If you are not familiar with custom kernel configuration, I would strongly suggest (as I do myself) that you first get a working custom kernel that is not running under Xen. If you don't have all your hardware configured properly, you will likely have a hard time getting the base kernel running, and not knowing whether a problem is a hardware configuration issue or a Xen configuration issue adds to the frustration (I know). For this reason, I pretty much always configure a standard (custom) kernel and ensure all my hardware is recognized and the kernel is running smoothly before adding Xen support and running under the Xen hypervisor (as a dom0, or privileged guest).
This section is actually interchangeable with the previous section as you can configure the kernel and then install the Xen components. Either way, we need to configure the kernel to be a dom0 kernel and enable the Xen paravirtualized drivers and such to allow us to do PV domUs...
If this is your first time configuring a kernel, it might seem a little intimidating, but really, this is one of my favorite things about Linux, and it gets addictive to tweak on one's kernel build. I won't cover full kernel configuration here, as I assume you have previously configured a kernel that had all the drivers you need (particularly for network and disk devices). If you are unfamiliar, I would suggest you do at least one custom kernel before trying to build your Xen dom0 kernel, just to ensure you have all the drivers configured that you will need and you don't get frustrated wondering if it is a Xen issue (and fighting that) when it might just be a misconfigured kernel to begin with...
This is one area where I was able to get some information from the Xen site (Mainline Linux Kernel Configs), although it was a little confusing. Nothing on the page mentions which kernel version these configs came from, and it was unclear what is actually needed in the dom0 kernel config. But this was part of the documentation I used to finally piece the puzzle together. I found it tricky to find exactly where the parameters it mentioned lived in the config (I use "menuconfig"). In particular, in my dom0 kernel, I only selected the "backend" drivers. This is where I leaned heavily on /proc/config.gz (which I always enable on my custom kernels) to figure out what was already enabled and what needed to be enabled (as well as filtering through .config in the /usr/src/linux directory).
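As an example of that workflow, with CONFIG_IKCONFIG_PROC enabled the running kernel's configuration can be inspected directly, which makes it easy to compare against the source tree's .config and the wiki's list:

```
# zcat /proc/config.gz | grep -i xen
# grep -i xen /usr/src/linux/.config
```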
I typically use "make menuconfig" as I am generally working over ssh and don't generally install GUI or GUI libraries on my servers.
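For reference, my usual configure-and-build cycle looks roughly like this; the bzImage destination name is illustrative and just needs to match whatever your GRUB config points at:

```
# cd /usr/src/linux
# make menuconfig
# make && make modules_install
# cp arch/x86/boot/bzImage /boot/kernel-3.2.1-gentoo
```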
From the Xen site above (Mainline Linux Kernel Configs), I was able to find most of the options; however, a few of them I had a hard time finding. If you visit that site, you will find all of the options, and I have provided the relevant help entries for each of the Dom0 configuration options from the Linux 3.2.1 source tree (these should not change much across most of the 3.2.x tree, but YMMV, and I will work to keep this updated as I migrate to newer kernel versions).
Kernel config items have dependencies, and if you aren't finding an item, it is probably because you haven't enabled something on which it depends. The kernel config help that I have provided below will show you all dependencies (parent and child). Where I knew about items that weren't recommended or covered on the Xen kernel config site, I have provided them for your convenience.
It is important to note that Xen dom0 support depends on ACPI support so this must be enabled in your dom0 or you won't see some of these options at all.
One additional configuration prompt you need to enable is bridged networking, as that is how I demonstrate setting up networking below.
[*] Networking support --->
      Networking options --->
        <*> 802.1d Ethernet Bridging
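After booting the new dom0 kernel, you can confirm bridging made it in; the first command should report CONFIG_BRIDGE=y (or =m if you built it as a module), and brctl comes from the bridge-utils package (net-misc/bridge-utils on Gentoo):

```
# zcat /proc/config.gz | grep '^CONFIG_BRIDGE='
# brctl show
```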
There were a number of dependencies for items required for Xen dom0 support that I could not find anywhere. Even using the "/" key in menuconfig resulted in minimal information. As a result I assume these are selected based on other items (such as processor type) and cannot be unselected. I have included these items here for the sake of completeness.
Symbol: X86_IO_APIC [=y]
Type  : boolean

Symbol: XEN_DOM0 [=y]
Type  : boolean

Symbol: PCI_XEN [=y]
Type  : boolean
Selects: SWIOTLB_XEN [=y]
Selected by: XEN_PCIDEV_FRONTEND [=n] && PCI [=y] && X86 [=y] && XEN [=y]

Symbol: XEN_PRIVILEGED_GUEST [=y]
Type  : boolean
While the item in this section is not strictly required, a number of options suggested or called out on the Xen kernel config wiki will not be available to select if you haven't enabled it. This item is generally (in my distro, at least) selected by default, but be aware that disabling it will prevent you from selecting some options.
CONFIG_EXPERIMENTAL:

Some of the various things that Linux supports (such as network drivers, file systems, network protocols, etc.) can be in a state of development where the functionality, stability, or the level of testing is not yet high enough for general use. This is usually known as the "alpha-test" phase among developers. If a feature is currently in alpha-test, then the developers usually discourage uninformed widespread use of this feature by the general public to avoid "Why doesn't this work?" type mail messages. However, active testing and use of these systems is welcomed. Just be aware that it may not meet the normal level of reliability or it may fail to work in some special cases. Detailed bug reports from people familiar with the kernel internals are usually welcomed by the developers (before submitting bug reports, please read the documents <file:README>, <file:MAINTAINERS>, <file:REPORTING-BUGS>, <file:Documentation/BUG-HUNTING>, and <file:Documentation/oops-tracing.txt> in the kernel source).

This option will also make obsoleted drivers available. These are drivers that have been replaced by something else, and/or are scheduled to be removed in a future kernel release.

Unless you intend to help test and develop a feature or driver that falls into this category, or you have a situation that requires using these features, you should probably say N here, which will cause the configurator to present you with fewer choices. If you say Y here, you will be offered the choice of using features or drivers that are currently considered to be in the alpha-test phase.

Symbol: EXPERIMENTAL [=y]
Type  : boolean
Prompt: Prompt for development and/or incomplete code/drivers
Defined at init/Kconfig:32
Location:
  -> General setup
The following configuration items are found under the "Processor type and features" submenu under the main menu.
CONFIG_PARAVIRT_GUEST:

Say Y here to get to see options related to running Linux under various hypervisors. This option alone does not add any kernel code. If you say N, all options in this submenu will be skipped and disabled.

Symbol: PARAVIRT_GUEST [=y]
Type  : boolean
Prompt: Paravirtualized guest support
Defined at arch/x86/Kconfig:526
Location:
  -> Processor type and features
Selected by: X86_VSMP [=n] && X86_64 [=y] && PCI [=y] && X86_EXTENDED_PLATFORM [=y]
The "Paravirtualized guest support" option (parent section) must be enabled in order to see the items in this subsection of the menu configuration for the kernel.
CONFIG_XEN:

This is the Linux Xen port. Enabling this will allow the kernel to boot in a paravirtualized environment under the Xen hypervisor.

Symbol: XEN [=y]
Type  : boolean
Prompt: Xen guest support
Defined at arch/x86/xen/Kconfig:5
Depends on: PARAVIRT_GUEST [=y] && (X86_64 [=y] || X86_32 [=n] && X86_PAE [=n] && !X86_VISWS [=n]) && X86_CMPXCHG [=y] && X86_TSC [=y]
Location:
  -> Processor type and features
    -> Paravirtualized guest support (PARAVIRT_GUEST [=y])
Selects: PARAVIRT [=y] && PARAVIRT_CLOCK [=y]
In my configuration, the next item, "Enable paravirtualization code", could not be unselected because I had enabled the sub-menu item "Paravirtualization layer for spinlocks", which is recommended by the Xen kernel config site listed above.
CONFIG_PARAVIRT:

This changes the kernel so it can modify itself when it is run under a hypervisor, potentially improving performance significantly over full virtualization. However, when run without a hypervisor the kernel is theoretically slower and slightly larger.

Symbol: PARAVIRT [=y]
Type  : boolean
Prompt: Enable paravirtualization code
Defined at arch/x86/Kconfig:570
Depends on: PARAVIRT_GUEST [=y]
Location:
  -> Processor type and features
    -> Paravirtualized guest support (PARAVIRT_GUEST [=y])
Selected by: X86_VSMP [=n] && X86_64 [=y] && PCI [=y] && X86_EXTENDED_PLATFORM [=y] || PARAVIRT_TIME_ACCOUNTING [=y] && PARAVIRT_GUEST [=y] || XEN [=y] && PARAVIRT_GUEST [=y] && (X86_64 [=y] || X86_32 [=n] && X86_PAE [=n] && !X86_VISWS [=n]) && X86_CMPXCHG [=y] && X86_TSC [=y] || KVM_CLOCK [=n] && PARAVIRT_GUEST [=y] || KVM_GUEST [=n] && PARAVIRT_GUEST [=y] || LGUEST_GUEST [=n] && PARAVIRT_GUEST [=y] && X86_32 [=n]
I don't know whether the following is required, but it was recommended on the Xen kernel config site, as mentioned above. You might also notice that this item is dependent on the "EXPERIMENTAL" option; if you haven't selected that prior to visiting this section of the kernel config, you won't see this item.
CONFIG_PARAVIRT_SPINLOCKS:

Paravirtualized spinlocks allow a pvops backend to replace the spinlock implementation with something virtualization-friendly (for example, block the virtual CPU rather than spinning). Unfortunately the downside is an up to 5% performance hit on native kernels, with various workloads. If you are unsure how to answer this question, answer N.

Symbol: PARAVIRT_SPINLOCKS [=y]
Type  : boolean
Prompt: Paravirtualization layer for spinlocks
Defined at arch/x86/Kconfig:578
Depends on: PARAVIRT_GUEST [=y] && PARAVIRT [=y] && SMP [=y] && EXPERIMENTAL [=y]
Location:
  -> Processor type and features
    -> Paravirtualized guest support (PARAVIRT_GUEST [=y])
      -> Enable paravirtualization code (PARAVIRT [=y])
The following item must be selected under the item specified here in order to get to the submenu that has the ACPI options. Additionally, as noted on the Xen kernel config wiki, this option is required to see many of the dom0 options specified here.
CONFIG_ACPI:

Advanced Configuration and Power Interface (ACPI) support for Linux requires an ACPI-compliant platform (hardware/firmware), and assumes the presence of OS-directed configuration and power management (OSPM) software. This option will enlarge your kernel by about 70K.

Linux ACPI provides a robust functional replacement for several legacy configuration and power management interfaces, including the Plug-and-Play BIOS specification (PnP BIOS), the MultiProcessor Specification (MPS), and the Advanced Power Management (APM) specification. If both ACPI and APM support are configured, ACPI is used.

The project home page for the Linux ACPI subsystem is here: <http://www.lesswatts.org/projects/acpi/>

Linux support for ACPI is based on Intel Corporation's ACPI Component Architecture (ACPI CA). For more information on the ACPI CA, see: <http://acpica.org/>

ACPI is an open industry specification co-developed by Hewlett-Packard, Intel, Microsoft, Phoenix, and Toshiba. The specification is available at: <http://www.acpi.info>

Symbol: ACPI [=y]
Type  : boolean
Prompt: ACPI (Advanced Configuration and Power Interface) Support
Defined at drivers/acpi/Kconfig:5
Depends on: !IA64_HP_SIM && (IA64 || X86 [=y]) && PCI [=y]
Location:
  -> Power management and ACPI options
Selects: PNP [=y]
The following item is optional (thankfully, since the /proc/acpi files are deprecated)...
CONFIG_ACPI_PROCFS:

For backwards compatibility, this option allows deprecated /proc/acpi/ files to exist, even when they have been replaced by functions in /sys. This option has no effect on /proc/acpi/ files and functions which do not yet exist in /sys. Say N to delete /proc/acpi/ files that have moved to /sys/.

Symbol: ACPI_PROCFS [=y]
Type  : boolean
Prompt: Deprecated /proc/acpi files
Defined at drivers/acpi/Kconfig:46
Depends on: ACPI [=y] && PROC_FS [=y]
Location:
  -> Power management and ACPI options
    -> ACPI (Advanced Configuration and Power Interface) Support (ACPI [=y])
Configuration items found in this section are under the "Device Drivers" submenu.
You might want to take note that this configuration item is dependent upon the "CONFIG_XEN_BACKEND" item, which is configured further down in the "Xen driver support" section below. If that option is not enabled, you likely won't see this option; if you don't, go further down in the config and enable "CONFIG_XEN_BACKEND".
CONFIG_XEN_BLKDEV_BACKEND:

The block-device backend driver allows the kernel to export its block devices to other guests via a high-performance shared-memory interface. The corresponding Linux frontend driver is enabled by the CONFIG_XEN_BLKDEV_FRONTEND configuration option.

The backend driver attaches itself to any block device specified in the XenBus configuration. There are no limits to what the block device can be, as long as it has a major and minor.

If you are compiling a kernel to run in a Xen block backend driver domain (often this is domain 0) you should say Y here. To compile this driver as a module, choose M here: the module will be called xen-blkback.

Symbol: XEN_BLKDEV_BACKEND [=y]
Type  : tristate
Prompt: Xen block-device backend driver
Defined at drivers/block/Kconfig:488
Depends on: BLK_DEV [=y] && XEN_BACKEND [=y]
Location:
  -> Device Drivers
    -> Block devices (BLK_DEV [=y])
Likewise, this configuration item is dependent upon the "CONFIG_XEN_BACKEND" item, which is configured further down in the "Xen driver support" section below. If that option is not enabled, you likely won't see this option; if you don't, go further down in the config and enable "CONFIG_XEN_BACKEND".
CONFIG_XEN_NETDEV_BACKEND:

This driver allows the kernel to act as a Xen network driver domain which exports paravirtual network devices to other Xen domains. These devices can be accessed by any operating system that implements a compatible front end. The corresponding Linux frontend driver is enabled by the CONFIG_XEN_NETDEV_FRONTEND configuration option.

The backend driver presents a standard network device endpoint for each paravirtual network device to the driver domain network stack. These can then be bridged or routed etc in order to provide full network connectivity.

If you are compiling a kernel to run in a Xen network driver domain (often this is domain 0) you should say Y here. To compile this driver as a module, choose M here: the module will be called xen-netback.

Symbol: XEN_NETDEV_BACKEND [=y]
Type  : tristate
Prompt: Xen backend network device
Defined at drivers/net/Kconfig:311
Depends on: NETDEVICES [=y] && XEN_BACKEND [=y]
Location:
  -> Device Drivers
    -> Network device support (NETDEVICES [=y])
Configuration items found in this section are under the "Xen driver support" submenu. The following are in the order that they occurred in my menuconfig. I actually have all the items in this submenu enabled; however, the Xen kernel config wiki (above) does not call out all of them as required.
CONFIG_XEN_BALLOON: The balloon driver allows the Xen domain to request more memory from the system to expand the domain's memory allocation, or alternatively return unneeded memory to the system.

Symbol: XEN_BALLOON [=y]
Type : boolean
Prompt: Xen memory balloon driver
Defined at drivers/xen/Kconfig:4
Depends on: XEN [=y]
Location: -> Device Drivers -> Xen driver support
CONFIG_XEN_SCRUB_PAGES: Scrub pages before returning them to the system for reuse by other domains. This makes sure that any confidential data is not accidentally visible to other domains. It is more secure, but slightly less efficient. If in doubt, say yes.

Symbol: XEN_SCRUB_PAGES [=y]
Type : boolean
Prompt: Scrub pages before returning them to system
Defined at drivers/xen/Kconfig:59
Depends on: XEN [=y] && XEN_BALLOON [=y]
Location: -> Device Drivers -> Xen driver support -> Xen memory balloon driver (XEN_BALLOON [=y])
CONFIG_XEN_DEV_EVTCHN: The evtchn driver allows a userspace process to trigger event channels and to receive notification of an event channel firing. If in doubt, say yes.

Symbol: XEN_DEV_EVTCHN [=y]
Type : tristate
Prompt: Xen /dev/xen/evtchn device
Defined at drivers/xen/Kconfig:70
Depends on: XEN [=y]
Location: -> Device Drivers -> Xen driver support
CONFIG_XEN_BACKEND: Support for backend device drivers that provide I/O services to other virtual machines.

Symbol: XEN_BACKEND [=y]
Type : boolean
Prompt: Backend driver support
Defined at drivers/xen/Kconfig:79
Depends on: XEN [=y] && XEN_DOM0 [=y]
Location: -> Device Drivers -> Xen driver support
CONFIG_XENFS: The xen filesystem provides a way for domains to share information with each other and with the hypervisor. For example, by reading and writing the "xenbus" file, guests may pass arbitrary information to the initial domain. If in doubt, say yes.

Symbol: XENFS [=y]
Type : tristate
Prompt: Xen filesystem
Defined at drivers/xen/Kconfig:87
Depends on: XEN [=y]
Location: -> Device Drivers -> Xen driver support
CONFIG_XEN_COMPAT_XENFS: The old xenstore userspace tools expect to find "xenbus" under /proc/xen, but "xenbus" is now found at the root of the xenfs filesystem. Selecting this causes the kernel to create the compatibility mount point /proc/xen if it is running on a xen platform. If in doubt, say yes.

Symbol: XEN_COMPAT_XENFS [=y]
Type : boolean
Prompt: Create compatibility mount point /proc/xen
Defined at drivers/xen/Kconfig:97
Depends on: XEN [=y] && XENFS [=y]
Location: -> Device Drivers -> Xen driver support -> Xen filesystem (XENFS [=y])
CONFIG_XEN_SYS_HYPERVISOR: Create entries under /sys/hypervisor describing the Xen hypervisor environment. When running native or in another virtual environment, /sys/hypervisor will still be present, but will have no xen contents.

Symbol: XEN_SYS_HYPERVISOR [=y]
Type : boolean
Prompt: Create xen entries under /sys/hypervisor
Defined at drivers/xen/Kconfig:109
Depends on: XEN [=y] && SYSFS [=y]
Location: -> Device Drivers -> Xen driver support
Selects: SYS_HYPERVISOR [=y]
CONFIG_XEN_GNTDEV: Allows userspace processes to use grants.

Symbol: XEN_GNTDEV [=m]
Type : tristate
Prompt: userspace grant access device driver
Defined at drivers/xen/Kconfig:123
Depends on: XEN [=y]
Location: -> Device Drivers -> Xen driver support
Selects: MMU_NOTIFIER [=y]
CONFIG_XEN_PCIDEV_BACKEND: The PCI device backend driver allows the kernel to export arbitrary PCI devices to other guests. If you select this to be a module, you will need to make sure no other driver has bound to the device(s) you want to make visible to other guests. The parameter "passthrough" allows you to specify how you want the PCI devices to appear in the guest. You can choose the default (0), where the PCI topology starts at 00.00.0, or (1) for passthrough if you want the PCI device topology to appear the same as in the host. The "hide" parameter (only applicable if the backend driver is compiled into the kernel) allows you to bind the PCI devices to this module instead of the default device drivers. The argument is the list of PCI BDFs: xen-pciback.hide=(03:00.0)(04:00.0). If in doubt, say m.

Symbol: XEN_PCIDEV_BACKEND [=m]
Type : tristate
Prompt: Xen PCI-device backend driver
Defined at drivers/xen/Kconfig:152
Depends on: PCI [=y] && X86 [=y] && XEN [=y] && XEN_BACKEND [=y]
Location: -> Device Drivers -> Xen driver support
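Taken together, the dom0-side options walked through above boil down to roughly the following fragment of the kernel .config. The values shown are the ones discussed in this section; the ordering of lines in your own .config will differ:

```
# dom0 backend support (fragment of .config)
CONFIG_XEN_BACKEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_GNTDEV=m
CONFIG_XEN_PCIDEV_BACKEND=m
```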
Once you have finished configuring the kernel, you will need to build and install it according to your distribution's documentation, as well as make changes to your boot loader. I typically use GRUB, and I have provided instructions for configuring GRUB with my typical kernel layout below. The typical way to build a kernel is to stay in the same directory where you ran "make menuconfig" and issue the "make && make modules_install" command. For my setup and configuration, I then (after a successful compile) copy the kernel to the boot partition and make the changes to GRUB shown below.
If you are building an x86_64 kernel, the compressed image should be located under "./arch/x86/boot/" and be named "bzImage". You can see my naming convention (I tried to match other conventions I have seen) below for copying it to the boot partition/folder. What might not be obvious, however, is that I utilize symbolic links to manage which kernel I am booting...
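A sketch of that build-and-install cycle, run from the kernel source tree as root. The versioned file name here is illustrative (substitute your own kernel version); the generic "kernel" and "kernel.prev" symlink names are the ones referenced in the grub.conf below:

```shell
# Build the kernel image and install the modules
make && make modules_install

# Copy the image to /boot under a versioned name (name is illustrative)
cp arch/x86/boot/bzImage /boot/kernel-3.2.1-gentoo-dom0

# Rotate the generic symlinks GRUB points at: the old kernel stays
# reachable as "kernel.prev", the new one becomes "kernel"
ln -sf "$(readlink /boot/kernel)" /boot/kernel.prev
ln -sf kernel-3.2.1-gentoo-dom0 /boot/kernel
```

Because GRUB only ever sees the generic names, no grub.conf edit is needed when a new kernel is rolled.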
Once we have the kernel configured and the Xen HV installed, we need to make some changes to GRUB to boot the Xen HV. Once we move to the Xen HV, the boot process becomes: first boot the Xen HV, then boot the dom0 kernel under the care of the Xen HV. In order to accomplish this, I usually set up my grub.conf like so:
1|default 0
2|timeout 30
3|
4|title Xen 4.1.1 / Gentoo Linux 3.2.1
5|root (hd0,0)
6|kernel /boot/xen.gz dom0_mem=2048M
7|module /boot/kernel root=/dev/sda3 zcache
8|
9|title Gentoo Linux Latest
10|root (hd0,0)
11|kernel /boot/kernel root=/dev/sda3
12|
13|title Gentoo Linux Latest (rescue)
14|root (hd0,0)
15|kernel /boot/kernel root=/dev/sda3 init=/bin/bb
16|
17|title Gentoo Linux Previous
18|root (hd0,0)
19|kernel /boot/kernel.prev root=/dev/sda3
You will need to copy your newly built kernel (from our config and build above) to the /boot (or wherever you have it configured in grub) drive/partition/directory and ensure you point grub at the right one. One item to note: I recently saw someone using symbolic links with generic names pointing at specific kernel versions, and while there are probably reasons not to, I rather like this. That is why the kernel names in this file are generic (except in the titles). Besides, didn't I mention that I am lazy? This makes for less typing, and less chance of mis-keying a file name in the grub config...
Configuring networking on dom0 is likely dependent on what distribution you are using, but on Gentoo we need to edit the /etc/conf.d/net script to look like the following:
1|dns_domain="hendrix.local"
2|
3|#config_eth0="10.0.1.10/24"
4|#routes_eth0="default via 10.0.1.1"
5|
6|dns_servers="10.0.1.1"
7|
8|bridge_xenbr0="eth0"
9|config_xenbr0="10.0.1.10/24"
10|routes_xenbr0="default via 10.0.1.1"
Obviously you will want to set the parameters with respect to your own network; however, you will notice two things. First, the two commented-out lines (lines 3 and 4) were my original entries before I configured networking for Xen. Second, I had to add (and configure) the bridged interface for Xen. This means I am using a bridged network for my VMs instead of a routed one. The Xen tool scripts that came out of the box expect a bridged interface named "xenbr0" so that they can automatically create the virtual (bridged) interfaces for the domUs. This can be changed, but that is beyond the scope of this document.
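On Gentoo/OpenRC, the new interface also needs an init script and a quick sanity check once it is up. This is a sketch based on the usual Gentoo convention of symlinking net.* scripts to net.lo; adjust the runlevel handling to your own setup:

```shell
# Gentoo convention: every net.* init script is a symlink to net.lo
ln -s /etc/init.d/net.lo /etc/init.d/net.xenbr0
rc-update add net.xenbr0 default
rc-update del net.eth0 default   # eth0 is now enslaved to the bridge

# Bring the bridge up and sanity-check it
/etc/init.d/net.xenbr0 start
brctl show            # xenbr0 should list eth0 as a member interface
ip addr show xenbr0   # should carry the 10.0.1.10/24 address
```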
One item to note: if you are doing paravirtualized (PV) guests as I have been (and this document is focused on paravirtualized guests and dom0 setup for the same), you will not need to install, set up, or configure a boot loader for the domUs. This makes setup of the domUs (guests) quite a bit easier, IMHO.
There are several ways to do this, but as I was installing to a physical partition (as opposed to a local file in dom0), I simply mounted the partition in the dom0, chrooted into the new environment, and did a stage 3 (Gentoo) install to get the root file system created. Your process may vary depending on how your distribution installs the root file system. This is, in my opinion, one of the nice things about the Gentoo "way" of installing a system: they provide a tarball of the root file system that can be used to get the system up and running.
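The steps above, sketched as shell commands. The partition name follows the domU config later in this document, and the stage3 tarball name and mount point are illustrative:

```shell
# Mount the partition that will become the domU root file system
mkdir -p /mnt/domu
mount /dev/sdb10 /mnt/domu

# Unpack a Gentoo stage3 into it (tarball name is illustrative)
tar xjpf stage3-amd64-latest.tar.bz2 -C /mnt/domu

# Chroot in to finish configuration (fstab, hostname, passwords, ...)
mount -t proc proc /mnt/domu/proc
mount --rbind /dev /mnt/domu/dev
chroot /mnt/domu /bin/bash
```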
There is also a way to do this with a local file (as opposed to a partition) using a loopback device. I have done this before, and there are good sources on the internet that show how to do it, so I don't feel the need to document it in detail here.
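For completeness, the loopback variant amounts to something like the following sketch. The image path and size are assumptions for illustration only:

```shell
# Create a sparse 4 GB image file (path is illustrative)
dd if=/dev/zero of=/home/xen/disks/base.img bs=1M count=0 seek=4096

# Put a filesystem on it (-F: mkfs on a plain file, not a block device)
mkfs.ext4 -F /home/xen/disks/base.img

# Mount it through the loop device and install the root fs as above
mount -o loop /home/xen/disks/base.img /mnt/domu
```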
You will also need to add a line to the /etc/inittab file for the Xen console (which is also configured in the domU config file below). My inittab file is:
1|#
2|# /etc/inittab: This file describes how the INIT process should set up
3|# the system in a certain run-level.
4|#
5|# Author: Miquel van Smoorenburg, <firstname.lastname@example.org>
6|# Modified by: Patrick J. Volkerding, <email@example.com>
7|# Modified by: Daniel Robbins, <firstname.lastname@example.org>
8|# Modified by: Martin Schlemmer, <email@example.com>
9|# Modified by: Mike Frysinger, <firstname.lastname@example.org>
10|# Modified by: Robin H. Johnson, <email@example.com>
11|#
12|# $Header: /var/cvsroot/gentoo-x86/sys-apps/sysvinit/files/inittab-2.87,v 1.1 2010/01/08 16:55:07 williamh Exp $
13|
14|# Default runlevel.
15|id:3:initdefault:
16|
17|# System initialization, mount local filesystems, etc.
18|si::sysinit:/sbin/rc sysinit
19|
20|# Further system initialization, brings up the boot runlevel.
21|rc::bootwait:/sbin/rc boot
22|
23|l0:0:wait:/sbin/rc shutdown
24|l0s:0:wait:/sbin/halt -dhp
25|l1:1:wait:/sbin/rc single
26|l2:2:wait:/sbin/rc nonetwork
27|l3:3:wait:/sbin/rc default
28|l4:4:wait:/sbin/rc default
29|l5:5:wait:/sbin/rc default
30|l6:6:wait:/sbin/rc reboot
31|l6r:6:wait:/sbin/reboot -dk
32|#z6:6:respawn:/sbin/sulogin
33|
34|# new-style single-user
35|su0:S:wait:/sbin/rc single
36|su1:S:wait:/sbin/sulogin
37|
38|# TERMINALS
39|c1:12345:respawn:/sbin/agetty 38400 tty1 linux
40|c2:2345:respawn:/sbin/agetty 38400 tty2 linux
41|c3:2345:respawn:/sbin/agetty 38400 tty3 linux
42|c4:2345:respawn:/sbin/agetty 38400 tty4 linux
43|c5:2345:respawn:/sbin/agetty 38400 tty5 linux
44|c6:2345:respawn:/sbin/agetty 38400 tty6 linux
45|
46|# SERIAL CONSOLES
47|#s0:12345:respawn:/sbin/agetty 9600 ttyS0 vt100
48|#s1:12345:respawn:/sbin/agetty 9600 ttyS1 vt100
49|
50|# What to do at the "Three Finger Salute".
51|ca:12345:ctrlaltdel:/sbin/shutdown -r now
52|
53|# Used by /etc/init.d/xdm to control DM startup.
54|# Read the comments in /etc/init.d/xdm for more
55|# info. Do NOT remove, as this will start nothing
56|# extra at boot if /etc/init.d/xdm is not added
57|# to the "default" runlevel.
58|x:a:once:/etc/X11/startDM.sh
59|
60|h0:12345:respawn:/sbin/agetty 9600 hvc0 screen
The /etc/inittab file above is the standard one for my distribution (Gentoo) except for the last line in the file (line 60), which I had to add to the inittab in the domU in order to get the Xen text console (as you will note in the domU kernel config below) to work properly.
Because I was using a fully paravirtualized guest, I did not need to install, set up, or configure a boot loader (GRUB, in my preferred case) for the domUs. This makes configuration quite a bit easier. I also only set up a physical root partition and a separate swap partition for each VM. I will call this out later as well, but I thought you might want to know this tidbit somewhere around the initial domU configuration.
One of the cool things about the domU kernels is that you can pretty much strip all the hardware drivers (network, disk, etc.) from the kernel, leaving just the Xen frontend drivers. I found this interesting and didn't exactly trust it for the first few kernels I rolled, but once I got more familiar and had at least a bootable PV domU kernel, I decided to try it out and was pleasantly surprised. You can also safely strip out ACPI support in your domUs, as this isn't (apparently) used in a PV guest either.
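As a rough guide, the domU-side counterparts of the backend options shown earlier look like the following .config fragment (option names are from the 3.2 kernel tree; this is a sketch, not a complete domU config):

```
# domU kernel .config fragment: frontend drivers in place of real
# hardware drivers
CONFIG_XEN=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_HVC_XEN=y                 # the hvc0 console the inittab line uses
# CONFIG_XEN_FBDEV_FRONTEND is not set   (no graphics on a headless guest)
```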
I need to provide the specific kernel config (make menuconfig) to illustrate the specific kernel options that need to be selected (and that can be safely unselected)... Stay tuned...
In the interim, the Xen kernel config wiki from Section 4, “More Information” above is pretty good with respect to the domU configuration, and most of the options (with respect to the domU) were pretty easy to find based on that documentation. I had the most difficulty with the dom0 configuration: trying to find all the kernel config options called out in the documentation, and experimenting to be sure I only had to enable the "backend" components of the drivers in the dom0 configuration while ensuring that the domUs had the corresponding "frontend" drivers enabled.
I created a specific directory to hold my Xen configuration files (they are also linked into the automatic startup directory in /etc once they are ready for prime time and I want them auto-started at boot) and the kernel files. After much deliberation and gnashing of teeth, I decided to keep them in the "/home/xen" directory with two subdirectories: one for kernels, named the same, and the other named "config". This works for me, but you are welcome to do this differently if you wish.
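That layout, as commands. The /etc/xen/auto path is the directory the xendomains init script scans for guests to auto-start on my system; confirm the equivalent on yours:

```shell
# Directories for domU kernels and guest config files
mkdir -p /home/xen/kernels /home/xen/config

# Link a guest config into the autostart directory once it is ready
# for prime time (scanned by the xendomains init script at boot)
ln -s /home/xen/config/base /etc/xen/auto/base
```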
1|# Config file for base or template pv VM instance
2|#kernel = "/home/xen/kernels/kernel-3.2.12-gentoo-domU.new"
3|kernel = "/home/xen/kernels/kernel"
4|memory = 2048
5|name = "base"
6|vcpus=1
7|disk = [ 'phy:/dev/sdb10,sdb1,w', 'phy:/dev/sdb11,sdb2,w' ]
8|root = "/dev/xvdb2"
9|vif = [ 'ip=10.0.1.99' ]
10|extra = "xencons=tty"
11|sdl=0
12|opengl=0
13|vnc=0
14|serial='pty'
15|tsc_mode=0
16|localtime=1
This is a pretty typical config, and there are good sources and examples of other ways to configure the disks (using LVM and files rather than physical partitions), so you will need to adjust these to your configuration and how you have the disks set up. I passed two physical partitions to the guest domU, one for the root file system and the other for swap.
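For comparison, equivalent disk stanzas for the other common backing stores look roughly like this (the file path, volume group, and guest device names are illustrative, not from my setup):

```
# file-backed disk image via the loopback mechanism
disk = [ 'file:/home/xen/disks/base.img,xvda1,w' ]

# LVM logical volume as the guest disk
disk = [ 'phy:/dev/vg0/base-root,xvda1,w' ]
```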
Also important to note, although I am still not clear on the details myself: I had trouble connecting to the domU consoles until I made the change to the /etc/inittab file in the domU as specified in Section 6.6, “Install domU File System (/root) and Configure”, and then also added line 14 (from the program listing above) to the domU config file. You should also note that I have specifically disabled "sdl", "opengl", and "vnc" in the domUs. I don't know if this is strictly required, but as neither the dom0 nor the domU has any graphics support, I doubt that even if I enabled them I would be able to connect any way other than through the Xen console or through ssh.
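Starting a guest and attaching to its console looks like this with the classic xm toolstack (Xen 4.1 also ships the newer xl toolstack, which accepts the same verbs; the config path is the one from my layout above):

```shell
# Create the guest and attach to its console in one step
xm create /home/xen/config/base -c

# Or attach to an already-running guest; detach with Ctrl-]
xm console base

# List running domains to confirm the guest came up
xm list
```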
I found the method of getting the various Xen tools up and running different from most of the documentation on the web. After a little research, I think this is because of the changes between Xen 3.x and 4.x. As such, you might find that there is no longer a "xend" service available to start. With Gentoo, you can change this through USE flags (in make.conf), but I was able to get it working without doing so, and since the default is not to create a "xend" service (in the startup directories), I will eventually provide documentation on how to do that here.
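As a starting point until that documentation exists, the daemons the 4.x toolstack depends on can be started through init scripts along these lines. The script names are an assumption based on what Gentoo's xen-tools package installs; check /etc/init.d on your own system:

```shell
# Start the toolstack daemons (names assumed from Gentoo xen-tools)
rc-service xenstored start
rc-service xenconsoled start

# Have them come up at boot, along with auto-started guests
rc-update add xenstored default
rc-update add xenconsoled default
rc-update add xendomains default
```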
Don't forget to document how you got the console to work...
In order to accomplish the installation and configuration, I was not able to find a complete reference for the given software versions of the system I wished to use; however, there were a number of sites that I used to piece together enough information to get the system I wanted configured and working. These sites should be acknowledged, and the reader or administrator may find additional information on them.