Xen has really taken off since last December, when the leaders of the Xen project formed a corporation to sell and support Xen, immediately securing $6m from venture capitalists Kleiner Perkins Caufield & Byers and Sevin Rosen Funds. The project is headed up by Ian Pratt, a senior faculty member at the University of Cambridge in the UK and the chief technology officer at XenSource, the company created to commercialize Xen.
Pratt told me in December that he had essentially been pushed into starting a company to support Xen: some big financial institutions on Wall Street and in the City (London's version of Wall Street, for the Americans reading this who may not have heard the term) loved what Xen was doing and insisted that he do so.
Seven years ago, Ian Pratt joined the senior faculty at the University of Cambridge, and after two years on the staff he came up with a schematic for a futuristic distributed computing platform for wide area networks called Xenoserver. The idea behind the Xenoserver project is one that now sounds familiar, at least in concept, but sounded pretty sci-fi seven years ago: hundreds of millions of virtual machines running on tens of millions of servers, connected by the Internet, delivering virtualized computing resources on a utility basis, with people charged for the computing they use.
The Xenoserver project consisted of three pieces: the Xen virtual machine monitor and hypervisor abstraction layer, which allows multiple operating systems to logically share the hardware on a single physical server; the Xenoserver Open Platform, for connecting virtual machines to distributed storage and networks; and the Xenoboot remote boot and management system, for controlling servers and their virtual machines over the Internet.
Work on the Xen hypervisor began in 1999 at Cambridge, where Pratt was irreverently called the XenMaster by project staff and students. During that first year, Pratt and his project team identified how to do secure partitioning on 32-bit X86 servers using a hypervisor and worked out a means for shuttling active virtual machine partitions around a network of machines. This is more or less what VMware does with its ESX Server partitioning software and its VMotion add-on to that product.
About 18 months ago, after years of coding the hypervisor in C and the interface in Python, the Xen portion of the Xenoserver project was released as Xen 1.0, which according to Pratt racked up tens of thousands of downloads. This gave the open source developers working on Xen a lot of feedback, which went into Xen 2.0, which started shipping last year. With the 2.0 release, the Xen project added the Live Migration feature for moving running virtual machines between physical machines, and then added some tweaks to make the code more robust.
Xen and VMware’s GSX Server and ESX Server have a major architectural difference. VMware’s hypervisor layer completely abstracts the X86 system, which means any operating system supported on X86 processors can be loaded into a virtual machine partition. This, said Pratt, puts tremendous overhead on the systems. Xen was designed from the get-go to run virtual machines in a lean and mean fashion, which it does by having versions of open source operating systems tweaked to run on the Xen hypervisor.
That is why Xen 2.0 only supports Linux 2.4, Linux 2.6, FreeBSD 4.9 and 5.2, and NetBSD 2.0 at the moment; special tweaks of NetBSD and Plan 9 are in the works, and with Solaris 10 soon to be open source, a port of it should become available as well.
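To give a flavor of what running one of those tweaked guests looks like in practice, here is a minimal sketch of a Xen 2.0 domain configuration file. Xen's domain configs use Python syntax and are handed to the `xm` management tool; the file name, device paths, and domain name below are illustrative, not taken from the article.

```python
# Sketch of a Xen 2.0 guest ("domU") configuration, e.g. /etc/xen/example
# All paths and names here are illustrative assumptions.

kernel = "/boot/vmlinuz-2.6-xenU"  # guest kernel built against the Xen hypervisor interface
memory = 128                       # megabytes of RAM granted to this virtual machine
name   = "example-domU"            # domain name as it will appear in xm listings
disk   = ["phy:hda1,sda1,w"]       # map physical partition hda1 into the guest as sda1, writable
vif    = [""]                      # one virtual network interface with default settings
root   = "/dev/sda1 ro"            # root device as seen from inside the guest
```

A guest defined this way would typically be started with `xm create -c example`, and, with the Live Migration feature described above, moved to another physical host with something like `xm migrate --live example-domU otherhost`. Again, this is a sketch of the era's tooling rather than a verbatim recipe.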
With Xen 1.0, Pratt had access to the Windows XP source code from Microsoft, which allowed the Xen team to put Windows XP inside Xen partitions. With the future Pacifica hardware virtualization features coming in single-core and dual-core Opterons, and with Intel creating a version of the Vanderpool virtualization hardware features it is building for Pentium 4 processors for Xeon and Itanium processors as well (there it is called Silvervale, for some reason), both Xen and VMware partitioning software will get hardware-assisted virtual machine partitioning.
While no one is saying this because they cannot reveal how Pacifica or Vanderpool actually work, these technologies may do most of the X86 abstraction work, and therefore should allow standard, compiled operating system kernels to run inside Xen or VMware partitions. That means Microsoft can’t stop Windows from being supported inside Xen over the long haul.
Thor Lancelot Simon, one of the key developers and administrators at the NetBSD Foundation that controls the development of NetBSD, reminded everyone that NetBSD has been supporting the Xen 1.2 hypervisor and monitor within a variant of the NetBSD kernel (that’s NetBSD/xen instead of NetBSD/i386) since March of last year. Moreover, the foundation’s own servers are all equipped with Xen, which allows programmers to work in isolated partitions with dedicated resources and not stomp all over each other as they are coding and compiling.
“We aren’t naive enough to think that any system has perfect security; but Xen helps us isolate critical systems from each other, and at the same time helps keep our systems physically compact and easy to manage,” he said. “When you combine virtualization with Xen with NetBSD’s small size, code quality, permissive license, and comprehensive set of security features, it’s pretty clear you have a winning combination, which is why we run it on our own systems.”
NetBSD contributor Manuel Bouyer has done a lot of work to integrate the Xen 2.0 hypervisor and monitor into the NetBSD-current branch, and he said he would be making changes to the NetBSD/i386 release that would integrate the /xen kernels into it and allow Xen partitions to run in both privileged and unprivileged modes.
The Xen 3.0 hypervisor and monitor is expected some time in the next few months, with support for 64-bit Xeon and Opteron processors. XenSource’s Pratt told me recently that Xen 4.0 is due in the second half of 2005, with better tools for provisioning and managing partitions. It is unclear how the NetBSD project will absorb these changes, but NetBSD 3.0 is expected around the middle of 2005, and the project says it plans to get one big NetBSD release out the door each year going forward.