A Virtual Story – Virtualization Option Exploration

Published by Torry Crass on

In this post, I'm not going to go into the pros and cons of the various virtualization options currently out there, as there are already plenty of side-by-side comparisons.  If you're looking for a technical comparison, this probably isn't the best resource; instead, I'm going to give an account of my experience going through the process and where I've ended up so far, without an in-depth technical rundown.

I have a virtual server farm, mostly for development and testing purposes, with around 10 virtual machines in it at any given time.  That environment is currently running VMware ESXi 4.1 and is badly in need of an upgrade, both software and hardware.  I set out to find new hardware as well as a new virtualization platform, aiming to transition to something more open source but still as enterprise-capable as VMware's products.

The Hardware
I settled on a low-cost Antec case, an Intel motherboard, a 3.3GHz quad-core i5, 32GB of Geil DDR3 RAM, a HighPoint RAID controller with 3 x 2TB Samsung drives, and a RaidMax power supply.  All in all, the cost was around $500, which is certainly reasonable by my standards.

The Software
After a lot of reading and deliberation between KVM, Xen, and simply staying with VMware, I decided to give KVM a try.  Based on reviews, its stability, scalability, performance, support, and platform life expectancy were all on an upward trend.

And We're Off… Maybe Not
With the hardware and software decisions made, I got things underway.  I fussed around with the hardware setup for quite a while, initially trying to use the onboard Intel motherboard RAID, otherwise known as fakeraid.  At one point I even said the hell with KVM and attempted to fall back to VMware ESXi 5.0 just to get the system loaded and running.  That failed as well.  There's a very good reason it's called fakeraid: it is a total PoS, and if you have any doubts, start digging around the net and you'll see problem after problem popping up about it.  Avoid it at all costs.  Because of the fakeraid trouble, I eventually had to add 2 x 40GB drives in a mirrored setup to hold the OS itself, and install the HighPoint controller to support the 3 x 2TB drives in a RAID 5 configuration.

With the new OS drives in place, I installed the latest Debian stable (Squeeze) release with a software mirror across the two 40GB drives.  This actually seems to have worked quite well.  Once Debian was up and functioning, I installed KVM and its tools using a combination of HowTo articles, including:

http://wiki.debian.org/KVM
http://www.palegray.net/tutorials/debian-6-kvm-howto/
http://wiki.kartbuilding.net/index.php/KVM_Setup_on_Debian_Squeeze/
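For reference, the core of that install boils down to a handful of packages.  This is a sketch from memory rather than my exact command history; the package names are the ones the Squeeze repositories used, and "youruser" is a placeholder:

```shell
# Install KVM, libvirt, and bridging tools on Debian Squeeze (run as root).
# Package names assumed from the squeeze-era repositories.
apt-get update
apt-get install qemu-kvm libvirt-bin virtinst bridge-utils

# Let a non-root account manage VMs through libvirt
# ("youruser" is a placeholder for your own login)
adduser youruser libvirt
```

After that, `virsh list --all` as that user is a quick sanity check that libvirt is up and talking to KVM.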

One of the hardest things I ran into was getting the networking set up correctly with bridging.  Once set up, it's working decently enough, though I still have some work to do on it.  A couple of points on this:

  1. If you want to use a "LAN" IP from your normal network (the one your Debian install uses), you need to tell the VM you're setting up to use the br0 interface and simply set the IP address to a LAN IP; it ought to just work.  This took me quite some time to figure out.
  2. In conjunction with the above, you should not really need aliasing on your network adapters and IPs.  I thought I would, but so far it's turned out to be neither what I needed nor what I wanted in the first place.
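For what it's worth, the bridged setup I landed on looks roughly like the following /etc/network/interfaces fragment.  This is illustrative only; the interface names and addresses are placeholders, not my actual config, and it assumes the bridge-utils package is installed:

```
# /etc/network/interfaces -- br0 bridged over eth0 (illustrative values)
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
    address 192.168.1.10        # LAN IP of the Debian host itself
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0           # enslave the physical NIC to the bridge
    bridge_stp off
    bridge_fd 0
```

Guests are then attached to br0 and given their own LAN IPs directly; no interface aliasing required, which matches point 2 above.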

Once this was up and running, I installed virt-manager on an Ubuntu 12.04 laptop I had lying around and connected to the system.  This was good exposure to get an idea of what it was all about.  While virt-manager functioned reasonably well, I didn't feel it was something I could use for easy management of the environment going forward.
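Connecting virt-manager from the laptop is just a matter of pointing it at a libvirt URI over SSH.  The hostname and user below are placeholders:

```shell
# From the Ubuntu laptop: open virt-manager against the remote KVM host over SSH
virt-manager --connect qemu+ssh://user@kvm-host/system

# Or poke at the same host from the command line with virsh
virsh --connect qemu+ssh://user@kvm-host/system list --all
```

The qemu+ssh transport means no extra daemon configuration on the host beyond a working SSH login for a user in the libvirt group.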

The Search for Management
I needed a way to manage the environment that is feature-rich, stable, and scalable, and that allows ease of use by less technical people.  I was expecting this to be a fairly daunting task.  I was not disappointed.  KVM has a large list of usable management tools here:

http://www.linux-kvm.org/page/Management_Tools

oVirt Attempt – Fail Boat
Going through the list and after reading various reviews, I decided I'd like to give oVirt a try.  I can start by saying this was a mistake, and it didn't go very far.  The key failure was twofold.  First, it wasn't explained very well whether you need the oVirt Node or the oVirt Engine, or where each needs to be installed to make it all function.  The second, even bigger problem was that oVirt is meant to run on Fedora or CentOS, not Debian, which was clearly not going to play well with my current direction.  I did venture down the road of mistakenly trying to run an oVirt Node in a VM, and I did look up build information on getting oVirt running on Debian.  While that build information is ridiculously scarce and not very clear, it is out there, and I'm sure some people have gotten it working.

I decided to abandon the oVirt endeavour completely (for now) because I was looking to avoid Fedora and CentOS, which I equate to Windows Millennium or Vista in bulkiness and overall performance.  Nor was I interested in re-installing my build from the ground up on one of those as a core (which looks to be the best, and frankly a very solid, way to get oVirt up and running; it's just not what I wanted).  Building the engine myself very quickly came off the table: I wanted a solution that was stable and production-capable, and the lack of documentation on oVirt under Debian did not give me warm fuzzies.  I wasn't about to become a beta tester for this setup.

Other Options
So, after the massive time sink that oVirt turned out to be, I began the search again.  Some reviews suggested Proxmox, but after looking into it, I wasn't getting warm fuzzies about it for a production environment either.  The more I looked, the less confident I became about finding a good enterprise/production-grade solution.  A few other options did exist, such as CloudStack and ConVirt, but they were commercial/pay-to-play for some features, and I was trying to stay open source.  The other option I considered exploring (but didn't) is OpenNebula; it has good community backing and looks like it might provide a quality feature set to work with.  In the end, after more research and reviews, I decided on Archipel, especially because it looked like it would work fine on Debian.  I thought it was a very interesting solution, as it uses a Jabber (XMPP) server to handle the VM infrastructure.

Archipel – Workable
I gave it a whirl, installing ejabberd as specified, then the Archipel agent on the system and the Archipel client on an Nginx web server.  While the process was certainly time consuming (I didn't expect any of this to be quick), I did have it up and running by evening's end.  The documentation was good, the wikis were great, and some parts of the install were amazingly simple.  I made a few mistakes that had to be straightened out with the help of kind individuals in their IRC channel.  I was able to get everything up and running, and even installed a Windows XP node and imported the VMs I'd created in virt-manager.
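As one concrete example from that setup, the Archipel agent authenticates against an admin account on the ejabberd server, which is a one-liner with ejabberdctl.  The virtual host and password here are made-up placeholders, and I'm reconstructing this from memory rather than quoting their docs:

```shell
# Register the XMPP admin account the Archipel agent will use
# (ejabberdctl register <user> <host> <password>; host/password are placeholders)
ejabberdctl register admin xmpp.example.com 'choose-a-strong-password'
```

The rest of the agent config mostly amounts to pointing it at that account and host in its config file.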

So, I thought I'd provide a few notes on the pros, cons, and obstacles I encountered:

  • ejabberd setup isn't too bad, but you really do need to follow THEIR installation instructions for it; if you start going through various HowTos around the net, you could run into trouble.  I did have trouble getting the extra modules up and running; I thought I did that part right, but they're still not functional.
  • Setup of the Archipel agent is actually very simple once you have the ejabberd server up and running.  Install it, edit a couple of config file entries, and it should be peachy.
  • The Archipel client is even simpler; all you need is a basic web server, and Nginx worked excellently for this.  You toss the web files into a directory and have Nginx serve it.  Hit it in your browser and you're good to go.
  • Overall, the pros are:
    • It works with Debian
    • Has good install documentation
    • A helpful community
    • The management interface allows real-time chat and team communication through the use of Jabber/XMPP
    • Is a damn cool, unique way to manage VMs
  • Overall, the cons are:
    • The web interface is quite clunky.  You have to be patient; refreshes can take a few seconds to populate the data.
    • Floppy support is non-existent.  Don't try to use the web interface to load a driver disk during OS install or to connect a floppy; all of this requires editing the VM's XML file, because the interface attempts to use the ide bus instead of the fdc bus for floppy access.
    • Some of the initial setup is a bit confusing.  You need to add your "server" as a contact to gain access to it.  You also need to make sure that when you're adding things you use name@server; otherwise it won't know where to pull things from.
    • When it imports VMs it doesn't move them into its structure, it leaves the file structure how it was.  While this is fine for integrity with other applications, it creates quite the file system mess.
    • The VNC console can be a little flaky, sometimes requiring you to click off the VM and back onto it before launching the screen.
    • Permissions management exists and seems straightforward enough, but it could use more documentation so it's easier to figure out how to add users.  I've added some and they haven't even shown up yet…
    • While it looks mostly stable, I'm not sure how much backing it has compared to other projects like oVirt and OpenNebula.  They claim to work with the oVirt team and that's great, hopefully it continues long term.
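As a concrete example of the floppy workaround mentioned in the cons above, the fix is to edit the domain XML by hand (virsh edit <vm-name>) and attach the floppy image on the fdc bus rather than ide.  Something along these lines worked for me; the image path is a placeholder:

```xml
<!-- Attach a driver floppy image on the fdc bus (file path is a placeholder) -->
<disk type='file' device='floppy'>
  <source file='/var/lib/libvirt/images/drivers.img'/>
  <target dev='fda' bus='fdc'/>
  <readonly/>
</disk>
```

This goes inside the <devices> section of the domain definition; libvirt picks it up on the VM's next start.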

Summary
While this has been a long project (several months) and it's certainly not perfect, after trial, error, troubleshooting, and a crazy amount of research, I do have things up and running to the point where I can actually use them.  This experience has shown me that while these systems may not be ready for an enterprise production environment right now, they have a lot of potential; as long as things keep moving forward, it won't be long before they can stand toe-to-toe with the old standard of VMware.

Epilogue 11/24/12
I wanted to point out that I believe KVM itself is very close to enterprise-ready and, with the right management interfaces, should prove quite powerful in the future.  It's the management interfaces that need to catch up to the hypervisor.  There may be a few out there, like CloudStack, oVirt, ConVirt, and such, that are commercial solutions providing the support and functionality required for enterprise operations; unfortunately, these were not within the true scope of my evaluation.  Additionally, I was able to overcome most of my immediate challenges with Archipel and, as such, intend to continue its use moving forward.