A long, long time ago, I virtualized all my home infrastructure onto an ESXi 4.0 server. It ran perfectly fine, minus one hard drive failure, for quite a few years. Lately, though, I had been wanting to upgrade it because it wasn’t terribly fast and I had run out of resources for new VMs. It was running on a dual-CPU (single-core) machine with a 160GB HDD and 4GB of RAM, and I was using it all up. No more RAM for new stuff.
I decided that I would upgrade the matching spare server I had and try out KVM, because I had used it a bit for Red Hat training and it worked so well. Of course, Fessenden’s law, as opposed to Murphy’s law, states simply that “Something will go wrong.” And it did. Over and over again.
First off, let me say that on an enterprise-class server system, if it says it needs registered ECC RAM, it is NOT kidding. I must have swapped RAM around in that server 50 times before I noticed two sticks of unregistered RAM in there. Once I got past that, I had 8GB of RAM and a new 250GB HDD and I was ready to rock! Or so I thought.
I decided to use CentOS 6 as my virtualization host OS, and that installed without a hitch, but I soon discovered that my CPU doesn’t support hardware virtualization. Ugh. So I switched gears and went with VirtualBox instead so that I could keep using my current hardware. I have often used VirtualBox on other machines and it is a fantastic platform. I set about getting things running.
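For what it’s worth, there is a standard way to check a CPU for hardware virtualization support on Linux before committing to KVM (not something I thought to run at the time):

```shell
# Count the Intel (vmx) or AMD (svm) virtualization flags in /proc/cpuinfo.
# A count of 0 means KVM's hardware acceleration won't work on this CPU.
grep -c -E 'vmx|svm' /proc/cpuinfo
```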
When I installed the base OS, I did a minimal install. No GUI, etc. There is no sense in putting stuff on a server that you don’t need, right? Well, the very first thing I found was that I could not use the VirtualBox GUI controls because I did not have any X installed. To rectify that:
yum -y install xorg-x11-xauth dejavu-lgc-sans-fonts
You need xauth to be able to forward your X session over SSH, and the fonts to be able to actually see words in the app.
Next I copied all my vmdk files to the new server. It takes a LONG time for old servers to move around 100GB. Once they were there, however, I discovered that VirtualBox cannot read native vmdk files. Ugh again.
yum -y install qemu-kvm
And then I could convert the vmdks to raw images, and then again to native VDI files for VirtualBox.
qemu-img convert -O raw machine-flat.vmdk machine.bin
vboxmanage convertfromraw --format VDI machine.bin machine.vdi
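With more than a couple of disks, it is easier to loop the two-step conversion. A sketch, assuming the usual ESXi naming convention of machine-flat.vmdk for the flat data files:

```shell
# Convert every flat vmdk in the current directory to VDI,
# going through a raw intermediate image.
for vmdk in *-flat.vmdk; do
    base=${vmdk%-flat.vmdk}                # machine-flat.vmdk -> machine
    qemu-img convert -O raw "$vmdk" "$base.bin"
    vboxmanage convertfromraw --format VDI "$base.bin" "$base.vdi"
    rm -f "$base.bin"                      # the raw image is full-size; reclaim the space
done
```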
I put all my machines together and noticed that VirtualBox was complaining about the UUIDs on some of the disk images. To fix that:
vboxmanage internalcommands sethduuid machine.vdi
The first machine I started up was a CentOS 6 machine, and it fired right up; however, udev immediately reassigned my Ethernet device to eth1. To get it back where it was supposed to be, I had to go into /etc/udev/rules.d/70-persistent-net.rules, delete the Ethernet rules in there, and reboot.
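Deleting those rules doesn’t need an editor; emptying the file works because udev regenerates it on the next boot, and with the old MAC addresses gone the new virtual NIC lands back on eth0. A minimal sketch, run inside the guest:

```shell
# Wipe the cached MAC-to-name mappings; udev rebuilds this file at boot,
# so the virtual NIC gets eth0 again instead of eth1.
: > /etc/udev/rules.d/70-persistent-net.rules
reboot
```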
Along about this time my server powered off. No idea why. It powered itself back on again about 30 seconds later. I checked everything on the server and it looked fine. Curious, but I kept on going.
Next I tried to start up my remaining CentOS 5 VMs. These were problematic. The very first thing I noticed was that they were barking because I had never uninstalled the VMware drivers. I fired them back up on the original server and ran the vmware-uninstall.pl program. Then I turned them back off and spent hours re-copying them over and converting the vmdk files to VDI again.
Starting them back up, I found that, again, they would not run. This time I received an error that no LVM partitions could be located. This, it turns out, is because the initrd images did not have the appropriate drivers in them. Fixing this was fun. First off, you need to add a CD-ROM drive to the VM and point it at a CentOS rescue CD/DVD. Boot it up in rescue mode, chroot into /mnt/sysimage, and then fix the /etc/modprobe.conf file:
alias scsi_hostadapter mptbase
#alias scsi_hostadapter1 mptspi
#alias scsi_hostadapter2 ata_piix
alias scsi_hostadapter1 mptscsih
alias scsi_hostadapter2 mptscsih
The commented-out entries are the originals that I had to change. Then I needed to rebuild all of the initrd images:
for img in /boot/initrd-*.img; do ver=${img##*/initrd-}; ver=${ver%.img}; mkinitrd -v -f $img $ver; done
After that, the machines came right up! Of course, the host powered right off. Several times over the next day. Grrr.
I figured that there was a hardware issue with the host somewhere and resolved to buy myself a new server. I picked up an open-box refurb from Microcenter that had 8GB of RAM, a 750GB HDD, and a nice quad-core CPU that supported virtualization. Woohoo! I could now switch to KVM!
I set up the new machine, installed KVM, and started copying vmdk files over again and, bingo, kernel panic. I rebooted, and the machine would not even get past the BIOS. This went on for a couple of days until I took the machine back to Microcenter. I picked up a different machine, a better quad-core with 12GB of RAM and a 1TB HDD, and set about getting it running.
This time, success! I set up CentOS 6 and KVM, added the bridged networking, and copied over the vmdk files. KVM will read vmdk files, but I decided to convert to qcow2, the preferred native format for qemu, anyhow. That is fairly simple to do:
qemu-img convert -O qcow2 machine-flat.vmdk machine.qcow2
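As with the VirtualBox conversion earlier, this loops nicely over a whole directory of disks (again assuming the machine-flat.vmdk naming):

```shell
# Convert every flat vmdk in the current directory straight to qcow2 for KVM.
for vmdk in *-flat.vmdk; do
    qemu-img convert -O qcow2 "$vmdk" "${vmdk%-flat.vmdk}.qcow2"
done
```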
I put all the machines back together again and started them up. I still had to do the initrd fixes on the CentOS 5 VMs to get them going, but since then everything has been running fantastically!
Somewhere along the line here I figured out that my issue with my secondary server powering off was a bad port on my UPS.
KVM is really easy to run and manage for a Linux geek, as opposed to VMware 4. The native GUI tools do the job just fine, although they are not quite as intuitive to me as VMware’s VIC. I am quite happy, though, with the switch. I now have more than twice the resources of my initial virtualization environment. Now I am good to go for several more test VMs, and the new machine is nice and quiet and doesn’t have to hide under my couch.