Virtualization is the greatest thing since sliced bread, except when it’s a big pain in the neck. We’ve got a VM host that our IT administrator set up to support development and testing machines. It’s a real nice box — a Dell server with a pair of Xeons and 32GB of RAM.
Unfortunately, we spent money on the metal, but skimped on the software. Our guy used Microsoft’s Virtual Server on this box, largely because it’s free and “it’s from Microsoft, so it must be good.”
Yes, in general, it’s good. Performance has been fine, and management is tolerable, though not great. I ran into a brick wall today, though, that really highlights why you need to consider a higher-end product for a large-scale server (our machine currently hosts around 30 VMs).
It turns out that Microsoft’s Virtual Server won’t let you overcommit memory. What’s that mean? It means that if you have 10 VMs, each configured to use a gig of RAM, you’re going to need 10GB of RAM on the host, plus a little for overhead. Even if each of these 10 VMs is only really using half of its allocated RAM, MS Virtual Server still needs to reserve all of it in physical RAM.
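The arithmetic is simple enough to sketch. This is purely illustrative (not Virtual Server’s actual logic, and the 2GB overhead figure is my own placeholder): under a no-overcommit policy, the host must physically supply every VM’s full configured allocation, no matter how much each guest actually uses.

```python
# Illustrative sketch of a no-overcommit policy: the host must cover
# every VM's full configured allocation, regardless of actual usage.

def required_host_ram_gb(vm_allocations_gb, overhead_gb=2):
    """RAM the host must physically have when memory cannot be overcommitted.

    overhead_gb is an assumed placeholder for host/hypervisor overhead.
    """
    return sum(vm_allocations_gb) + overhead_gb

# Ten VMs, each configured with 1GB:
vms = [1] * 10
print(required_host_ram_gb(vms))  # 12 (10GB for the VMs + 2GB assumed overhead)
```

Note that actual guest usage never appears in the calculation — that’s exactly the complaint.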
This, in a word, bites.
VMware’s ESX Server, by contrast, lets you set up a minimum size and a maximum size for each VM. The minimum size is still physically reserved, but the difference between the min size and the max size is allocated dynamically when needed (and available). If you stop to think about it, the other resources on the server are already managed this way — CPUs are shared, disk space is allocated on demand, and so on. Why would you want to be required to supply physical RAM for all of your VMs?
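Here’s a rough sketch of what that min/max model buys you (the function and names are mine, not VMware’s API): only the minimums are physically reserved up front, and everything between min and max competes for whatever headroom the host has left.

```python
# Illustrative sketch of an ESX-style min/max memory model (my own
# names, not VMware's API): only minimums are reserved up front.

def plan(vms, host_ram_gb):
    """vms: list of (min_gb, max_gb) pairs.

    Returns (reserved, flexible, headroom):
      reserved - GB physically reserved (sum of minimums)
      flexible - GB that may be granted on demand (sum of max - min)
      headroom - physical GB left to satisfy on-demand requests
    """
    reserved = sum(mn for mn, mx in vms)
    flexible = sum(mx - mn for mn, mx in vms)
    if reserved > host_ram_gb:
        raise ValueError("minimums alone exceed physical RAM")
    headroom = host_ram_gb - reserved
    return reserved, flexible, headroom

# Ten VMs, each min 512MB / max 1GB, on a 16GB host:
reserved, flexible, headroom = plan([(0.5, 1.0)] * 10, 16)
print(reserved, flexible, headroom)  # 5.0 5.0 11.0
```

With the same ten 1GB guests, the host only has to reserve 5GB instead of 10GB; the rest is handed out only when a guest actually wants it.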
In practice, this limitation is just a nagging annoyance on small-scale servers and workstations. But if you’re planning a server installation of any size, you’d be well advised to weigh the cost of the extra RAM you’ll need against the relatively higher price of a product like ESX Server.