As a provider that leverages Microsoft's Hyper-V technology to deliver cost-effective, high-performance cloud resources to our customers, I'm excited to see where Microsoft is taking vNext. Currently, virtual machines (VMs) in the cloud behave much like their physical counterparts: if you want to upgrade processors or add memory to a server, you power it down, swap in the new resources, then power it back on.
Hyper-V currently supports dynamic memory allocation, in which you specify startup, minimum, and maximum memory values. The VM boots with the startup value, then automatically adjusts the allocation within those constraints, all while maintaining a buffer of free memory that is also user-definable as a percentage. Dynamic memory can certainly help organizations oversubscribe their current infrastructure, packing more VMs per physical server, so long as they monitor the underlying hosts' memory usage or leverage some type of host optimization system (e.g., Virtual Machine Manager) to shuffle VMs around when memory becomes a constraint on physical hosts.
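To make the buffer arithmetic concrete, here's a toy sketch of how a target allocation might be derived from the guest's current demand plus a percentage buffer, clamped between the minimum and maximum values. This is my own illustration of the idea, not Hyper-V's actual balancing algorithm, and the function name is hypothetical:

```python
def dynamic_memory_target(demand_mb, buffer_pct, min_mb, max_mb):
    """Toy model: current demand plus a free-memory buffer,
    clamped to the user-defined [minimum, maximum] range."""
    target = demand_mb * (1 + buffer_pct / 100)
    return max(min_mb, min(max_mb, int(target)))

# A VM demanding 1500 MB with a 20% buffer, constrained to 512-4096 MB:
print(dynamic_memory_target(1500, 20, 512, 4096))  # 1800
# Demand of 4000 MB would want 4800 MB, but the maximum caps it:
print(dynamic_memory_target(4000, 20, 512, 4096))  # 4096
```

The clamp is why oversubscription still needs host-level monitoring: every VM can legitimately drift toward its maximum at the same time, and the sum of those maximums can exceed physical RAM.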
For a few reasons, we don't leverage dynamic memory in our private cloud service offering. First, we found that dynamic memory sometimes introduces erratic performance in the virtual machine, so we've always reserved this feature primarily for our lab and internal environments. When we do leverage dynamic memory, it's mostly for services such as domain controllers, where it's handy to have a few gigabytes of memory available when you need it, but when no one is logged in, the server may only need a quarter of that to do its job successfully. We never give a demanding server such as SQL dynamic memory, though we wish we could. Our other dislike with the current iteration of dynamic memory has to do with the way Microsoft's memory ballooning is represented in Task Manager on Windows and top on Linux. It inaccurately portrays the virtual machine as consuming all available memory, which has a tendency to confuse our users and makes initial troubleshooting more difficult.
In the next iteration of Windows Server/Hyper-V, hot memory resizing will be introduced. This will allow static amounts of memory to be added or removed on the fly without virtual machine downtime. The stipulations at this time are that the guest OS must be a vNext OS and that you cannot remove memory that's in use. The latter doesn't concern me much, as I wouldn't want to force memory swapping on a VM by taking away memory it's using, but hopefully Microsoft will in time extend hot memory resizing to at least Windows Server 2012 R2 guests, as it will save a lot of downtime for applications as they grow. With a little luck, this might address some of Hyper-V's dynamic memory problems at the same time, yielding even more avenues for delivering cost-effective, highly reliable resources to consumers.
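The "can't remove memory that's in use" stipulation is easy to picture as a guard on the resize request: growing is always honored, while shrinking stops at whatever the guest is actually consuming. A hypothetical sketch (the function and names are mine for illustration, not a Hyper-V API):

```python
def hot_resize(assigned_mb, in_use_mb, requested_mb):
    """Toy guard: grow freely, but never shrink below in-use memory."""
    if requested_mb >= assigned_mb:
        return requested_mb               # hot-add: always allowed
    return max(requested_mb, in_use_mb)   # hot-remove: stop at in-use memory

# 8192 MB assigned, 3000 MB in use; asking for 2048 MB only frees down to 3000:
print(hot_resize(8192, 3000, 2048))   # 3000
# Growing to 16384 MB goes through unconditionally:
print(hot_resize(8192, 3000, 16384))  # 16384
```

That floor is exactly the behavior I'd want: the host reclaims idle memory without ever forcing the guest to start swapping.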