Linux on VMware – why use partitioning?

This is an interesting question…

I don’t think there’s a definitive answer, but I can give some historical context on how best practices around this topic have changed over time.

I’ve had to support thousands of Linux VMs deployed in various forms across VMware environments since 2007. My approach to deployment has evolved, and I’ve had the unique (sometimes unfortunate) experience of inheriting and refactoring systems built by other engineers.

The old days…

Back in the day (2007), my early VMware systems were partitioned just like my bare-metal systems. On the VMware side, the VM’s data lived in split 2GB thick files, and I didn’t even think about using multiple VMDKs, because I was just happy that virtualization worked at all!

Virtual Infrastructure…

By ESX 3.5 and the early ESX/ESXi 4.x releases (2009-2011), I was running Linux, partitioned as normal, atop monolithic thick-provisioned VMDK files. Having to preallocate storage forced me to think about Linux design much as I would on real hardware. I was creating 36GB, 72GB, or 146GB VMDKs for the operating system, partitioning the usual /, /boot, /usr, /var, and /tmp, then adding another VMDK for the “data” or “growth” partition (whether that was /home, /opt or something application-specific). The sweet spot in physical hard disk sizes during this era was 146GB, and since preallocation was a requirement (unless using NFS), I needed to be conservative with space.

The advent of thin provisioning

VMware added better thin-provisioning features in the later ESXi 4.x releases, and this changed how I installed new systems. With the full feature set arriving in 5.0/5.1, a new kind of flexibility allowed more creative designs. Mind you, this kept pace with the growing capabilities of virtual machines themselves, in terms of how many vCPUs and how much RAM could be committed to an individual VM. More types of servers and applications could be virtualized than in the past, right as computing environments were starting to go completely virtual.

LVM is awful…

By the time full hot-add functionality at the VM level was in place and common (2011-2012), I was working with a firm that strove to maintain uptime for their clients’ VMs at any cost (stupid). That meant online VMware CPU/RAM increases and risky LVM disk resizing on existing VMDKs. Most Linux systems in this environment were single-VMDK setups with ext3 partitions on top of LVM. This was terrible because the LVM layer added complexity and unnecessary risk to operations. Running out of space in /usr, for instance, could trigger a chain of bad decisions that eventually meant restoring a system from backups… This was partially process- and culture-related, but still…

Partition snobbery…

I took the opportunity to try to change this. I’m a bit of a partition snob in Linux and feel that filesystems should be separated for monitoring and operational needs. I also dislike LVM, especially with VMware, where you already have the ability to do what you’re asking about. So I expanded the use of separate VMDK files for the partitions that could potentially grow: /opt, /var, or /home could each get its own virtual disk file if needed, and those filesystems would sit directly on the raw disks, with no partition table. This was often the easiest way to expand a particular undersized filesystem on the fly.
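
To make that last point concrete, here is a minimal sketch of the kind of on-the-fly expansion this layout allows. The device name (/dev/sdb), the filesystem type, and the sizes are assumptions for illustration, not a specific system of mine.

    # Grow the dedicated VMDK in vSphere first (e.g., 40 GB -> 60 GB), then in the guest:

    # Ask the kernel to re-read the device's size (assumes the mount lives on /dev/sdb)
    echo 1 > /sys/class/block/sdb/device/rescan

    # Because the ext3/ext4 filesystem sits directly on the raw disk -- no partition
    # table, no LVM -- it can be grown online in a single step:
    resize2fs /dev/sdb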

Obamacare…

With the onboarding of a very high-profile client, I was tasked with designing the Linux VM reference template that would be used to build their extremely visible application environment. The security requirements of the application called for a unique set of mounts, so I worked with the developers to cram the non-growth partitions onto one VMDK, then add separate VMDKs for each mount that had growth potential or specific requirements (encryption, auditing, etc.). In the end, these VMs comprised 5 or more VMDKs, but that provided the best flexibility for future resizing and protection of the data.
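
As a rough illustration of that layout, here is a hypothetical sketch; the device names, mount points, and the use of LUKS for the encrypted mount are my assumptions for the example, not the client’s actual configuration.

    # sda: OS VMDK carrying the non-growth partitions (/, /boot, /usr, /var, /tmp)

    # sdb: dedicated VMDK for a mount with growth potential
    mkfs.ext4 /dev/sdb
    mount /dev/sdb /opt/app

    # sdc: dedicated VMDK for a mount that must be encrypted at rest
    cryptsetup luksFormat /dev/sdc
    cryptsetup luksOpen /dev/sdc appdata
    mkfs.ext4 /dev/mapper/appdata
    mount /dev/mapper/appdata /srv/appdata

    # sdd, sde, ...: additional VMDKs for auditing or other special-purpose mounts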

What I do today…

Today, my general design for Linux with traditional filesystems is the OS on one thin-provisioned VMDK (partitioned), plus discrete VMDKs for everything else, hot-added as necessary. For advanced filesystems like ZFS, it’s one VMDK for the OS and another VMDK that serves as a ZFS zpool, which can be resized, carved into additional ZFS filesystems, etc.
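
For the ZFS case, here is a minimal sketch of what that second VMDK looks like in practice; the pool name, device, and dataset names are assumptions for illustration.

    # The second VMDK (assumed here to be /dev/sdb) becomes the zpool
    zpool create data /dev/sdb
    zpool set autoexpand=on data        # allow the pool to use capacity added later

    # Carve the pool into additional ZFS filesystems as needed
    zfs create -o mountpoint=/opt/app data/app
    zfs create -o mountpoint=/export/home data/home

    # After growing the VMDK in vSphere, pick up the new space online
    echo 1 > /sys/class/block/sdb/device/rescan
    zpool online -e data /dev/sdb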
