HP DL380p Gen8 (p420i controller) I/O oddity on XFS partitions

XFS and EL6 have fallen into an ugly state… I’ve abandoned XFS on EL6 systems for the time being due to several upstream features/changes slipping into the Red Hat kernel…

This one was a surprise and caused some panic: Why are my XFS filesystems suddenly consuming more space and full of sparse files?

Since November 2012, the XFS shipped in kernels newer than 2.6.32-279.11.1.el6 has had an annoying load and performance issue stemming from Red Hat Bugzilla 860787. Since the change, I've seen unpredictable performance and higher-than-average run queues.
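You can watch this for yourself on an affected box; the run queue trend shows up clearly with:

sar -q 1 10

(or just keep an eye on the r column in vmstat 1).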

For new systems, I’m using ZFS or just ext4. For older systems, I’m freezing them at 2.6.32-279.11.1.el6.

Try rolling back to that version with:

yum install kernel-2.6.32-279.11.1.el6.x86_64
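If you want to keep yum from pulling a newer kernel back in afterwards (my own habit, not a requirement), exclude kernel packages from updates, or use the yum-plugin-versionlock package to pin that specific build:

echo "exclude=kernel*" >> /etc/yum.conf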

In addition to the above, given the type of RAID controller you're using, the usual optimizations are in order:

Mount your XFS filesystems with noatime. You should also leverage the Tuned framework with:

tuned-adm profile enterprise-storage

to set read-ahead, nobarrier and the I/O elevator to a good baseline.
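If you want to double-check that the profile applied, tuned-adm can report it, and blockdev will show the resulting read-ahead (the device name is just an example):

tuned-adm active
blockdev --getra /dev/sda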


Edit:

There are plenty of recommendations surrounding XFS filesystem optimization. I've used the filesystem exclusively for the past decade and have occasionally had to adjust parameters as the underlying operating system changed. I have not experienced a dramatic performance decrease like yours, but I also do not use LVM.

I think it’s unreasonable to expect EL5 to act the same way as EL6, given the different kernel generation, compiled-in defaults, schedulers, packages, etc.

What would I do at this point?

  • I would examine the mkfs.xfs parameters and how you're building the systems. Are you creating the XFS partitions during installation or after the fact? I do the XFS filesystem creation following the main OS installation because I have more flexibility with the given parameters.

  • My mkfs.xfs creation parameters are simple: mkfs.xfs -f -d agcount=32 -l size=128m,version=2 /dev/sdb1 for instance.

  • My mount options are: noatime,logbufs=8,logbsize=256k,nobarrier (see the example fstab line after this list). I would allow the XFS dynamic preallocation to run natively rather than constraining it the way you have here. My performance improved with it.

  • So I don’t use LVM, especially on top of hardware RAID, and especially on HP Smart Array controllers, where there are some LVM-like functions native to the device. However, when using LVM, you don’t have access to fdisk for raw partition creation. One thing that changed from EL5 to EL6 is the partition alignment in the installer and fdisk no longer snapping the starting sector to a cylinder boundary (see the parted check after this list).

  • Make sure you’re running your HP Smart Array controllers and drives at the current firmware revision level. At that point, it makes sense to update the entire server to the current HP Service Pack for ProLiant firmware revision. This is a bootable DVD that will upgrade all detected components in the system.

  • I’d check the RAID controller settings. Pastebin the output of hpacucli ctrl all show config detail. Here’s mine. You want a cache ratio biased towards writes versus reads; 75:25 is the norm (see the hpacucli example after this list). The default strip size of 256K should be fine for this application.

  • I’d potentially try this without LVM.

  • What are your sysctl.conf parameters?
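To tie the mount-option point above together, here’s what an /etc/fstab entry using those options might look like. The device and mount point are placeholders rather than anything from your setup:

/dev/sdb1   /data   xfs   noatime,logbufs=8,logbsize=256k,nobarrier   0 0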
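For the partition alignment question, parted will show you where the partitions actually start; again, the device name is just an example:

parted /dev/sdb unit s print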
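And if the cache ratio does need adjusting, hpacucli can change it in place. The slot number below is an assumption, so match it to your controller:

hpacucli ctrl slot=0 modify cacheratio=25/75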
