Interpreting ethtool Coalesce output

the delay between the tx and rx events and the generation of interrupts for those events. rx-frames[-irq], rx-usecs[-irq], tx-frames[-irq], tx-usecs[-irq]: the frames parameters specify how many packets must be received/transmitted before an interrupt is generated, and the usecs parameters specify how many microseconds to wait after at least one packet has been received/transmitted before generating an interrupt. The [-irq] parameters are … Read more
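
For context, a rough sketch of how these settings typically look and how they are changed; eth0 is a placeholder interface name and the exact parameters available depend on the driver:

# Show the current interrupt coalescing settings for eth0
ethtool -c eth0
# Typical (driver-dependent) output fields:
#   rx-usecs:  50    <- wait up to 50 us after the first received packet
#   rx-frames: 32    <- or until 32 packets have arrived, whichever comes first

# Raise the RX thresholds: fewer interrupts, at the cost of slightly higher latency
sudo ethtool -C eth0 rx-usecs 100 rx-frames 64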

CPU0 is swamped with eth1 interrupts

Look in the /proc/irq/283 directory. There is a smp_affinity_list file which shows which CPUs will receive interrupt 283. For you this file probably contains “0” (and smp_affinity probably contains “1”). You can write the CPU range to the smp_affinity_list file: echo 0-7 | sudo tee /proc/irq/283/smp_affinity_list Or you can write a bitmask, where each … Read more
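
As a sketch of the bitmask form, assuming an 8-CPU machine and IRQ 283 as above:

# smp_affinity takes a hexadecimal bitmask: bit 0 = CPU0, bit 1 = CPU1, ...
# 0xff = 11111111 binary, i.e. CPUs 0-7 (same effect as writing 0-7 to smp_affinity_list)
echo ff | sudo tee /proc/irq/283/smp_affinity

# Both files are views of the same setting
cat /proc/irq/283/smp_affinity        # -> ff
cat /proc/irq/283/smp_affinity_list   # -> 0-7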

Performance Tuning a High-Load Apache Server

I’ll start by admitting that I don’t know much about running stuff in clouds – but based on my experience elsewhere, I’d say that this webserver config reflects a fairly low volume of traffic. That the run queue is so large suggests that there just isn’t enough CPU available to deal with it. What else is in … Read more
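
A quick, generic way to check whether the run queue really is CPU-bound (standard Linux tooling, nothing specific to this particular setup):

# Sample system state once per second, five times
vmstat 1 5
# "r" column  = runnable tasks waiting for CPU (the run queue)
# "id" column = idle CPU percentage
# If "r" is consistently larger than the CPU count while "id" sits near 0,
# the box is short on CPU rather than waiting on I/O.

nproc   # number of CPUs to compare against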

Virtualization – Ten 1Gbps links or one 10Gbps link? (Performance)

1 x 10Gb link for performance. Otherwise, if a single server needs to push 1.1 Gbps to another server it can’t, because most load-balancing schemes hash on destination MAC or IP (which would be the same). This also avoids the situation where some links are busier than others for the same reason: if the hash works … Read more
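
For reference, a sketch of how the Linux bonding hash policy is usually inspected and changed (bond0 is a placeholder name; whether changing it helps depends entirely on the traffic mix):

# Show how the bond currently hashes outgoing traffic across member links
cat /sys/class/net/bond0/bonding/xmit_hash_policy   # e.g. "layer2 0"

# layer2 hashes on MAC only, so all traffic between two hosts stays on one link;
# layer3+4 also mixes in IP addresses and ports, spreading individual flows more evenly.
# Depending on the bonding mode and kernel, the bond may need to be down to change this.
echo layer3+4 | sudo tee /sys/class/net/bond0/bonding/xmit_hash_policy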

Low latency TCP settings on Ubuntu

Honestly, I wouldn’t be using Ubuntu for this… but there are options that can be applied to any Linux variant. You’ll want to increase your network stack buffers:
net.core.rmem_default = 10000000
net.core.wmem_default = 10000000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
If the application is writing to disk, maybe a scheduler/elevator change would be necessary (e.g. … Read more
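
A minimal sketch of how those sysctls are typically applied and made persistent; the file name below is an arbitrary choice:

# Apply immediately
sudo sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216

# Persist across reboots
cat <<'EOF' | sudo tee /etc/sysctl.d/60-lowlatency.conf
net.core.rmem_default = 10000000
net.core.wmem_default = 10000000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF
sudo sysctl --system   # reload all sysctl configuration files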