tanelpoder 7 hours ago

I understand that it's the interrupt-based I/O completion workloads that suffered from IOMMU overhead in your tests?

IOMMU may induce some interrupt remapping latency, I'd be interested in seeing:

1) interrupt counts (normalized to IOPS) from /proc/interrupts

2) "hardirqs -d" (bcc-tools) output for IRQ handling latency histograms

3) perf record -g output to see if something inside interrupt handling codepath takes longer (on bare metal you can see inside hardirq handler code too)
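A rough sketch of step 1 as a script (the "nvme" IRQ label match and the I/O count are assumptions; adjust to your device and benchmark) — snapshot /proc/interrupts before and after a timed run, then divide the per-IRQ deltas by the number of I/Os completed:

```python
# Sketch: normalize per-IRQ counts from /proc/interrupts to I/Os completed.
# Assumes the device's IRQ lines are labeled with "nvme"; adjust as needed.

def parse_interrupts(text, match="nvme"):
    """Return {irq_label: total_count} for lines mentioning `match`."""
    totals = {}
    for line in text.splitlines():
        if match not in line:
            continue
        irq, rest = line.split(":", 1)
        # Numeric tokens are per-CPU counts; sum them across CPUs.
        counts = [int(tok) for tok in rest.split() if tok.isdigit()]
        totals[irq.strip()] = sum(counts)
    return totals

def interrupts_per_io(before, after, total_ios):
    """Per-IRQ count delta divided by I/Os completed in between."""
    return {irq: (after[irq] - before.get(irq, 0)) / total_ios
            for irq in after}

# Usage (on Linux):
#   before = parse_interrupts(open("/proc/interrupts").read())
#   ... run the benchmark, note total_ios = IOPS * duration ...
#   after  = parse_interrupts(open("/proc/interrupts").read())
#   print(interrupts_per_io(before, after, total_ios))
```

A ratio well below 1 interrupt per I/O would suggest coalescing is in play; a ratio near 1 with higher latency points at delivery or handling cost instead.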

Would be interesting to see whether, with IOMMU, handling each interrupt takes longer on the CPU (or the handling time is roughly the same, but interrupt delivery takes longer). There may be some interrupt coalescing thing going on as well (I don't know exactly what else gets enabled with IOMMU).

Since interrupts are raised "randomly", independently of whatever your app/kernel code is running on the CPUs, it's a bit harder to visualize total interrupt overhead in something like flamegraphs, as the interrupt activity is all over the place in the chart. I used the flamegraph search/highlight feature to visually identify how much time the interrupt detours took during stress test execution.

Example here (scroll down a little):

https://tanelpoder.com/posts/linux-hiding-interrupt-cpu-usag...

eivanov89 6 hours ago | parent | next [-]

BTW, the whole situation with IRQ accounting disabled reminds me of the -fomit-frame-pointer case. For a long time there was no practical performance reason, but the option had been used anyway, making stacks slower and harder to build, both for perf analysis and for stack unwinding in languages like C++.

After careful reading I'm surprised how such small IRQ squares add up to 30%. I should search for interrupts when I inspect our flamegraphs next time.

tanelpoder 6 hours ago | parent [-]

I was doing over 11M IOPS during that test ;-)

Edit: I wrote about that setup and other Linux/PCIe root complex topology issues I hit back in 2021:

https://news.ycombinator.com/item?id=25956670

singron 3 hours ago | parent | next [-]

FYI 11M IOPS in terms of AWS EBS is 138 gp3 volumes (80K IOPS each), which costs about $56K/month or about $1.3M over 2 years. If anyone was considering using EBS for high-IOPS workloads, don't.

I think your test had ten 980 Pros, which were probably around $120 each at the time (~$1,200 total). SSDs are wildly more expensive now, but even if you spend $500 each, it's nowhere close to EBS.
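The math above can be sanity-checked with assumed gp3 list prices (roughly $0.005 per provisioned IOPS-month beyond the free 3,000 and $0.04 per MB/s-month beyond the free 125; storage GB-month cost excluded):

```python
import math

# Back-of-the-envelope check of the EBS comparison above.
# Per-volume gp3 prices below are assumptions (us-east-1-style, approximate).
target_iops = 11_000_000
iops_per_volume = 80_000

volumes = math.ceil(target_iops / iops_per_volume)

iops_cost = volumes * (iops_per_volume - 3_000) * 0.005  # provisioned IOPS
tput_cost = volumes * (1_000 - 125) * 0.04               # maxed-out throughput
monthly = iops_cost + tput_cost
print(volumes, round(monthly), round(monthly * 24))      # -> 138 57960 1391040
```

So roughly $58K/month and ~$1.4M over two years under these assumptions, in the same ballpark as the figures quoted above.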

It's apples vs oranges, but sometimes you just want fruit.

eivanov89 6 hours ago | parent | prev [-]

That's super hot. Especially the update with the 37M IOPS reference. Might be very useful for my next tasks related to a setup with 6 NVMe disks: 1. Get all disks saturated through the network (including RDMA usage). 2. Play with io_uring to share a polling thread. Currently, no luck: if I share a kernel poller between two devices, the improvement is just +30% (at the cost of 1 core). Considering alternative schemes now.

eivanov89 7 hours ago | parent | prev [-]

Unfortunately, we don't have proper measurements for IOPOLL mode with and without IOMMU, because initially we didn't configure IOPOLL properly. However, I bet that this mode will be affected as well, because the disk still has to write through the IOMMU.

You suggest some very interesting measurements. I will keep them in mind and try them during the next experiments. Wish I had read this before, so I could have applied them during the past runs :)

tanelpoder 7 hours ago | parent [-]

Yeah you'd still have the IOMMU DMA translation, but would avoid the interrupt overhead...