spockz a day ago

For me the main takeaway of this is that you want automated performance tests in place, combined with flamegraph insights by default, especially for these kinds of major language upgrades.

malkia a day ago | parent | next [-]

Benchmarking requires a somewhat different setup than the rest of your testing, especially if you want timings down to the millisecond.

We have continuous benchmarking of one of our tools (it's written in C++), and to get the "same" results every time we launch it on the same machine. This is far from ideal, but otherwise there would be noisy neighbours, a pesky host (if it's a VM), etc.

One idea we had was to run the same test on the same machine several times, checking older vs. newer code (ideally toggled through switches). This could work for some code paths, but not really for continuous check-ins.

Just wondering what folks do. I can guess, but there is always something hidden or not well known.

spockz a day ago | parent | next [-]

I agree that for measuring latency differences you want similar setups. However, by running two versions of the app concurrently on the same machine, both are impacted more or less equally by noisy neighbours. Moreover, by inspecting the flamegraph you can quickly spot large shifts in where time is spent, at least manually. For automatic comparison you can of course use the raw data.
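
A minimal sketch of what that raw-data comparison could look like, assuming you already have latency samples from both versions; the use of medians and the 10% threshold are just illustrative choices:

    import java.util.Arrays;

    // Compare raw latency samples from two versions run side by side on the
    // same machine. Median and the 10% threshold are illustrative choices,
    // not a recommendation.
    public class LatencyCompare {
        static double median(double[] samples) {
            double[] sorted = samples.clone();
            Arrays.sort(sorted);
            int n = sorted.length;
            return n % 2 == 1 ? sorted[n / 2]
                              : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
        }

        public static void main(String[] args) {
            double[] baselineMs  = {12.1, 11.8, 12.4, 12.0, 11.9}; // old version
            double[] candidateMs = {13.9, 14.2, 13.7, 14.0, 14.1}; // new version
            double delta = median(candidateMs) / median(baselineMs) - 1.0;
            System.out.printf("median delta: %+.1f%%%n", delta * 100);
            if (delta > 0.10) {
                System.out.println("possible regression: check the flamegraph diff");
            }
        }
    }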

In addition, you can look at total CPU seconds used, memory allocation at the kernel level, and, specifically for the JVM, at GC metrics and allocation rate. If these numbers change significantly, you know you need to take a look.
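
A rough sketch of reading those JVM-level counters in-process, using the standard GarbageCollectorMXBean plus the HotSpot-specific com.sun.management.ThreadMXBean for per-thread allocation (so this assumes a HotSpot-based JVM); you would snapshot these before and after a run and diff them:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Snapshot GC counters and per-thread allocation; diff two snapshots
    // taken before and after a benchmark run.
    public class JvmCounters {
        public static void main(String[] args) {
            long gcCount = 0, gcTimeMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                gcCount += gc.getCollectionCount();
                gcTimeMs += gc.getCollectionTime();
            }
            System.out.println("GC collections: " + gcCount + ", GC time (ms): " + gcTimeMs);

            // Per-thread allocated bytes is a HotSpot extension
            // (com.sun.management), not part of java.lang.management.
            com.sun.management.ThreadMXBean threads =
                    (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
            long allocatedBytes = 0;
            for (long id : threads.getAllThreadIds()) {
                long bytes = threads.getThreadAllocatedBytes(id);
                if (bytes > 0) allocatedBytes += bytes;
            }
            System.out.println("allocated bytes (live threads): " + allocatedBytes);
        }
    }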

We do run this benchmark comparison in most nightly builds and find regressions this way.

malkia a day ago | parent [-]

Good points there - Thanks @spockz!

esafak 4 hours ago | parent | prev [-]

Hardware performance counters (https://en.wikipedia.org/wiki/Hardware_performance_counter) can help with noisy neighbors. I am still getting into this.

esafak a day ago | parent | prev [-]

What are folks using for perf testing on JVM these days?

cogman10 a day ago | parent | next [-]

For production systems I use flight recordings (JFRs). To analyze them I use Java Mission Control.
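
Recordings can be started with -XX:StartFlightRecording or via jcmd, but for targeted runs the programmatic API works too. A small sketch (JDK 11+); runWorkload is just a placeholder for the code under test:

    import java.nio.file.Path;
    import jdk.jfr.Configuration;
    import jdk.jfr.Recording;

    // Start a flight recording with the built-in "default" settings around a
    // workload, then dump the .jfr file for analysis in Java Mission Control.
    public class JfrSketch {
        public static void main(String[] args) throws Exception {
            Configuration config = Configuration.getConfiguration("default");
            try (Recording recording = new Recording(config)) {
                recording.start();

                runWorkload(); // placeholder for the code under test

                recording.dump(Path.of("benchmark.jfr"));
            }
        }

        static void runWorkload() {
            long sum = 0;
            for (int i = 0; i < 10_000_000; i++) sum += i;
            System.out.println(sum);
        }
    }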

For OOME problems I use a heap dump and the Eclipse Memory Analyzer tool.

For microbenchmarks I use JMH, but I tend to try to avoid doing those.

spockz a day ago | parent | prev | next [-]

I use JMH for microbenchmarks on any code we know is sensitive, and to highlight performance differences between implementations. (We usually keep them around as an archive of what we tried, but don't run them on CI.)
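
A JMH benchmark doesn't need much boilerplate. A minimal sketch of that kind of implementation comparison; the string-concatenation workload is just a placeholder, and returning the result keeps the JIT from dead-code-eliminating the work being measured:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;

    // Minimal JMH comparison of two implementations of the same operation.
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 3)
    @Measurement(iterations = 5)
    @Fork(1)
    @State(Scope.Thread)
    public class ConcatBenchmark {
        String a = "hello", b = "world";

        @Benchmark
        public String plusConcat() {
            return a + b;
        }

        @Benchmark
        public String builderConcat() {
            return new StringBuilder().append(a).append(b).toString();
        }
    }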

Then we benchmark the whole Java app in its container, running async-profiler and feeding the results into Pyroscope. We created a test harness for this that spins up mocks of any dependencies, based on API subscription data and contracts, and simulates their performance.

This whole mechanism is generalised; the only requirement for the test harness to function is that the teams building individual apps work with contract-driven testing. During and after a benchmark we also verify whether other non-functionals still work as required, e.g. whether tracing is still linked to the right requests. This works for almost any language that we use.

noelwelsh a day ago | parent | prev | next [-]

JMH is what I've always used for small benchmarks.

gavinray a day ago | parent | prev | next [-]

async-profiler
