brabel 4 days ago

Optimizing for the JVM is a nightmare (or trivial, depending on your point of view).

Another anecdote: I profiled our Java-based server and concluded that most of the time was being spent creating temporary DI containers for request handling. I spent several days trying to avoid copying the "app" scope components into the "request" scope container (we had forked and heavily modified our DI container implementation) so that creating request containers was cheaper. I wrote a JMH benchmark that tried to emulate the real thing based on the real data (with something like 40% of components in the "app" scope and 60% in the "request" scope). My optimisation made the code noticeably faster; I can't remember by how much exactly, but probably something like 25%.
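The benchmark was very roughly along the lines of the sketch below. To be clear, the class, names and component counts here are made up for illustration; the real (forked) container was far more involved:

    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.TimeUnit;

    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public class RequestContainerBench {

        static final int APP_COMPONENTS = 400;      // ~40% "app" scope
        static final int REQUEST_COMPONENTS = 600;  // ~60% "request" scope

        Map<String, Object> appScope;

        @Setup
        public void setup() {
            appScope = new HashMap<>();
            for (int i = 0; i < APP_COMPONENTS; i++) {
                appScope.put("app-" + i, new Object());
            }
        }

        // Baseline: build each request container by copying the app scope in.
        @Benchmark
        public void copyingContainer(Blackhole bh) {
            Map<String, Object> request = new HashMap<>(appScope);
            for (int i = 0; i < REQUEST_COMPONENTS; i++) {
                request.put("req-" + i, new Object());
            }
            bh.consume(request);
        }

        // "Optimised" variant: keep only request-scoped components locally and
        // delegate lookups to the shared app scope instead of copying it.
        @Benchmark
        public void delegatingContainer(Blackhole bh) {
            Map<String, Object> request = new HashMap<>();
            for (int i = 0; i < REQUEST_COMPONENTS; i++) {
                request.put("req-" + i, new Object());
            }
            // Real lookups would check 'request' first, then fall back to appScope.
            bh.consume(request);
        }
    }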

When I ran the "real" performance test that simulates real-world usage of the server (not just the containers), the performance was slightly worse! I ended up throwing the code away, like the OP did. I had already spent too much time on it, so I couldn't dig much further into why that was. But from other experiments I can say that the JVM, thanks to JIT compilation of bytecode, will heavily optimize the hot path based on actual usage patterns, so much so that the machine code that ends up executing may resemble very little of the Java code you wrote. That's why I said it may also be "trivial" to optimize JVM code: just let the JIT do its job.
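You can see that warm-up effect even in a toy example. This is just a rough illustration of the JIT specialising a hot call site after it has seen the actual usage, not a rigorous benchmark (naive timing like this is exactly why JMH exists):

    import java.util.function.IntUnaryOperator;

    public class JitWarmupDemo {
        static long run(IntUnaryOperator op, int iterations) {
            long sum = 0;
            for (int i = 0; i < iterations; i++) {
                sum += op.applyAsInt(i);
            }
            return sum;
        }

        public static void main(String[] args) {
            IntUnaryOperator inc = x -> x + 1;

            // Cold: interpreted or only lightly compiled.
            long t0 = System.nanoTime();
            run(inc, 1_000_000);
            long cold = System.nanoTime() - t0;

            // Warm the loop up; after enough iterations the hot path is
            // fully JIT-compiled and the lambda call is typically inlined.
            for (int i = 0; i < 20; i++) run(inc, 1_000_000);

            long t1 = System.nanoTime();
            run(inc, 1_000_000);
            long warm = System.nanoTime() - t1;

            System.out.printf("cold: %d us, warm: %d us%n", cold / 1_000, warm / 1_000);
        }
    }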

Of course you may still obtain big gains when you optimize code that was doing stupid things, but because of JIT "magic", even your stupid code might run really fast given enough JIT passes.

necovek 4 days ago | parent

Definitely: sometimes testing in production is the only way to know whether something works or doesn't (or the only way that isn't cost-prohibitive).