j-vogel 7 hours ago
Author here. DB and external service calls are often the biggest wins; thanks for calling that out. In my demo app, the CPU hotspots were entirely in application code, not I/O wait. And across a fleet, even "smaller" gains in CPU and heap compound into real cost and throughput differences. They're different problems, but your point is valid. The goal here is to get more folks thinking about other aspects of performance, especially when the software is running at scale.
PathOfEclipse 3 hours ago | parent
My experience profiling is that I/O wait is rarely the problem. However, the app may actually be spending most of its CPU time interacting with the database. In general, networks have gotten so fast relative to CPUs that the CPU cost of marshalling or serializing data across a protocol ends up being the limiting factor. I got a major speedup once just by updating the JSON serialization library an app used.
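To make the point concrete, here's a minimal, hypothetical sketch (the payload shape and request handler are invented for illustration) showing how a profiler surfaces serialization, not I/O, as the hot path in a request-like loop:

```python
import cProfile
import io
import json
import pstats

# Hypothetical payload: a batch of rows like an app might send over the wire.
rows = [
    {"id": i, "name": f"user{i}", "tags": ["a", "b"], "score": i * 0.5}
    for i in range(10_000)
]

def handle_request():
    # Simulates the response path: there is no real I/O here, so the CPU
    # time the profiler attributes to this call is almost entirely
    # JSON serialization.
    return json.dumps(rows)

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    handle_request()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
stats_text = stream.getvalue()
print(stats_text)  # json.dumps and its encoder dominate cumulative time
```

Run a report like this before and after swapping the serialization library, and the "network is slow" intuition often inverts: the wire is cheap, the marshalling is not.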