▲ colechristensen 4 days ago
> (target 80% resource utilization, funny things happen after that — things I don’t quite understand).

The closer you get to 100% resource utilization, the more regular your workload has to be. If you can queue requests and latency isn't a problem, then there's no problem, but at that point you have a batch process, not a live one (and obviously not a game).

The reason is that live work doesn't arrive in regular beats; it arrives in clusters that scale in a fractal way. If your long-term mean is one request per second, what actually happens is you get five requests in one second, three seconds with one request each, one second with two requests, and five seconds with zero requests (you get my point). Call it "fractal burstiness": you have to keep free resources to absorb the spikes at every scale.

On top of that, in very many systems the processing time for a single request grows as overall system load grows: "queuing latency blowup". So what happens? You get a spike, fall behind, and never, ever catch up.

https://en.wikipedia.org/wiki/Network_congestion#Congestive_...
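The blowup near 100% utilization is easy to reproduce with a textbook M/M/1 queue (my sketch, not from the comment above): Poisson arrivals, one server, exponential service times. Mean queue wait in that model is rho/(1-rho) service times, so going from 80% to 95% utilization roughly quintuples the waiting, which matches the simulation below. Function names and parameters here are my own illustration.

```python
import random

def avg_wait(utilization, n_jobs=200_000, seed=42):
    """Simulate an M/M/1 queue and return the mean time a job
    spends waiting in line before its service starts.

    Service rate is fixed at 1 (mean service time = 1), so the
    arrival rate equals the target utilization rho."""
    rng = random.Random(seed)
    arrival_rate = utilization
    t = 0.0          # clock: when the current job arrives
    free_at = 0.0    # when the server next becomes idle
    total_wait = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)   # next bursty arrival
        start = max(t, free_at)              # wait if server is busy
        total_wait += start - t
        free_at = start + rng.expovariate(1.0)  # service time, mean 1
    return total_wait / n_jobs

for rho in (0.5, 0.8, 0.9, 0.95):
    # Theory for M/M/1: mean wait = rho / (1 - rho) service times
    print(f"rho={rho:.2f}  simulated={avg_wait(rho):6.2f}  "
          f"theory={rho / (1 - rho):6.2f}")
```

Note that even with perfectly random (memoryless) arrivals the wait diverges as rho approaches 1; heavier-tailed, more "fractal" arrival processes only make it worse.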
▲ sovietmudkipz 4 days ago | parent
Yea. I realize I ought to dig into things more to understand how to push past 80% into 90%-95% utilization territory. Thanks for the resource to read through.