mpyne 4 days ago
You absolutely do not want 90-95% utilization. At that level of utilization, random variability alone is enough to cause massive whiplash in average queue lengths. The cycle-time impact of variability on a single-server/single-queue system at 95% load is nearly 25x the impact on the same system at 75% load, and there are similar measures for other process queues. As the other comment notes, you should really work from the assumption that 80% is max loading, just as you'd never size a swap file or swap partition at exactly the amount of memory overcommit you expect.
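For a rough sense of where numbers like that come from: in the textbook M/M/1 model, expected time waiting in queue scales as u/(1-u) times the mean service time, where u is utilization. A minimal sketch in plain Python (assuming Poisson arrivals and exponential service purely for illustration; real workloads and the variability correction in Kingman's formula shift the exact multipliers):

    # Sketch: how the M/M/1 queueing-delay factor u/(1-u) explodes near saturation.
    # Assumes Poisson arrivals and exponential service times; exact multipliers for
    # real systems depend on variability, so treat these numbers as illustrative only.
    def delay_factor(utilization: float) -> float:
        """Expected queueing wait, in multiples of mean service time (M/M/1)."""
        return utilization / (1.0 - utilization)

    for u in (0.50, 0.75, 0.80, 0.90, 0.95, 0.99):
        print(f"u = {u:.2f} -> wait ~ {delay_factor(u):5.1f}x mean service time")

At 75% load a job waits about 3 service times on average; at 95% it waits about 19, and the curve goes nearly vertical from there, which is why a small burst of extra arrivals turns into a huge swing in queue length.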
rcxdude 4 days ago | parent | next
Man, if there's one idea I wish I could jam into the head of anyone running an organization, it would be queuing theory. So many people can't understand that slack is necessary for quick turnaround.
sovietmudkipz 3 days ago | parent | prev
I target 80% utilization because I’ve seen that figure multiple times. I suppose I should rephrase: I’d like to understand the constraints and systems involved that make 80% considered full utilization. There’s obviously something that limits an OS; is it tunable? These are the questions I imagine a thorough multiplayer solutions engineer would be curious about, the kind of person who’s trying to squeeze as much juice as possible out of the hardware.
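One way to get a feel for the constraint without any OS-specific detail is to simulate a single-server FIFO queue and compare average waits at different loads. A rough sketch in plain Python (exponential arrival and service times assumed purely for illustration; the function name and parameters here are hypothetical, not from any real tool):

    import random

    def mean_wait(utilization: float, jobs: int = 200_000, seed: int = 1) -> float:
        """Average wait (in units of mean service time) for a single-server FIFO
        queue, via Lindley's recursion: W_next = max(0, W + service - interarrival)."""
        rng = random.Random(seed)
        mean_service = 1.0
        mean_interarrival = mean_service / utilization
        wait = 0.0
        total = 0.0
        for _ in range(jobs):
            service = rng.expovariate(1.0 / mean_service)
            interarrival = rng.expovariate(1.0 / mean_interarrival)
            wait = max(0.0, wait + service - interarrival)
            total += wait
        return total / jobs

    for u in (0.75, 0.80, 0.90, 0.95):
        print(f"utilization {u:.0%}: average wait ~ {mean_wait(u):.1f}x service time")

The jump from 80% to 95% is far larger than the jump from 75% to 80%, and the run-to-run variance grows with it; that nonlinearity, not any single OS knob, is the constraint people are pointing at when they treat ~80% as full.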