kqr | 6 days ago

It depends on how variable the load is compared to how fast the servers can scale up and down, and so on. As a rule of thumb, I keep enough headroom to handle twice the load while staying within triple the response time. You can solve the equations for your specific case, but eyeballing graphs such as [1], I end up somewhere in the area of 40 %.

The important part, of course, is to ask yourself: "How much increased load may I need to handle, and how much can I degrade system performance in doing so?" You may work in an industry that only ever sees 10 % additional load at timescales where scaling is infeasible, in which case you can pick a significantly higher normal utilisation level. Or maybe you're in an industry where you cannot degrade performance by more than 10 % even when hit by five times the load – then you need a much, much more conservative utilisation target.
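For concreteness, here is a minimal sketch of where the 40 % figure comes from. It assumes an M/M/1-style model where mean response time grows as 1/(1 - rho) with utilisation rho; the function name and the specific scenarios below are mine, purely for illustration:

    # Assumes mean response time ~ 1/(1 - rho). An n-fold load increase
    # takes utilisation from rho to n*rho, so the slowdown factor is
    #   (1 - rho) / (1 - n*rho).
    # Solving (1 - rho) / (1 - n*rho) = k for rho gives the highest
    # utilisation that keeps the slowdown within k:
    #   rho = (k - 1) / (n*k - 1),
    # valid only while n*rho < 1 (the scaled-up queue stays stable).

    def max_utilisation(load_factor: float, slowdown_factor: float) -> float:
        """Highest utilisation at which load can grow `load_factor`-fold
        while mean response time grows at most `slowdown_factor`-fold."""
        n, k = load_factor, slowdown_factor
        return (k - 1) / (n * k - 1)

    print(max_utilisation(2, 3))    # 0.4    -> the ~40 % rule of thumb
    print(max_utilisation(1.1, 3))  # ~0.87  -> mild spikes: run much hotter
    print(max_utilisation(5, 1.1))  # ~0.022 -> strict latency under 5x load

The three calls match the scenarios above: the 2x-load/3x-latency rule of thumb, the industry that only ever sees 10 % extra load, and the one that tolerates at most 10 % degradation under five times the load.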
It depends on how variable the load is, compared to how fast the servers can scale up and down, etc. I often have as a rule of thumb to have enough headroom to be able to deal with twice the load while staying within a triple of the response time. You can solve the equations for your specific case, but eyeballing graphs such as [1] I end up somewhere in the area of 40 %. The important part is of course to ask yourself the question "how much increased load may I need to handle, and how much can I degrade system performance in doing so?" You may work in an industry that only ever sees 10 % additional load at timescales where scaling is unfeasible, and then you can pick a significantly higher normal utilisation level. Or maybe you're in an industry where you cannot degrade performance by more than 10 % even if hit by five times the load – then you need a much, much more conservative target for utilisation. |