pickledish | a day ago:
Another potential avenue for problems like this, which I'm a fan of, is taking advantage of k8s's static CPU management policy: https://kubernetes.io/docs/tasks/administer-cluster/cpu-mana... Using this (plus Guaranteed QoS), you end up with containers that can only "see" a subset of the node's cores, and they get those cores all to themselves. That's great for reducing noisy neighbors when big machines are running many different services.
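A minimal sketch of the pod-spec side of this, assuming the node's kubelet is already running with the static policy enabled; the pod, container, and image names here are placeholders:

```yaml
# Assumes the kubelet was started with --cpu-manager-policy=static.
# The static policy only grants exclusive cores to Guaranteed-QoS
# containers whose CPU request is a whole integer equal to the limit.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-worker          # placeholder name
spec:
  containers:
  - name: worker               # placeholder name
    image: example/worker:1.0  # placeholder image
    resources:
      requests:
        cpu: "2"               # integer CPU count -> eligible for exclusive cores
        memory: "1Gi"
      limits:
        cpu: "2"               # must equal the request for Guaranteed QoS
        memory: "1Gi"
```

With a fractional CPU value (e.g. `cpu: "1.5"`) the pod can still be Guaranteed QoS, but it stays in the shared pool rather than getting pinned cores.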
ewidar | a day ago:
I'm assuming that in OP's case they want their Go process to "see" the machine as it is, though, to surface more accurate/better stats? Interesting link nonetheless, thanks!
cbatt | a day ago:
Ah, interesting. I'll have to dive in deeper here. If I understand correctly, this essentially gives you exclusive core pinning? Do you find that this reduces total utilization when workloads burst but can't spill onto the other, unused cores?
0cf8612b2e1e | a day ago:
I thought this was why you're supposed to use `nproc` instead of manually parsing /proc/cpuinfo or some other mechanism to determine the CPU count. There are various ways in which a given process can be limited to a subset of the system's resources.