| ▲ | 112233 14 hours ago |
| Is there a way to make the Linux kernel schedule in a "batch friendly" way? Say I do "make -j" and get 200 gcc processes doing a jobserver LTO link with 2GB RSS each. In my head, the optimal way through such a mess is to get as many processes as can fit into RAM without swapping, run them to completion, and schedule additional processes as resources become available. A depth-first, "infinite latency" mode. Any combination of cgroups, /proc flags and other forbidden knobs to get such behaviour? |
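| [Editor's sketch, not from the thread: there is no true depth-first mode, but two existing knobs get partway there. SCHED_BATCH tells the scheduler a task is throughput-oriented (fewer preemptions, no interactivity bonus), and a cgroup v2 memory.max confines the build's memory pressure to its own group. The cgroup paths and the 16G cap below are illustrative assumptions, not values from the thread.]

```shell
# SCHED_BATCH can be set without privileges (it only lowers the priority class);
# print the policy of a child running under it to show the knob taking effect.
policy=$(chrt --batch 0 sh -c 'chrt -p $$' | head -n 1)
echo "$policy"

# Root-only half of the sketch, shown but not executed here:
#   mkdir -p /sys/fs/cgroup/build
#   echo 16G > /sys/fs/cgroup/build/memory.max   # cap chosen purely for illustration
#   echo $$  > /sys/fs/cgroup/build/cgroup.procs
#   chrt --batch 0 make -j"$(nproc)"
```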
|
| ▲ | Neywiny 12 hours ago | parent | next [-] |
| "make -j" has OOM'd me more times than it's worth. If it's a big project I just put in how many threads I want. I do hear your point, but that is a solved problem. |
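| [Editor's sketch: "how many threads I want" can be derived rather than guessed. Assuming Linux's /proc/meminfo and the parent comment's ~2 GiB-per-LTO-job figure, a conservative -j that fits in currently available memory:]

```shell
# Pick -j so the jobs fit in MemAvailable instead of using unbounded "make -j".
avail_kib=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
jobs=$(( avail_kib / (2 * 1024 * 1024) ))   # 2 GiB per job, expressed in KiB
[ "$jobs" -ge 1 ] || jobs=1                 # always allow at least one job
echo "would run: make -j$jobs"
```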
| |
| ▲ | 112233 11 hours ago | parent [-] | | Actually, a global jobserver is another unsolved thing that, unbelievably, nobody seems to have done yet. You have a server. The server spins up N containers (kube pods, dockers, multiple user sessions ...), each of them building something. There is no general mechanism to run that batch of tasks in parallel in a way that uses the available cores. Some special cases (make/ninja/gcc) work, but no general mechanism I know of |
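| [Editor's sketch of one possible host-side approach, not something the thread confirms works in practice: GNU make >= 4.4 can use a named-pipe jobserver, and its tokens are single bytes read from a FIFO. A machine-wide FIFO preloaded with one token per core, bind-mounted into each container, could in principle cap total parallelism across builds; the path and MAKEFLAGS line are assumptions.]

```shell
# Create a machine-wide jobserver FIFO and preload one token per core.
fifo=/tmp/global-jobserver
rm -f "$fifo"; mkfifo -m 0666 "$fifo"
exec 3<>"$fifo"              # hold it open read-write so the preload can't block
n=$(nproc)
i=0; while [ "$i" -lt "$n" ]; do printf '+' >&3; i=$((i+1)); done
echo "preloaded $n jobserver tokens into $fifo"
# Each container's build would then (assumption) act as a jobserver client via:
#   MAKEFLAGS="-j --jobserver-auth=fifo:/tmp/global-jobserver" make
```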
|
|
| ▲ | direwolf20 13 hours ago | parent | prev [-] |
| It's not possible for the kernel to predict the memory needs of a process, unfortunately |
| |
| ▲ | 112233 12 hours ago | parent | next [-] | | But how about not scheduling swapped-out processes if there currently is no free RAM for their current RSS? Of course the kernel cannot know that a new process will balloon to eat all RAM, but once it has done so, is there a way to let it run to completion without being swapped out to "improve responsivity"? | | |
| ▲ | man8alexd 10 hours ago | parent [-] | | Modern kernels don't actually swap out whole processes. Nowadays it is paging: the kernel pages out individual unused memory pages, not entire processes, so all non-blocked processes keep running, with only the necessary pages resident in memory. |
| |
| ▲ | man8alexd 13 hours ago | parent | prev [-] | | It is possible to measure a process's memory utilisation and set appropriate cgroup limits. |
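| [Editor's sketch of the measurement half: a process's peak RSS is readable unprivileged from /proc/<pid>/status (VmHWM), and that number can seed a cgroup limit for the next run. The cgroup path and the N-jobs scaling below are illustrative assumptions; writing the limit needs root and cgroup v2.]

```shell
# Read this shell's own peak resident set size as a stand-in for a build job.
peak_kib=$(awk '/VmHWM/ {print $2}' /proc/self/status)
echo "peak RSS: ${peak_kib} KiB"
# With N parallel jobs, a starting point for the limit would be (root-only):
#   echo $(( peak_kib * 1024 * N )) > /sys/fs/cgroup/build/memory.max
```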
|