ahartmetz 4 hours ago
PREEMPT_LAZY triggering on page faults seems like a bad idea in light of this. It is probably not a good idea to suspend processes right when they get unexpectedly bogged down. The logic makes a little more sense for syscalls that are expected to take long compared to a scheduling quantum (a few milliseconds), but page faults are mostly invisible and unplannable.

It only took a few decades for Linux to get a good CPU scheduler and good I/O schedulers, too. I don't get how such an important area can be so bad for so long. But then, bad scheduling is everywhere. I find it a pretty fun area to work in, but, judging by how often it's less than half-assed in existing software, most developers seem to hate dealing with it?
AlienRobot 3 hours ago
One thing I miss from Windows is that the desktop didn't just freeze completely if you ran out of RAM. At first I thought that maybe Linux has no way to give priority to the desktop environment (a.k.a. the "graphical shell"), which is why running out of RAM means your cursor starts lagging, clicking on things stops working, etc. But maybe Linux is just bad at this in general, and a single process eating too much RAM can simply bring the whole system to a halt as it tries to move and compress RAM to a pagefile on an HDD (not an SSD).

Every time it happens to me I just find it incredible. Here I am with a PC with multiple cores and multiple processors, and a single process eating all the RAM can bottleneck ALL of them at once? Am I misunderstanding something? Shouldn't it, ideally, work in such a way that as long as one processor is free, the system can process mouse input, render the cursor and do all the desktop stuff no matter what I/O is happening in the background?

Since it's Linux, maybe it's just my DE/distro (Cinnamon/Mint). Maybe it does allocations under the assumption that there will always be a few free bytes of RAM available, so it halts when RAM runs out while some other DE wouldn't. But even then you'd think there would be a way to just reserve "premium" memory for critical processes so they never become unresponsive. I wonder if other people have the same experience. This part of Linux has just always felt fundamentally poor to me.
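Apparently the closest thing to "premium" memory (if I understand it right) is that a latency-critical process can pin its own pages so they are never swapped out. A minimal sketch using mlockall(), assuming the process has CAP_IPC_LOCK or a large enough RLIMIT_MEMLOCK:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Lock all current and future pages into RAM so this process is
         * never paged out, even under heavy memory pressure.
         * Needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ... latency-critical work: event loop, cursor rendering, ... */
        return 0;
    }

It doesn't stop a runaway process from dragging the rest of the system down, though; for that, cgroup memory limits or a userspace OOM killer like earlyoom/systemd-oomd seem to be the usual answer.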
bobmcnamara 4 hours ago
Userspace spinlocks seem like a risky idea too. What if it was on a VM and the core holding the lock got descheduled by the hypervisor?
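Roughly the pattern I mean, as a minimal sketch (a naive test-and-set lock in C11 atomics, not any particular library's implementation): if the thread holding the lock is descheduled, by the kernel or worse by a hypervisor preempting the whole vCPU, every waiter burns its full timeslice in this loop instead of sleeping.

    #include <stdatomic.h>

    typedef struct { atomic_flag locked; } spinlock_t;
    /* initialize with: spinlock_t lock = { ATOMIC_FLAG_INIT }; */

    static void spin_lock(spinlock_t *l)
    {
        /* Busy-wait until the flag clears. If the current holder has been
         * descheduled, this spins uselessly; the kernel has no idea anyone
         * is waiting, because there is no futex/park call here. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

A futex-based mutex avoids the worst of it by spinning only briefly and then asking the kernel to put the waiter to sleep.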