LegionMammal978 4 days ago

On my desktop system, most of my problems with swap come from dealing with the aftermath of an out-of-control process eating all my RAM. In that situation, the offending program demands memory so quickly that everything belonging to legitimate programs gets swapped out. Those programs then run poorly for anywhere from several minutes to an hour, depending on usage, since the OS only swaps pages back in once they are referenced, even when there is plenty of free RAM that isn't even being used by the disk cache.

Eventually I wrote a small script that does the equivalent of "sudo swapoff -a && sudo swapon -a" to eagerly bring everything back into RAM, but I was surprised by how many people seemed to think there's no legitimate reason to ever want to do so.
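For reference, a minimal version of such a script; the free-RAM guard is just something sensible to keep it from OOMing the machine, and the 512 MiB margin is arbitrary:

    #!/bin/sh
    # Drain swap back into RAM, but only if there's clearly room for it.
    # /proc/meminfo values are in kiB; 524288 kiB = 512 MiB safety margin.
    swap_used=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
    mem_avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    if [ "$mem_avail" -lt $((swap_used + 524288)) ]; then
        echo "Not enough free RAM to drain swap safely, aborting." >&2
        exit 1
    fi
    # swapoff faults every swapped-out page back in; swapon re-enables swap.
    sudo swapoff -a && sudo swapon -a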

Sophira 3 days ago | parent | next [-]

> I was surprised by how many people seemed to think there's no legitimate reason to ever want to do so.

Sounds like it's as legitimate as running the sync command - i.e. ideally you should never need to do it, but in practice you sometimes do.

aidenn0 3 days ago | parent [-]

I still run "sync" before removing a USB drive. I'm sure it's entirely unnecessary now, but old habits die hard.

ziml77 3 days ago | parent [-]

You definitely want to ensure the buffers are flushed. Very annoyingly, it's not the default behavior on Linux distros for removable devices to be mounted with write caching disabled. I don't even know of an easy option to make Linux do that; I think you'd need to write a custom udev rule.

aidenn0 2 days ago | parent [-]

> Very annoyingly, it's not the default behavior on Linux distros for removable devices to be mounted with write caching disabled

About a decade ago, removable devices definitely were mounted with the "sync" option on some distros. It really tanked write performance, though, so perhaps that's why they changed it. Certainly Plasma (and probably most DEs; I only use Plasma) will tell you when the device is fully unmounted if you use the udisks integration.
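If you do want that behavior back today, udisks can be told to add sync/flush per filesystem type instead of touching udev. This is a sketch I haven't tested, assuming a udisks2 new enough (2.9+) to read /etc/udisks2/mount_options.conf:

    # /etc/udisks2/mount_options.conf  (sketch, untested)
    # These override the built-in defaults udisks applies when a DE mounts
    # removable media. "flush" is vfat-specific; "sync" works on anything
    # but tanks write performance, as noted above.
    [defaults]
    vfat_defaults=uid=$UID,gid=$GID,flush
    exfat_defaults=uid=$UID,gid=$GID,sync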

ziml77 2 days ago | parent [-]

The problem is that the write buffer turns copy progress into a complete lie. The last time I put a large file on a removable drive from Linux, the copy finished suspiciously fast. But I figured that surely Linux wouldn't be using a write buffer when Windows hasn't used one on removable devices for 20 years, so I went on my way and shut down the computer... which led to me sitting at a gray screen. I had to just wait there with no indication of progress, or even that it was doing anything at all.

If I hadn't been aware of what was happening, I likely would have just forced the computer off after a minute of waiting. And I suspect that if I had done that and then checked the drive, the file would have appeared to be there while actually missing part of its data.

aidenn0 2 days ago | parent [-]

I did some digging and found:

1. Arch uses the "flush" mount option by default when using udisks (which is how removable devices are mounted interactively from a DE).

2. Manjaro has a package called "udev-usb-sync" that matches USB devices in udev and limits the write-buffer size. By default it calculates the buffer size from the USB transfer speed (you can instead specify a constant value). Given that I have some USB 3.1 devices that can't maintain 1 MB/s of throughput while others can sustain over 200 MB/s, and both report the same transfer speed to Linux, I don't know how effective that heuristic is.
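I believe the underlying mechanism is just the per-device writeback knob in sysfs, so a stripped-down equivalent (not the package's actual rule; the 1% cap and the file name are only examples) would be a udev rule along these lines:

    # /etc/udev/rules.d/99-usb-writeback.rules (illustrative)
    # Cap dirty writeback cache for USB-attached disks at ~1% of the global
    # dirty limit, so writes start hitting the device almost immediately.
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ENV{ID_BUS}=="usb", RUN+="/bin/sh -c 'echo 1 > /sys/block/%k/bdi/max_ratio'"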

ciupicri 3 days ago | parent | prev | next [-]

To add insult to injury, swapoff doesn't read from disk sequentially, but in some "random" order, so it's a huge pain if you're using a hard disk, although even SSDs would benefit from sequential reads.

grogers 3 days ago | parent [-]

When I was last messing with this ~10 years ago, swapoff was insanely slow even with an SSD. Even relatively small, single-digit-GB swap partitions would take many minutes to drain. I think it was loading one page at a time from swap or something.

hugo1789 3 days ago | parent | prev | next [-]

That works if there is enough memory after the "bad" process has been killed. The question is, is it necessary? Many systems can live with processes performing a little bit poorly for a few minutes, and I wouldn't do it.

creer 3 days ago | parent | next [-]

It's fine that "many systems" can. But there's no easy recourse when the user or system can't. Flushing back to RAM is slow - that's not controversial. So it would help if there were a way to do this ahead of time for the programs where it matters.

aeonik 3 days ago | parent [-]

You mean like vmtouch and madvise?

I use vmtouch all the time to preload or even lock certain data/code into RAM.
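For example (paths are placeholders; -dl is the "lock it and keep a daemon holding the lock" mode from the vmtouch README):

    # Fault a directory tree back into the page cache right now
    vmtouch -vt /srv/hot-dataset
    # Pin it so memory pressure can't evict it (the daemonized vmtouch
    # process keeps holding the mlock)
    vmtouch -dl /srv/hot-dataset
    # Check how much of it is currently resident
    vmtouch -v /srv/hot-dataset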

michaelt 3 days ago | parent | prev [-]

> The question is, is it necessary? Many systems can live with processes performing a little bit poorly for a few minutes, and I wouldn't do it.

The outage ain't resolved until things are back to operating normally.

If things aren't back to 100% healthy, could be I didn't truly find the root cause of the problem - in which case I'll probably be woken up again in 30 minutes when the problem comes back.

whatevaa 3 days ago | parent [-]

Desktops are not servers. There might be no problem at all, just some hungry but legitimate program (or VM).

Zefiroj 3 days ago | parent | prev | next [-]

Check out the lru_gen_min_ttl from MGLRU.
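On kernels built with CONFIG_LRU_GEN it's exposed in sysfs; roughly (the 1000 ms figure is just an example):

    # Enable MGLRU if it isn't already
    echo y | sudo tee /sys/kernel/mm/lru_gen/enabled
    # Protect the working set of the last 1000 ms; if that can't be kept
    # resident, the kernel OOM-kills the offender instead of thrashing.
    echo 1000 | sudo tee /sys/kernel/mm/lru_gen/min_ttl_ms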
