RantyDave | 4 hours ago
"a thousand-day uptime shouldn’t be folklore" I reboot a lot. Mostly I want to know that, should the system need to reboot for whatever reason, it will all come back up again. I run a very lightly loaded site and I highly doubt anybody notices the minute (or so) loss of service caused by rebooting. Pretty sure I don't feel bad about this.
klempner | 2 hours ago
There's a weird fetishization of long uptimes. I suspect some of this dates from the bad old days when Windows would outright crash after ~50 days of uptime. In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline, unremarkable expectation -- but that implies you don't need security updates, which means the system needs to not be exposed to the internet.

On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).

And of course FreeBSD hasn't implemented kernel live patching -- but then, that isn't a "long uptime" solution anyway; the point of live patching is to keep the system running safely until your next maintenance window.
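That "subtly different from a fresh reboot" state is at least detectable: a process that is still mapping a library or binary that a package upgrade replaced on disk shows up with a deleted backing file. A minimal sketch of that check, assuming a Linux-style /proc (on FreeBSD you'd reach for procstat or lsof instead; the function names here are my own illustration, not any standard tool):

```python
import glob

def is_stale_mapping(line):
    """True if a /proc/PID/maps line points at a replaced (unlinked) file.

    The kernel appends "(deleted)" to mappings whose backing file was
    removed from disk, e.g. by a package upgrade swapping in a new .so.
    """
    return "/" in line and line.rstrip().endswith("(deleted)")

def stale_processes():
    """Return sorted PIDs still running pre-update code.

    These are the processes you'd need to restart to match the state a
    fresh reboot would give you.
    """
    stale = set()
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        pid = int(maps_path.split("/")[2])
        try:
            with open(maps_path) as f:
                if any(is_stale_mapping(line) for line in f):
                    stale.add(pid)
        except OSError:
            continue  # process exited mid-scan; ignore it
    return sorted(stale)
```

Tools like Debian's needrestart do essentially this scan for you; the point is just that "restart all of userspace" can be narrowed down to the processes that actually hold stale code.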
arthurfirst | 4 hours ago
Regularly (every 3 years or so) had 1000+ days of uptime with FreeBSD rack servers on Supermicro mobos. I built the servers myself and then shipped them to a colo halfway around the world. I got over 1400 once, and then I needed to add a new disk. They ran for almost 13 years with some disk replacements, CPU upgrades, and memory additions.