swatcoder 9 hours ago
> You're not thinking about the system dependencies.

You're correct, because it's completely neurotic to worry about phantom bugs that have no actual presence of mind but must absolutely positively be resolved as soon as a candidate fix has been pushed.

If there's a zero-day vulnerability that affects your system, which is a rare but real thing, you can be notified and bypass a cooldown system. Otherwise, you've presumably either adapted your workflow to work around a bug or you never even recognized one was there. Either way, waiting an extra <cooldown> before applying a fix isn't going to harm you, but it will dampen the much more dramatic risk of instability and supply chain vulnerabilities associated with being on the bleeding edge.
jcalvinowens 9 hours ago | parent
> You're correct, because it's completely neurotic to worry about phantom bugs that have no actual presence of mind but must absolutely positively be resolved as soon as a candidate fix has been pushed.

Well, I've made a whole career out of fixing bugs like that. Just because you don't see them doesn't mean they don't exist.

It is shockingly common to see systems bugs that don't trigger for a long time by luck, and then suddenly trigger out of the blue everywhere at once. Typically they're set off by innocuous changes in unrelated code, which is what makes them so nefarious.

The most recent example I can think of was an uninitialized variable in some kernel code: hundreds of devices ran that code reliably for a year, but an innocuous change in the userland application made the device crash on startup almost 100% of the time. The fix had been in stable for months; they just hadn't bothered to upgrade. If they had upgraded, they'd have never known the bug existed :)

I can tell dozens of stories like that, which is why I feel so strongly about this.
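For readers who haven't hit this class of bug: below is a minimal C sketch of how an uninitialized variable can appear to work for a long time and then break when unrelated code changes. It is not the actual kernel code from the story; the names are invented, and reading an uninitialized variable is undefined behavior, so the observed outcome depends entirely on the compiler, optimization level, and whatever unrelated code happened to leave on the stack.

    /* Minimal sketch (invented names, not the kernel code described
     * above): an uninitialized variable whose value is whatever
     * unrelated code happened to leave on the stack. Reading it is
     * undefined behavior, so the result depends on compiler,
     * optimization level, and surrounding code -- which is how such
     * a bug can lie dormant and then surface everywhere at once. */
    #include <stdio.h>

    /* Hypothetical init path: 'ready' is never assigned before use. */
    static int device_init(void)
    {
        int ready;                /* BUG: uninitialized */
        if (ready)                /* reads stale stack contents */
            return 0;             /* looks like success... by luck */
        return -1;                /* ...until one day it doesn't */
    }

    /* Unrelated code; changing what it leaves behind on the stack can
     * flip the garbage value device_init() happens to read. */
    static void unrelated_work(int flag)
    {
        volatile int scratch = flag;
        (void)scratch;
    }

    int main(void)
    {
        unrelated_work(1);        /* an "innocuous" change here may expose the bug */
        if (device_init() != 0)
            printf("device failed to start\n");
        else
            printf("device started\n");
        return 0;
    }

The fix for a bug like this is typically a one-line initialization, which is why such patches can sit quietly in stable releases until someone actually upgrades.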