| ▲ | darkamaul 7 hours ago |
| The "use cooldown" [0] blog post looks particularly relevant today. I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. That's harder to undo a compromised package already in thousands of lock files, than to manually patch a already exploited vulnerability in your dependencies. [0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using... |
|
| ▲ | plomme 3 hours ago | parent | next [-] |
| Why not take it further and not update dependencies at all until you have to, because of a missing feature or a compatibility requirement? If it works, it works. |
| |
| ▲ | skybrian 3 hours ago | parent | next [-] | | The arguments for doing frequent releases partially apply to upgrading dependencies. Upgrading gets harder the longer you put it off. It’s better to do it on a regular schedule, so there are fewer changes at once and it preserves knowledge about how to do it. A cooldown is a good idea, though. | |
| ▲ | kunley 2 hours ago | parent | prev | next [-] | | > Why not take it further and not update dependencies at all until you have to, because of a missing feature or a compatibility requirement? If it works, it works. Indeed, there are people doing that, and communities where there's a consensus that such an approach makes sense, or at least isn't frowned upon. (Hi, Gophers) | |
| ▲ | SkyPuncher 2 hours ago | parent | prev | next [-] | | This works until you consider regular security vulnerability patching (which we have compliance/contractual obligations for). | |
| ▲ | bigstrat2003 2 hours ago | parent | prev | next [-] | | That is indeed what one should do IMO. We've known for a long time now in the ops world that keeping versions stable is a good way to reduce issues, and it seems to me that the same principle applies quite well to software dev. I've never found the "but then upgrading is more of a pain" argument to be persuasive, as it seems to be equally a pain to upgrade whether you do it once every six months or once every six years. | |
| ▲ | jonfw 2 hours ago | parent | prev | next [-] | | There is a Goldilocks effect. Dependency just came out a few minutes ago? There is no time for the community to catch the vulnerability, no real coverage from dependency scans, and it's a risk. Dependency came out a few months ago? It likely has a large number of known vulns | |
| ▲ | yupyupyups an hour ago | parent | prev | next [-] | | Just make sure to update when new CVEs are revealed. Also, some software is always buggy, and every version is a mixed bag of new features, bugs, and regressions. That can be due to the complexity of the problem the software is trying to solve, or because it's just not written well. | |
| ▲ | tim1994 2 hours ago | parent | prev | next [-] | | Because updates don't just include new features but also bug and security fixes. As always, it probably depends on the context how relevant this is to you. I agree that cooldown is a good idea though. | | |
| ▲ | ryandrake 2 hours ago | parent | next [-] | | > Because updates don't just include new features but also bug and security fixes. This practice needs to change, although it will be almost impossible to get a whole ecosystem to adopt the change. You shouldn’t have to take new features (and associated new problems) just to get bug fixes and security updates. They should be offered in parallel. We need to get comfortable again with parallel maintenance branches for each major feature branch, and comfortable with backporting fixes to older releases. | | |
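Semver ranges already give consumers a crude version of this, assuming maintainers actually publish patch releases on old lines: with standard npm range semantics, a tilde range takes patch fixes but not new minor-version features.

    "dependencies": {
      "express": "~4.18.0"
    }

Here ~4.18.0 matches 4.18.x but never 4.19.0, so you only pick up the fix line (express is just a stand-in example).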
| ▲ | tshaddox 27 minutes ago | parent | next [-] | | Are you just referring to backporting? | |
| ▲ | nine_k an hour ago | parent | prev [-] | | Semver was invented to facilitate that. If only everyone adhered to it. | |
| ▲ | ghurtado 15 minutes ago | parent [-] | | > Semver was invented to facilitate that First time I've heard that. How does semver facilitate backporting? |
|
| |
| ▲ | theptip 2 hours ago | parent | prev | next [-] | | IMO for “boring software” you usually want to be on the oldest supported major/minor version, keeping an eye on the newest point version. That will have all the security patches. But you don't need to take every bug fix blindly. | | |
| ▲ | shermantanktop 2 hours ago | parent | prev [-] | | For any update:
- it usually contains improvements to security
- except when it quietly introduces security defects which are discovered months later, often in a major rev bump
- but every once in a while it degrades security spectacularly and immediately, published as a minor rev |
| |
|
|
| ▲ | jacquesm 6 hours ago | parent | prev | next [-] |
| But even then you are still depending on others to catch the bugs for you and it doesn't scale: if everybody did the cooldown thing you'd be right back where you started. |
| |
| ▲ | bootsmann 3 minutes ago | parent | next [-] | | It does scale against this form of attack.
This attack propagates by injecting itself into the packages you host. If you pull only 7d after release you are infected 7d later. If your customers then also only pull 7d later they are pulling 14d after the attack has launched, giving defenders a much longer window by slowing down the propagation of the worm. | |
| ▲ | falcor84 3 hours ago | parent | prev | next [-] | | I don't think that this Kantian argument is relevant in tech. We've had LTS versions of software for decades and it's not like every single person in the industry is just waiting for code to hit LTS before trying it. There are a lot of people and (mostly smaller) companies who pride themselves on being close to the "bleeding edge", where they're participating more fully in discovering issues and steering the direction. | |
| ▲ | nine_k an hour ago | parent | prev | next [-] | | To find a vulnerability, one does not necessarily deploy a vulnerable version to prod. It would be wise to run a separate CI job that tries to upgrade to the latest versions of everything, runs the tests, watches network traffic, and otherwise looks for suspicious activity. This can be done relatively economically, and the responsibility could be reasonably distributed across the community of users. | |
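A minimal sketch of such a canary job, assuming an npm project and whatever egress monitoring your CI offers:

    git switch -c deps-canary      # throwaway branch, never merged
    npx npm-check-updates -u       # bump every range in package.json to latest
    npm install
    npm test
    # run in an isolated runner and diff its network traffic against a baseline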
| ▲ | woodruffw 5 hours ago | parent | prev | next [-] | | The assumption in the post is that scanners are effective at detecting attacks within the cooldown period, not that end-device exploitation is necessary for detection. (This may end up not being true, in which case a lot of people are paying security vendors a lot of money to essentially regurgitate vulnerability feeds at them.) | |
| ▲ | vintagedave 2 hours ago | parent | prev [-] | | That worried me too, a sort of inverse tragedy of the commons. I'll use a weeklong cooldown, _someone else_ will find the issue... Until no-one does, for a week. To stretch the original metaphor, instead of an overgrazed pasture, we grow a communally untended thicket which may or may not have snakes when we finally enter. |
|
|
| ▲ | Sammi 5 hours ago | parent | prev | next [-] |
| Pretty easy to do using npm-check-updates: https://www.npmjs.com/package/npm-check-updates#cooldown In one command:

    npx npm-check-updates -c 7
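And if you want to apply the changes rather than just list them, ncu's -u flag writes the new versions into package.json:

    npx npm-check-updates -c 7 -u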
|
| |
| ▲ | tragiclos 3 hours ago | parent [-] | | The docs list this caveat: > Note that previous stable versions will not be suggested. The package will be completely ignored if its latest published version is within the cooldown period. Seems like a big drawback to this approach. | | |
| ▲ | nfriedly 2 hours ago | parent [-] | | I could see it being a good feature. If there have been two versions published within the last week or two, then there are reasonable odds that the previous one had a bug. | | |
| ▲ | hokkos 41 minutes ago | parent [-] | | Some libs literally publish a new package for every merged PR, so multiple times a day. |
|
|
|
|
| ▲ | Ygg2 6 hours ago | parent | prev [-] |
| I don't buy this line of reasoning. There are zero/one-day vulnerabilities that will get extra time to spread. Also, if everyone switches to the same cooldown, wouldn't this just postpone the discovery of future Shai-Huluds? I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, by users, then a cooldown will do nothing. |
| |
| ▲ | hyperpape 2 hours ago | parent | next [-] | | For zero/one days, the trick is that you'd pair dependency cooldowns with automatic scanning for vulnerable dependencies. And in the cases where you have vulnerable dependencies, you'd force update them before the cooldown period had expired, while leaving everything else you can in place. | |
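With plain npm tooling, that combination might look like this (a sketch; scanner quality varies):

    npm audit --audit-level=high   # surface known vulns in the lock file
    npm audit fix                  # bump only the affected dependency ranges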
| ▲ | __s 5 hours ago | parent | prev | next [-] | | There are companies like Helix Guard scanning registries. They advertise static analysis / LLM analysis, but honeypot instances can also install packages and detect when sensitive files like cloud configs are accessed. | |
| ▲ | Yokohiii 3 hours ago | parent [-] | | But relying on the goodwill of commercial sec vendors is its own infrastructure risk. | |
| ▲ | perlgeek 24 minutes ago | parent | next [-] | | You can also pay a commercial sec vendor if you don't want to rely on their goodwill. | |
| ▲ | limagnolia an hour ago | parent | prev [-] | | So don't rely on their goodwill? Instead, pay them under a contract, or do it yourself. |
|
| |
| ▲ | wavemode 2 hours ago | parent | prev [-] | | Your line of reasoning only makes sense if literally almost all developers in the world adopt cooldowns, and adopt the same cooldown. That would be a level of mass participation yet unseen by mankind (in anything, much less something as subjective as software development). I think we're fine. |
|