| ▲ | layer8 9 hours ago |
| People in this thread are worried that they are significantly vulnerable if they don't update right away. However, this is mostly not an issue in practice. A lot of software doesn't have continuous deployment, but instead has customer-side deployment of new releases, which follows a slower rhythm of several weeks or months, barring emergencies. They are fine. Most vulnerabilities that aren't supply-chain attacks are only exploitable under special circumstances anyway. The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affected. Only then do you need to update that specific dependency right away. |
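The monitoring half of this advice is easy to automate. A minimal sketch in Python against the OSV.dev vulnerability database (the endpoint and response shape are as I understand OSV's query API; verify against its current docs before relying on this):

```python
import json
from urllib.request import Request, urlopen

OSV_URL = "https://api.osv.dev/v1/query"  # OSV.dev aggregates advisories across ecosystems

def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Ask OSV.dev for known vulnerabilities affecting one pinned dependency."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = Request(OSV_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

def advisory_ids(osv_response: dict) -> list[str]:
    """Pull the advisory IDs out of an OSV query response."""
    return [v["id"] for v in osv_response.get("vulns", [])]

# Offline example with a response shaped like OSV's:
sample = {"vulns": [{"id": "GHSA-xxxx-yyyy-zzzz", "summary": "..."}]}
print(advisory_ids(sample))  # ['GHSA-xxxx-yyyy-zzzz']
```

Run something like this against your lockfile on a schedule, and the "assess whether you're affected" step only fires when the ID list is non-empty.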
|
| ▲ | embedding-shape 9 hours ago | parent | next [-] |
| > for critical vulnerabilities to assess whether your product is affected. Only then do you need to update that specific dependency right away. This is indeed what's missing from the ecosystem at large. People seem to be under the impression that if a new release of a software/library/OS/application comes out, you need to move to it today. They don't seem to actually look through the changes, only doing that if anything breaks, and then proceed to upgrade because "why not" or "it'll only get harder in the future", neither of which feels like a solid choice considering the trade-offs. While we already seem to have known that staying at the edge of version numbers introduces massive churn and unneeded work, it seems like we're waking up to the realization that it is a security tradeoff as well. Sadly, not enough tooling seems to take this into account (yet?). |
| |
| ▲ | dap 8 hours ago | parent | next [-] | | At my last job, we only updated dependencies when there was a compelling reason. It was awful. What would happen from time to time was that an important reason did come up, but the team was now many releases behind. Whoever was unlucky enough to sign up for the project that needed the updated dependency now had to do all those updates of the dependency, including figuring out how they affected a bunch of software that they weren't otherwise going to work on. (e.g., for one code path, I need a bugfix that was shipped three years ago, but pulling that into my component affects many other code paths.) They now had to go figure out what would break, figure out how to test it, etc. Besides being awful for them, it creates bad incentives (don't sign up for those projects; put in hacks to avoid having to do the update), and it's also just plain bad for the business because it means almost any project, however simple it seems, might wind up running into this pit. I now think of it this way: either you're on the dependency's release train or you jump off. If you're on the train, you may as well stay pretty up to date. It doesn't need to be every release the minute it comes out, but nor should it be "I'll skip months of work and several major releases until something important comes out". So if you decline to update to a particular release, you've got to ask: am I jumping off forever, or am I just deferring work? If you think you're just deferring the decision until you know if there's a release worth updating to, you're really rolling the dice. (edit: The above experience was in Node.js. Every change in a dynamically typed language introduces a lot of risk. I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update. So although there's a lot of noise with regular dependency updates, it's not actually that much work.) | | |
| ▲ | lock1 7 hours ago | parent | next [-] | | I think it also depends on the community. Last time I touched Node.js and JavaScript-related things, every time I tried to update something it was practically guaranteed something would explode for no reason. By contrast, my recent legacy Java project migration from JDK 8 -> 21, plus a ton of dependency upgrades, has been a pretty smooth experience so far. | | |
| ▲ | Terr_ 5 hours ago | parent [-] | | Yeah, along with any community's attitudes to risk and quality, there is also a varying, er, chronological component. I'd prefer to upgrade around the time most of the nasty surprises have already been discovered by somebody else, preferably with workarounds developed. At the same time, you don't want to be so far back that upgrading uncovers novel migration problems, or issues that nobody else cares about anymore. |
| |
| ▲ | JoshTriplett 7 hours ago | parent | prev | next [-] | | > I'm now on a team that uses Rust, where knowing that the program compiles and passes all tests gives us a lot of confidence in the update. That's been my experience as well. In addition, the ecosystem largely holds to semver, which means a non-major upgrade tends to be painless, and conversely, if there's a major upgrade, you know not to put it off for too long because it'll involve some degree of migration. | |
| ▲ | coredog64 6 hours ago | parent | prev | next [-] | | My current employer publishes "staleness" metrics at the project level. It's imperfect because it weights all the dependencies the same, but it's better than nothing. | |
| ▲ | ozim 3 hours ago | parent | prev [-] | | Update at least quarterly so you don't have them go stale and become super hard to update |
| |
| ▲ | jerf 8 hours ago | parent | prev | next [-] | | I fought off the local imposition of Dependabot by executive fiat about a year ago by pointing out that it maximizes vulnerability to supply chain attacks if blindly followed or used as a metric excessively stupidly. Maximizing vulnerabilities was not the goal, after all. You do not want to harass teams with the fact that DeeplyNestedDepen just went from 1.1.54-rc2 to 1.1.54-rc3, because the worst case is that they upgrade just to shut the bot up. I think I wouldn't object to "Dependabot on a 2-week delay" as something that at least flags. However, working in Go more than anything else, it was often the case even so that dependency alerts were just an annoyance if they weren't tied to a security issue or something. Dynamic languages and static languages do not have the same risk profiles at all. The idea some people have that all dependencies are super vital to update all the time, and the casual expectation of a constant stream of vital security updates, is not a general characteristic of programming; it is a specific characteristic not just of certain languages but arguably of the community attached to those languages. (What we really need is capabilities, even at a very gross level, so we could all notice that the supposed vector math library at version 1.43.2 suddenly wants to add network access, disk reading, command execution, and cryptography to the set of things it does, which would raise all sorts of eyebrows immediately, perhaps even in an automated fashion. But that's a separate discussion.) | | |
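The capability-diff idea in that parenthetical could be mechanized even at a crude level. A toy sketch (the capability names and the idea of a per-version capability manifest are invented for illustration; no mainstream package manager ships such manifests today):

```python
# Capabilities that should raise eyebrows when a library suddenly adds them.
RISKY = {"net", "fs-read", "exec", "crypto"}

def capability_diff(old: set[str], new: set[str]) -> set[str]:
    """Capabilities the new version requests that the old one did not."""
    return new - old

def flag_update(old: set[str], new: set[str]) -> list[str]:
    """Human-readable warnings for newly requested risky capabilities."""
    return sorted(f"now wants '{cap}'"
                  for cap in capability_diff(old, new) & RISKY)

# A "vector math" library that suddenly wants the network and a shell:
old_caps = {"pure-compute"}
new_caps = {"pure-compute", "net", "exec"}
print(flag_update(old_caps, new_caps))  # ["now wants 'exec'", "now wants 'net'"]
```

The point is that the check is trivial once the manifests exist; the hard part is getting ecosystems to declare and enforce capabilities at all.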
| ▲ | catlifeonmars 5 hours ago | parent | next [-] | | I use a Dependabot config that buckets security updates into a separate pull request from other updates. The non-security update PRs are just informational (you can disable them, but I choose to leave them on), and you can actually spend the time to vet the security updates. | |
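For reference, the bucketing described above can be expressed with Dependabot's `groups` key, roughly like the `dependabot.yml` below (check GitHub's current Dependabot configuration docs for exact syntax; `applies-to` is the relevant setting as I recall it):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      security:
        applies-to: security-updates
        patterns: ["*"]
      routine:
        applies-to: version-updates
        patterns: ["*"]
```

With this shape, security fixes land in their own PR that can be vetted on its own timeline, while routine bumps arrive in a separate, lower-priority PR.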
| ▲ | skybrian 8 hours ago | parent | prev | next [-] | | It seems like some of the arguments in favor of doing frequent releases apply at least a little bit for dependency updates? Doing updates on a regular basis (weekly to monthly) seems like a good idea so you don't forget how to do them and the work doesn't pile up. Also, it's easier to debug a problem when there are fewer changes at once. But they could be rescheduled depending on what else is going on. | |
| ▲ | dudeinjapan 3 hours ago | parent | prev [-] | | Dependabot only suggests upgrades when there are CVEs, and even then it just alerts and raises PRs; it doesn't force anything on you. Our team sees it as a convenience, not a draconian measure. |
| |
| ▲ | tracnar 8 hours ago | parent | prev | next [-] | | You could use this funky tool from oss-rebuild which proxies registries so they return the state they were at a past date: https://github.com/google/oss-rebuild/tree/main/cmd/timewarp | |
| ▲ | pas 8 hours ago | parent | prev | next [-] | | > "it'll only get harder in the future" that's generally true, no? of course waiting a few days/weeks should be the minimum, unless there's a CVE (or equivalent) that applies | |
| ▲ | hypeatei 9 hours ago | parent | prev | next [-] | | > Sadly, not enough tooling seems to take this into account Most tooling (e.g. Dependabot) allows you to set an interval between version checks. What more could be done on that front exactly? Devs can already choose to check less frequently. | | |
| ▲ | mirashii 9 hours ago | parent [-] | | The check frequency isn't the problem, it's the latency between release and update. If a package was released 5 minutes before dependabot runs and you still update to it, your lower frequency hasn't really done anything. | | |
| |
| ▲ | 8 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | stefan_ 6 hours ago | parent | prev [-] | | That's because the security industry has been captured by useless middle-manager types who can see that "one dependency has a critical vulnerability", but could never in their life scrounge together the clue to analyze the impact of that vulnerability correctly. All they know is the checklist fails, and the checklist can not fail. (Literally, at one place we built a SPA frontend that was embedded in the device firmware as a static bundle, served to the client, and would then talk to a small API server. And because these NodeJS types like to have libraries reused for server and frontend, we would get endless "vulnerability reports" - but all of this stuff only ever ran in the client's browser!) |
|
|
| ▲ | bumblehean 6 hours ago | parent | prev | next [-] |
>The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affected. Only then do you need to update that specific dependency right away. The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software. Where I work, if our infosec team's scanner detects a critical vulnerability in any software we use, we have 7 days to update it. If we miss that window, we're "out of compliance", which triggers a whole process that no one wants to deal with. The path of least resistance is to update everything as soon as updates are available. Consequences be damned. |
| |
| ▲ | tetha 4 hours ago | parent | next [-] | | I really dislike that approach. We're by now evaluating high-severity CVEs ASAP in a group to figure out if we are affected, and if mitigations apply. Then there is the choice of crash-patching and/or mitigating in parallel, updating fast, or just prioritizing that update more. We had like 1 or 2 crash-patches in the past - Log4Shell was one of them, and blocking an API no matter what in a component was another one. In a lot of other cases, you could easily wait a week or two for directly customer facing things. | |
| ▲ | BrenBarn 6 hours ago | parent | prev [-] | | > The practical problem with this is that many large organizations have a security/infosec team that mandates a "zero CVE" posture for all software. The solution is to fire those teams. | | |
| ▲ | acdha 4 hours ago | parent | next [-] | | This isn’t a serious response. Even if you had the clout to do that, you’d then own having to deal with the underlying pressure which lead them to require that in the first place. It’s rare that this is someone waking up in the morning and deciding to be insufferable, although you can’t rule that out in infosec, but they’re usually responding to requirements added by customers, auditors needed to get some kind of compliance status, etc. What you should do instead is talk with them about SLAs and validation. For example, commit to patching CRITICAL within x days, HIGH with y, etc. but also have a process where those can be cancelled if the bug can be shown not to be exploitable in your environment. Your CISO should be talking about the risk of supply chain attacks and outages caused by rushed updates, too, since the latter are pretty common. | |
| ▲ | IcyWindows 4 hours ago | parent | prev | next [-] | | Aren't some of these government regulations for cloud, etc.? | |
| ▲ | bumblehean 5 hours ago | parent | prev [-] | | Sure I'll go suggest that to my C-suite lol | | |
|
|
|
| ▲ | weinzierl 3 hours ago | parent | prev | next [-] |
"The thing to do is to monitor your dependencies and their published vulnerabilities, and for critical vulnerabilities to assess whether your product is affected." Yes. "Only then do you need to update that specific dependency right away." Big no. If you do that, it is guaranteed that one day you will miss a vulnerability that hurts you. To frame it differently: what you propose sounds good in theory, but in practice the effort to evaluate vulnerabilities against your product will be higher than the effort to update plus taking appropriate measures against supply chain attacks. |
|
| ▲ | silvestrov 9 hours ago | parent | prev | next [-] |
| I think the main question is: does your app get untrusted input (i.e. input controlled by other people)? Browsers get a lot of unknown input, so they have to update often. A weather app is likely to only get input from one specific site (controlled by the app developers), so it should be relatively safe. |
|
| ▲ | ozim 3 hours ago | parent | prev | next [-] |
| Fun part is that people are worried about 0-days, but in reality most problems come from unpatched vulns that are 300 or 600 days old. |
|
| ▲ | nrhrjrjrjtntbt 3 hours ago | parent | prev | next [-] |
| I agree. Pick your poison. My poison is waiting before upgrades and assessing zero-days case by case. |
|
| ▲ | jerf 8 hours ago | parent | prev | next [-] |
| Also, if you are updating "right away" it is presumably because of some specific vulnerability (or set of them). But if you're in an "update right now" mode, you have the most eyes on the source code in question at that point in time, and it's probably a relatively small patch for the targeted problem. Such a patch is the absolute worst time for an attacker to try to sneak anything into a release - the exact and complete opposite of the conditions they are looking for. Nobody is proposing a system that utterly and completely locks you out of all updates if they haven't aged enough. There is always going to be an override switch. |
|
| ▲ | justsomehnguy 7 hours ago | parent | prev | next [-] |
| > People in this thread are worried that they are significantly vulnerable if they don't update right away Most of them assume that, because they are working on some publicly accessible website, 99% of the people and orgs in the world are running nothing but publicly accessible websites. |
|
| ▲ | duped 9 hours ago | parent | prev [-] |
| A million times this. You update a dependency when there are bug fixes or features that you need (and this includes patching vulnerabilities!). Those situations are rare. Otherwise you're just introducing risk into your system - and not that you're going to be caught in some dragnet supply chain attack, but that some dependency broke something you relied on by accident. Dependencies are good. Churn is bad. |