OskarS 6 days ago

I have a question: when I’ve seen people discussing this setting, people talk about using like “3 days” or “7 days” as the timeout, which seems insanely short to me for production use. As a C++ developer, I would be hesitant to use any dependency in the first six months after release in production, unless there’s some critical CVE or something (then again, we make client side applications with essentially no networking, so security isn’t as critical for us; stability is much more important).

Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?

dtech 6 days ago | parent | next [-]

Waiting 6 months to upgrade a dependency seems crazy; that's definitely not a thing in other languages, or maybe it's specific to some companies. (It might be due to prioritization, but not due to some rule of thumb.)

In the JVM ecosystem it's quite common to have Dependabot or Renovate automatically create PRs for dependency upgrades within a few hours of a release. If it's manual, it's highly irregular and depends on the company.

slowroll22 6 days ago | parent [-]

At a previous place I worked, 6 months was the minimum on some of our products, and explicitly a year for a few of the dependencies.

The main deciding factors were the process and the frequency with which a product was released or upgraded, by us or our customers.

The on-prem installs had the longest delay, because once a release was out there it was harder for us to address issues. Some customers also had a change freeze in place once things had been approved, which was a pain to deal with if we needed to patch something for them.

Products that had a shorter release or update cycle (e.g. the mobile app) had a shorter delay (but still a delay) because any issue could be addressed faster.

The services that were hosted by us had the shortest delay on the order of days to weeks.

There were obviously exceptions in both directions but we tried to avoid them.

Prioritisation wasn't really an issue: a lot of dependencies were bumped on internal builds, so we had more time to test and verify before committing once a version met our stability rules.

Other factors that influenced us:

- Blast radius: a buggy dependency in our desktop/server applications had more chance to cause damage than in our hosted web application, so those dependencies rolled a little slower.

- Language (more like the ergonomics of the language): updating our C++ deps was a lot more cumbersome than our JS deps.

esafak 5 days ago | parent [-]

As long as you can quickly upgrade a package when there's a security patch, you're good. You make it sound like that's not the case, though.

slowroll22 5 days ago | parent [-]

It was definitely possible; as mentioned, there were some exceptions (such as cases where we did need to roll out a version with dependencies bumped or with our own critical fixes).

The harder part, as is often the case, wasn't technical, but convincing customers to take the new version and getting time with their IT teams to manage the rollout. It got easier over time, but the bureaucracy at some of the clients was slow to change, so I suspect they still face some issues.

progx 6 days ago | parent | prev | next [-]

Yes, but this is not JS-specific; in PHP (Composer) it's the same.

Normally old major or minor versions of a package don't get an update, only the latest does.

E.g. 4.1.47 (no update), 4.2.1 (gets the update).

So if the problem is in 4.1, you must "upgrade" to 4.2.

With "perfect" semver, this shouldn't be a problem, cause 4.2 only add new features... but... back to reality, the world is not perfect.

diegof79 6 days ago | parent | prev | next [-]

Transitive dependencies are the main issue.

Suppose you have a package P1 with version 1.0.0 that depends on D1 with version ^1.0.0. The “^” indicates a version range. Without going into semver details, it lets D1 update automatically for patch fixes and non-breaking feature additions.

In your project, everything looks fine as P1 is pinned to 1.0.0. Then, you install P2 that also uses D1. A new patch version of D1 (1.0.1) was released. The package manager automatically upgrades to 1.0.1 because it matches the expression ^1.0.0, as specified by P1 and P2 authors.

This can lead to surprises. JS package managers use lock files to prevent changes during installs. However, they still change the lock file for additions or manual version upgrades, resolving to newer minor versions if the version range matches. This is often desirable for bug fixes and security updates, but it also opens the door to this type of attack.
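
A rough sketch of that resolution step with the npm semver package (P1, P2, and D1 as in the example above):

```typescript
import semver from "semver";

// D1's published versions before and after the 1.0.1 release.
// Both P1 and P2 declare "D1": "^1.0.0" in their dependencies.
const range = "^1.0.0";

semver.maxSatisfying(["1.0.0"], range);          // "1.0.0" at first install
semver.maxSatisfying(["1.0.0", "1.0.1"], range); // "1.0.1" on the next resolve
```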

To answer your question: yes, the JS ecosystem moves faster, and package managers make it easy to create small libraries. This results in many “small” libraries as transitive dependencies. Rewriting these libraries with your own code works for simple cases like left-pad, but you can’t rewrite a webserver or a build tool that also has many small transitive dependencies. For example, the chalk library is used by many CLI tools to show color output.

ozim 6 days ago | parent | prev | next [-]

NPM packages follow semantic versioning, so minor versions should be fine to auto-update. (There is still the issue that what's minor for the package maintainer might not be minor for you, but let's stick to an ideal world here.)

I don't think people update major versions every month; it's really more like every 6 months or once a year.

I guess the problem might be that people think auto-updating minor versions in the CI/CD pipeline will keep them more secure, since bug fixes should land in minor versions, but in reality we see that's not the case, and attackers use it to spread malware.
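
For reference, the semver bump classes in question, sketched with the npm semver package (the version numbers are made up):

```typescript
import semver from "semver";

// Classify an upgrade by its semver bump type.
semver.diff("4.1.2", "4.1.3"); // "patch": bug fixes only, in theory
semver.diff("4.1.2", "4.2.0"); // "minor": new features, backwards compatible
semver.diff("4.1.2", "5.0.0"); // "major": breaking changes, manual review
```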

otterley 5 days ago | parent [-]

> so minor versions should be fine to auto update

The problem is that "should" assumes point releases never introduce regressions (whether in security, performance, or correctness). Unfortunately, history has shown that regressions can and do happen. The best practice for release engineering (CI/CD, if you will) is to assume the worst, test thoroughly, and release incrementally (including bake time).

Delaying updates isn't just a backstop against security vulnerabilities; it's useful for letting the dust settle after an update of any kind that could adversely impact the application. The theory is that someone will find the problem before you do, report it, and a fix will be issued.
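
For example, here's a minimal sketch of that bake-time idea against the npm registry (the registry's `time` field mapping versions to publish timestamps is real metadata; the 7-day default and the function name are just illustration):

```typescript
// Only consider versions that have been on the registry for at least
// minAgeDays; newer ones haven't had time to "settle" yet.
async function bakedVersions(pkg: string, minAgeDays = 7): Promise<string[]> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  const meta = (await res.json()) as { time: Record<string, string> };
  const cutoff = Date.now() - minAgeDays * 24 * 60 * 60 * 1000;
  return Object.entries(meta.time)
    .filter(([version]) => version !== "created" && version !== "modified")
    .filter(([, published]) => Date.parse(published) <= cutoff)
    .map(([version]) => version);
}
```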

ozim 5 days ago | parent [-]

Regressions are irrelevant in this context; you can accept regressions as something you'll deal with if and when they happen.

By simply installing updates automatically, you get pwned by bad guys; someone taking over your CI/CD server or infrastructure is not acceptable.

otterley 5 days ago | parent [-]

That makes the advice all the more important, rather than making it "irrelevant." My point was that people mistakenly believe point releases are safe to apply automatically. They're not, and not just because of security.

creesch 6 days ago | parent | prev | next [-]

> Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?

Really depends on the context and where the code is being used. As others have pointed out, most JS packages use semantic versioning. For patch releases (the last of the three numbers), at least for code exposed to the outside world, you generally want to apply those rather quickly, as they contain hotfixes, including ones fixing CVEs.

For the major and minor releases it really depends on what sort of dependencies you are using and how stable they are.

The issue isn't really unique to the JavaScript ecosystem either. A bigger Java project (certainly one with a lot of Spring-related dependencies) will also see a lot of movement.

That isn't to say that the tropes about the JavaScript ecosystem being extremely volatile are entirely untrue. But in this case I do think the context is the bigger difference.

> then again, we make client side applications with essentially no networking, so security isn’t as critical for us, stability is much more important)

By its nature, most JavaScript will be network connected in some fashion in environments with plenty of bad actors.

pandemic_region 6 days ago | parent | prev | next [-]

> Does the JS ecosystem really move so fast that you can’t wait a month or two before updating your packages?

In 2 months, a typical js framework goes through the full Gartner Hype Cycle and moves to being unmaintained with an archived git repo and dozens of virus infected forks with similar names.

patwolf 6 days ago | parent | prev | next [-]

It's common to have npm auditing enabled, which means your CI/CD will force you to update to a brand new version of a package because a security vulnerability was reported in an older one.

I've also had cases where I've found a bug in a package, submitted a bug report or PR, and then immediately pulled in the new version as soon as it was fixed. Things move fast in the JavaScript/npm/GitHub ecosystem.

codemonkey-zeta 6 days ago | parent | prev [-]

I think the surface area for bugs in a C++ dependency is way bigger than a JS one. Pulling in a new node module is not going to segfault my app, for example.