| ▲ | montroser 7 hours ago |
| Yes, you are responsible for all the code you ship to your users. Not pinning dependencies is asking for trouble. It is literally "download random code from the Internet and hope for the best." |
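For what it's worth, a minimal sketch of what pinning can look like in an npm project (package names and versions here are purely illustrative): exact versions in package.json, no ^ or ~ ranges, plus a committed lockfile.

    {
      "dependencies": {
        "lodash": "4.17.21",
        "express": "4.18.2"
      }
    }

Then install in CI with npm ci, which installs exactly what the committed package-lock.json says and fails if it disagrees with package.json.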
|
| ▲ | Scramblejams 6 hours ago | parent | next [-] |
| Pinning dependencies also means you're missing any security fixes that come in after your pinned versions. That's asking for trouble too, so you need a mechanism by which you become aware of these fixes and either backport them or upgrade to versions containing them. |
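One lightweight mechanism, assuming an npm-style project (the severity threshold is just an example), is to check the committed lockfile against published advisories on every build:

    # exits non-zero if any dependency in the lockfile has a known
    # advisory at or above the chosen severity
    npm audit --audit-level=high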
| |
| ▲ | yen223 3 hours ago | parent | next [-] | | Things like dependabot or renovate solve the problem of knowing when security updates are available, so you can have your cake and eat it too. | |
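As a rough sketch, a minimal Dependabot config for a GitHub-hosted npm project (the ecosystem and schedule are just examples) that opens PRs when updated versions of pinned dependencies are published:

    # .github/dependabot.yml
    version: 2
    updates:
      - package-ecosystem: "npm"   # watch package.json / the lockfile
        directory: "/"             # location of the manifest
        schedule:
          interval: "weekly"       # how often to check for updates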
| ▲ | kjkjadksj 6 hours ago | parent | prev [-] | | Fundamentally, no code is ever secure. | | |
| ▲ | apstls 6 hours ago | parent | next [-] | | This statement is one of those useless exercises in pedantry like when people say "well technically coffee is a drug too, so..." Code with publicly known weaknesses poses exponentially more danger than code with unknown weaknesses. It's like telling sysadmins not to waste time installing security patches because there are likely still vulnerabilities in the application. Great way to get n-day'd into a ransomware payment. | | |
| ▲ | nightpool 5 hours ago | parent [-] | | Have you spent time reviewing the security patches for any nontrivial application recently? 90% of them are worthless, and the 10% that are actually useful are pretty easy to spot. It's not as big a deal as people would have you think. |
| |
| ▲ | da_chicken 6 hours ago | parent | prev [-] | | That's why I run Windows 7. It's going to be insecure anyway, so what's the big deal? |
|
|
|
| ▲ | lelandfe 6 hours ago | parent | prev [-] |
| Pinned dependencies usually have their own dependencies, so you are pretty much always downloading random code and hoping. I mean, jeez, how much code comes along for the ride with Electron... |
| |
| ▲ | cosmic_cheese 6 hours ago | parent | next [-] | | The real answer is to minimize dependencies (and subdependencies) to the greatest extent practical. In some cases you can get by with surprisingly few without too much pain (and in the long run, maybe less pain than if you'd pulled in more). | | |
| ▲ | Scramblejams 6 hours ago | parent [-] | | Yep, and for the rest I've gotten a lot of mileage, when shipping server apps, by deploying on Debian or Ubuntu* and trying to limit my dependencies to those shipped by the distro (not snap). The distro security team worries about keeping my dependencies patched and I'm not forced to take new versions until I have to upgrade to the next OS version, which could be quite a long time. It's a great way to keep lifecycle costs down and devops QoL up, especially for smaller shops. *Insert favorite distro here that backports security fixes to stable package versions for a long period of time. |
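A rough sketch of that setup (package names are illustrative, assuming a Python app on Debian): runtime libraries come from apt rather than pip, and unattended-upgrades applies the distro's security fixes automatically.

    # pull runtime dependencies from the distro archive instead of pip
    apt-get install --no-install-recommends python3-requests python3-psycopg2
    # have the distro apply security updates on its own
    apt-get install unattended-upgrades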
| |
| ▲ | chrisweekly 4 hours ago | parent | prev [-] | | No. "Always downloading random code and hoping" is not the only option. Even with the supply-chain shitshow that the public npmjs registry has become, using pnpm and a private registry makes it possible to leverage a frozen lockfile that captures the entire dependency graph and supports reproducible builds free of known vulnerabilities (sketch below). EDIT to add:
Of course, reaching a state where the whole graph is free of CVEs is a fleeting state of affairs. Staying reasonably up to date and using only scanned dependencies is an ongoing process that takes more effort and attention to detail than many projects are willing or able to apply, but it is possible. |
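A minimal sketch of that setup, assuming a hypothetical internal registry URL: point .npmrc at the private registry and have CI refuse to install anything that isn't already captured in the lockfile.

    # .npmrc — resolve all packages through the vetted internal registry
    registry=https://registry.internal.example.com/

    # CI install step — fails instead of updating pnpm-lock.yaml if it
    # no longer matches package.json
    pnpm install --frozen-lockfile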
|