| ▲ | bunderbunder 4 days ago |
| Alternatively, I've long wondered whether automatic package management may have been a mistake. Its primary purpose seems to be enabling this kind of proliferation of micro-dependencies by effectively sweeping the management of these sprawling dependency graphs under the carpet. But the upshot is that most changes to your dependency graph, and by extension your primary vector for supply chain attacks, become something you're no longer really looking at. By contrast, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with managing dependencies manually. But it's honestly not that much. And in some ways it becomes a boon for maintainability, because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and the toil associated with responding to them. |
| ▲ | JoshTriplett 4 days ago | parent | next [-] |
| Manual dependency management without a package manager does not lead people to do more auditing. And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc. |
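(One thing the standard format buys you concretely: lock files are machine-readable, so a first-pass inventory for an audit can be a few lines of code. A sketch in TypeScript against the real npm v2/v3 package-lock.json layout; the script itself is just illustrative:)

    // List every resolved package from an npm v2/v3 package-lock.json.
    // "packages" is keyed by install path; "" is the root project itself.
    import { readFileSync } from "fs";

    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    const packages: Record<string, { version?: string; resolved?: string }> =
      lock.packages ?? {};

    for (const [path, meta] of Object.entries(packages)) {
      if (path === "") continue; // skip the root entry
      console.log(`${path}\t${meta.version ?? "?"}\t${meta.resolved ?? ""}`);
    }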
| ▲ | Groxx 4 days ago | parent | next [-] | | yeah, just look at the state of many C projects. it's rather clearly worse in practice, in aggregate. should it be higher friction than npm? probably, yes. a permissions system would inherently add some ("leftpad includes 27 libraries which require permissions 'internet' and 'sudo', add? [y/N]"), and I think that would help a bit. but I'm personally more optimistic about structured code and review signing, e.g. cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative Linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment; everyone has to do it all themselves, every time, and either not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough. | |
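(To make the permission idea concrete: nothing like this exists in npm today, and Deno's runtime flags such as --allow-net are the closest real-world analogue. A minimal sketch in TypeScript, with all type and field names invented purely for illustration:)

    // Hypothetical sketch -- npm has no permission system; these types
    // and fields are invented to illustrate the idea above.
    type Permission = "internet" | "filesystem" | "env" | "subprocess";

    interface PackagePermissions {
      name: string;
      requests: Permission[];   // what the package itself needs
      transitive: Permission[]; // union of what its dependency tree needs
    }

    // The installer could refuse, or prompt, whenever an install would
    // widen the permission set, e.g.:
    //   "leftpad pulls in 27 packages requesting internet, subprocess. add? [y/N]"
    const candidate: PackagePermissions = {
      name: "leftpad",
      requests: [],
      transitive: ["internet", "subprocess"],
    };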
| ▲ | bunderbunder 4 days ago | parent [-] | | C has a lot of characteristics, beyond the simple lack of a standard automatic package manager, that complicate the situation. The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc. | |
| ▲ | Groxx 4 days ago | parent [-] | | when I was doing C# pre-NuGet, we had an utterly absurd number of libraries that nobody had checked and nobody ever upgraded. so... yeah, I think it applies there too, at least in my experience. I do agree that C is an especially bad case for additional reasons, though. | |
| ▲ | bunderbunder 3 days ago | parent [-] | | Gotcha. When I was, we actively curated our dependencies, and maintaining them was a regularly scheduled task that one team member in particular was responsible for making sure got done. | |
| ▲ | Groxx 3 days ago | parent [-] | | most teams I've been around have zero or one person who handles that (because they're passionate; this is usually me) - tbh I think that's the majority case. exceptions totally exist, I've seen them too. I just don't think they're enough to move the median away from "total chaotic garbage", regardless of the system. | |
| ▲ | bunderbunder 3 days ago | parent [-] | | This is why I secretly hate the term software engineer. "Software tinker" would be more appropriate. | | |
| ▲ | Groxx 2 days ago | parent [-] | | ha, I like that one - it evokes the right mental image. |
| ▲ | mikestorrent 4 days ago | parent | prev [-] | | Well, consider that a lot of the functions that were exploited are simple things. We use a library to spare ourselves the drudgery of rewriting them, but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control, vs. bringing in an external dependency that puts me on a permanent upgrade treadmill and opens me up to supply chain attacks? Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire. | |
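(For scale: the kind of helper being described really is tiny. A sketch in TypeScript using standard ANSI escape codes, no dependency involved; whether owning it yourself is wise is exactly what's debated below:)

    // Self-contained terminal colours via standard ANSI SGR codes --
    // the entire "library" is a lookup table and one function.
    const CODES = { red: 31, green: 32, yellow: 33, blue: 34 } as const;

    function colour(name: keyof typeof CODES, text: string): string {
      return `\x1b[${CODES[name]}m${text}\x1b[0m`; // set colour, then reset
    }

    console.log(colour("red", "error:") + " something went wrong");

    // And leftpad itself has been built into JavaScript since ES2017:
    console.log("5".padStart(3, "0")); // "005"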
| ▲ | JoshTriplett 3 days ago | parent [-] | | > but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control Before AI code generation, we would have called that copy-and-paste, and a code smell compared to proper reuse of a library. It's not any better with AI. That's still code you'd have to maintain and debug. And it duplicates the effort of all the other code doing the same thing: not de-duplicated across the numerous libraries in a dependency tree or on a system, not benefiting from multiple people collaborating on a common API, and not benefiting from skill transfer across projects... | |
| ▲ | mikestorrent 15 hours ago | parent [-] | | > a code smell Smells are changing, friend. Now, when I see a program with 20000 library dependencies that I have to feed into a SAST and SCA system and continually point-version-bump and rebuild, it smells a hell of a lot worse to me than something self-contained. At this point, I feel like I can protect the latter from being exploited better than the former. | | |
| ▲ | JoshTriplett 7 hours ago | parent [-] | | > At this point, I feel like I can protect the latter from being exploited better than the former. I expect that your future CVEs will say otherwise. People outside your organization have seen those library dependencies, and can update them when they discover bugs or security issues, and you can automatically audit a codebase to make sure it's using a secure version of each dependency. Bespoke AI-generated code will have bespoke bugs and bespoke security issues. |
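(To make "automatically audit" concrete: checks like the sketch below only exist because a dependency is shared and publicly catalogued. This queries the real OSV.dev vulnerability API; error handling is omitted for brevity:)

    // Query the OSV database (api.osv.dev) for known advisories against
    // a specific npm package version -- possible only for shared,
    // publicly tracked dependencies, not bespoke in-house code.
    async function knownVulns(name: string, version: string) {
      const res = await fetch("https://api.osv.dev/v1/query", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ version, package: { name, ecosystem: "npm" } }),
      });
      const { vulns } = await res.json();
      return vulns ?? []; // advisory ids (GHSA-..., CVE-...) plus details
    }

    knownVulns("lodash", "4.17.20").then(v =>
      console.log(`${v.length} known advisories`));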
| ▲ | ryandrake 4 days ago | parent | prev [-] |
| Unpopular opinion these days, but: it should be painful to pull in a dependency. It should require work. It should require scrutiny, and a deep understanding of the code you're pulling in. Adding a dependency is an important decision with far-reaching effects on your code: performance, security, privacy, quality/defects. You shouldn't be able to do it casually with a single command. |
| ▲ | heisenbit 4 days ago | parent | next [-] | | For better or worse, it is often less work to add a dependency than to maintain it over its lifetime. And tooling that eases maintenance also eases the addition of new dependencies. | |
| ▲ | skydhash 4 days ago | parent | prev [-] | | I wouldn't go as far as "painful". The main issue is transitive dependencies; the tree can be several layers deep. In the C world, anything that is not a direct dependency is often a very stable library that can be brought in as a peer dependency. Breaking changes happen less often, and you can resolve the tree manually. In npm, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It's a severe lack of discipline. |