| ▲ | f311a 6 days ago |
| People really need to start thinking twice when adding a new dependency.
So many supply chain attacks this year. This week, I needed to add a progress bar with 8 stats counters to my Go project. I looked at the libraries, and they all had 3000+ lines of code. I asked an LLM to write me a simple progress report tracking UI, and it was less than 150 lines. It works as expected, no dependencies needed. It's extremely simple, and everyone can understand the code. It just clears the terminal output and redraws it every second. It is also thread-safe. Took me 25 minutes to integrate it and review the code. If you don't need a complex stats counter, a simple progress bar is like 30 lines of code as well. This is the way to go for me now when considering another dependency. We don't have the resources to audit every package update. |
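For a sense of what that looks like, here is a minimal sketch of such a tracker in Go (illustrative only, not the code from the comment above; the type and field names are made up): a mutex-guarded set of named counters plus a goroutine that redraws them once per second using ANSI escape codes.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // Tracker keeps a few named counters and redraws them once per second.
    // Inc is safe to call from any goroutine.
    type Tracker struct {
        mu       sync.Mutex
        order    []string
        counters map[string]int64
    }

    func NewTracker(names ...string) *Tracker {
        t := &Tracker{order: names, counters: make(map[string]int64)}
        go t.loop()
        return t
    }

    func (t *Tracker) Inc(name string, delta int64) {
        t.mu.Lock()
        t.counters[name] += delta
        t.mu.Unlock()
    }

    func (t *Tracker) loop() {
        first := true
        for range time.Tick(time.Second) {
            t.mu.Lock()
            if !first {
                // Move the cursor back up over the lines drawn last time.
                fmt.Printf("\033[%dA", len(t.order))
            }
            first = false
            for _, name := range t.order {
                // Clear the whole line, then rewrite "name  value".
                fmt.Printf("\033[2K%-12s %d\n", name, t.counters[name])
            }
            t.mu.Unlock()
        }
    }

    func main() {
        t := NewTracker("processed", "errors")
        for i := 0; i < 20; i++ {
            t.Inc("processed", 1)
            time.Sleep(300 * time.Millisecond)
        }
    }

Callers just Inc() from worker goroutines; the redraw loop owns the terminal output, so there is nothing else to coordinate.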
|
| ▲ | coldpie 6 days ago | parent | next [-] |
| > People really need to start thinking twice when adding a new dependency. So many supply chain attacks this year. I was really nervous when "language package managers" started to catch on. I work in the systems programming world, not the web world, so for the past decade, I looked from a distance at stuff like pip and npm and whatever with kind of a questionable side-eye. But when I did a Rust project and saw how trivially easy it was to pull in dozens of completely un-reviewed dependencies from the Internet with Cargo via a single line in a config file, I knew we were in for a bad time. Sure enough. This is a bad direction, and we need to turn back now. (We won't. There is no such thing as computer security.) |
| |
| ▲ | skydhash 6 days ago | parent | next [-] | | The thing is, system-based package managers require discipline, especially from library authors. Even in the web world, it’s really distressing when you see a minor library is already on its 15th iteration in less than 5 years. I was trying to build just (the task runner) on Debian 12 and it was impossible. It kept complaining about the Rust version, then some library shenanigans. It is way easier to build Emacs and ffmpeg. | | |
| ▲ | ajross 5 days ago | parent [-] | | Indeed, it seems insane that we're pining for the days of autotools, configure scripts and the cleanly inspectable dependency structure. But... We absolutely are. | | |
| ▲ | kalaksi 5 days ago | parent [-] | | Disagree. And why wouldn't the dep structure be cleanly inspectable? | | |
| ▲ | ajross 4 days ago | parent [-] | | Decades ago, you'd type "configure" and be told you need to install libfoobar version 9, and you would, and it would work. Now you get npm whining at you about an unsatisfiable dependency cycle because some three-level-removed transitive dependency you've never heard of put a hard lock file in that references a version that got pulled for a security flaw. |
|
|
| |
| ▲ | jacobsenscott 5 days ago | parent | prev | next [-] | | Remember, the pre-package-manager days were ossified, archaic, insecure installations, because self-managing dependencies is hard and people didn't keep them up to date. You need to get your deps from somewhere, so in the pre-package-manager days you still just downloaded them from somewhere - a vendor's web site, or SourceForge, or whatever - and probably didn't audit them, and hoped they were secure. It's still work to keep things up to date and audited, but less work at least. | | |
| ▲ | rixed 4 days ago | parent [-] | | If most of your deps are coming from the distro, they are audited already. Typically, I never had to add more than a handful of extra deps in any project I ever worked on. That's a no-brainer to manage. |
| |
| ▲ | cedws 6 days ago | parent | prev | next [-] | | Rust makes me especially nervous due to the possibility of compile-time code execution. So a cargo build invocation is all it could take to own you. In Go there is no such possibility by design. | | |
| ▲ | exDM69 5 days ago | parent | next [-] | | The same applies to any Makefile, the Python script invoked by CMake or pretty much any other scriptable build system. They are all untrusted scripts you download from the internet and run on your computer. Rust build.rs is not really special in that regard. Maybe go build doesn't allow this but most other language ecosystems share the same weakness. | | |
| ▲ | pdw 5 days ago | parent | next [-] | | Right, people forget that the xz-utils backdoor happened to a very traditional no-dependencies C project. | | |
| ▲ | theteapot 5 days ago | parent [-] | | xz-utils has a ton of build dependencies. The backdoor implant exploited a flaw in an m4 macro build dep. |
| |
| ▲ | cedws 5 days ago | parent | prev | next [-] | | Yes but it's the fact that cargo can pull a massive unreviewed dependency tree and then immediately execute code from those dependencies that's the problem. If you have a repo with a Makefile you have the opportunity to review it first at least. | | |
| ▲ | duped 5 days ago | parent | next [-] | | Do you review the 10k+ lines of generated bash in ./configure, too? | | |
| ▲ | cozzyd 5 days ago | parent [-] | | ./configure shouldn't be in your repo unless it's handwritten | | |
| ▲ | johnisgood 5 days ago | parent [-] | | Pretty much. It is called "autotools" for a reason. Theoretically you should be able to generate the configuration scripts through "autoconf" (or autoreconf), or generate Makefile.in for configure from Makefile.am using "automake", etc. |
|
| |
| ▲ | pharrington 5 days ago | parent | prev [-] | | You are allowed to read Cargo.toml. | | |
| ▲ | cedws 5 days ago | parent [-] | | Cargo.toml does not contain the source code of dependencies nor transitive dependencies. | | |
| ▲ | magackame 5 days ago | parent [-] | | Welp, `cargo tree`, 100 nights and 100 coffees then it is | | |
| ▲ | marshray 5 days ago | parent | next [-] | | Yes! I sometimes set up a script that runs several variations on 'cargo tree', as well as collects various stats on output binary sizes, lines of code, licenses, etc. The output is written to a .txt file that gets checked-in. This allows me to easily observe the 'weight' of adding any new feature or dependency, and to keep an eye on the creep over time as the project evolves. | |
| ▲ | johnisgood 5 days ago | parent | prev [-] | | You will need something stronger than caffeine. |
|
|
|
| |
| ▲ | Bridged7756 5 days ago | parent | prev [-] | | In JavaScript just the npm install can fuck things up. Pre-install scripts can run malicious code. |
| |
| ▲ | pharrington 5 days ago | parent | prev | next [-] | | You're confusing compile-time with build-time. And build-time code execution absolutely exists in Go, because that's what a build tool is.
https://pkg.go.dev/cmd/go#hdr-Add_dependencies_to_current_mo... | | |
| ▲ | TheDong 5 days ago | parent | next [-] | | I think you're misunderstanding. "go build" of arbitrary attacker-controlled Go code will not lead to arbitrary code execution. If you do "git clone attacker-repo && cargo build", that executes "build.rs", which can exec any command. If you do "git clone attacker-repo && go build", that will not execute any attacker-controlled commands, and if it does, it'll get a CVE. You can see this in the following CVEs: https://pkg.go.dev/vuln/GO-2023-2095 https://pkg.go.dev/vuln/GO-2023-1842 In cargo, "cargo build" running arbitrary code is working as intended. In Go, either "go get" or "go build" running arbitrary code is considered a CVE. | |
| ▲ | thayne 5 days ago | parent [-] | | But `go generate` can, and that is required to build some go projects. It is also somewhat common for some complicated projects to require running a Makefile or similar in order to build, because of dependencies on things other than go code. | | |
| ▲ | TheDong 5 days ago | parent [-] | | The culture around "go generate" is that you check in any files it generates that are needed to build. In fact, for Go libraries you effectively have to, otherwise `go get` wouldn't work correctly (since there's no way to easily run `go generate` for a third-party library now that we're using go modules, not gopath). Have you actually seen this in the wild for any library you might `go get`? Can you link any examples? | |
| ▲ | thayne 5 days ago | parent [-] | | > Have you actually seen this in the wild for any library you might `go get`? Not for a library, but I have for an executable. Unfortunately, I don't remember what it was. |
|
|
| |
| ▲ | cedws 5 days ago | parent | prev [-] | | I don't really get what you're trying to say, go get does not execute arbitrary code. |
| |
| ▲ | fluoridation 5 days ago | parent | prev | next [-] | | Does it really matter, though? Presumably if you're building something, it's so you can run it. Who cares if the build script itself executes code, when you're going to execute the final product anyway? | |
| ▲ | johannes1234321 5 days ago | parent [-] | | With a scripting language it can matter: if I install some package, I can review it after the install before running, or run it in a container or some other somewhat protected environment. Whereas anything running during install can hide all traces. Of course this assumption breaks with native modules and with the sheer amount of code being pulled in indirectly ... |
| |
| ▲ | goku12 5 days ago | parent | prev [-] | | Build scripts aren't a big issue for Rust, because a simple mitigation is possible: do the build in a secure sandbox. Only execution and network access need to be allowed - preferably as separate steps. Network access can be restricted to only downloading dependencies. Everything else, including access to the main filesystem, should be denied. Malicious code at runtime is a different matter. Rust has a security workgroup and tools to address this. But it still worries me. |
| |
| ▲ | thayne 5 days ago | parent | prev | next [-] | | > This is a bad direction, and we need to turn back now. I don't deny there are some problems with package managers, but I also don't want to go back to a world where it is a huge pain to add any dependency, which leads to projects wasting effort on implementing things themselves, often in a buggy and/or inefficient way, and/or using huge libraries that try to do everything, but do nothing well. | | |
| ▲ | username223 5 days ago | parent [-] | | It's a tradeoff. When package users had to manually install dependencies, package developers had to reckon with that friction. Now we're living in a world where developers don't care about another 10^X dependencies, because the package manager will just run the scripts and install the files, and the users will accept it. |
| |
| ▲ | rootnod3 6 days ago | parent | prev | next [-] | | Fully agree. That is why I vendor all my dependencies. On the common lisp side a new tool emerged a while ago for that[1]. On top of that, I try to keep the dependencies to an absolute minimum. In my current project it's 15 dependencies, including the sub-dependencies. [1]: https://github.com/fosskers/vend | | |
| ▲ | coldpie 5 days ago | parent | next [-] | | I didn't vendor them, but I did do an eyeball scan of every package in the full tree for my project, primarily to gather their license requirements[1]. (This was surprisingly difficult for something that every project in theory must do to meet licensing requirements!) It amounted to approximately 50 dependencies pulled into the build, to create a single gstreamer plugin. Not a fan. [1] https://github.com/ValveSoftware/Proton/commit/f21922d970888... | |
| ▲ | skydhash 5 days ago | parent | prev [-] | | Vendoring is nice. Using the system version is nicer. If you can’t run on $current_debian, that’s very much a you problem. If postgres and nginx can do it, you can too. | | |
| ▲ | exDM69 5 days ago | parent | next [-] | | The system package manager and the language package/dependency managers do a very different task. The distro package manager delivers applications (like Firefox) and a coherent set of libraries needed to run those applications. Most distro package managers (except Nix and its kin) don't allow you to install multiple versions of a library, have libs with different compile time options enabled (or they need separate packages for that). Once you need a different version of some library than, say, Firefox does, you're out of luck. A language package manager by contrast delivers your dependency graph, pinned to certain versions you control, to build your application. It can install many different versions of a lib, possibly even link them in the same application. | | |
| ▲ | skydhash 5 days ago | parent [-] | | But I don’t really want your version of the application, I want the one that is aligned to my system. If some feature is really critical to the application, you can detect them at runtime and bailout (in C at least). Most developers are too aggressive on version pinning. > Most distro package managers (except Nix and its kin) don't allow you to install multiple versions of a library They do, but most distro only supports one or two versions in the official repos. | | |
| ▲ | rcxdude 5 days ago | parent [-] | | Maybe you want that, but I generally want the version of the application that the devs have tested the most. I've dealt with many issues due to slight differences between dependency versions, and I'd rather not provoke them. (That said, I do like debian for boring infrastructure, because they can keep things patched without changing things, but for complex desktop apps, nah, give me the upstream versions please. And for things I'm developing myself the distro is but a vehicle for a static binary or self-contained folder) |
|
| |
| ▲ | coldpie 5 days ago | parent | prev | next [-] | | > If you can’t run on $current_debian, that’s very much a you problem. This is a reasonable position for most software, but definitely not all, especially when you fix a bug or add a feature in your dependent library and your Debian users (reasonably!) don't want to wait months or years for Debian to update their packages to get the benefits. This probably happens rarely for stable system software like postgres and nginx, but for less well-established usecases like running modern video games on Linux, it definitely comes up fairly often. | | |
| ▲ | teddyh 5 days ago | parent [-] | | Something I have seen that has recently become much more common is upstream software authors providing a Debian repository for the latest versions of their software, including backports for old Debian releases. | |
| ▲ | rcxdude 5 days ago | parent [-] | | Yes, mainly because such repositories don't have to follow debian's policies, and so it's a lot easier to package a version that vendors in dependencies in a version/configuration you're willing to support (and it's better to point users there than at an official debian version because if debian breaks something you'll be getting the bug reports no matter how much people try to tell users to report to the distribution first) |
|
| |
| ▲ | imiric 5 days ago | parent | prev | next [-] | | That is an impossible task in practice for most developers. Many distros, and Debian in particular, apply extensive patches to upstream packages. Asking a developer to depend on every possible variation of such packages, across many distros, is a tall order. Postgres and Nginx might be able to do it, but those are established projects with large teams behind them and plenty of leverage. They might even be able to influence distro maintainers to their will, since no distro will want to miss out on carrying such popular packages. So vendoring is in practice the only sane choice for smaller teams and projects. Besides, distro package managers carrying libraries for all programming languages is an insane practice that is impossible to scale and maintain. It exists in this weird unspecified state that can technically be useful for end users, but is completely useless for developers. Are they supposed to develop on a specific distro for some reason? Should it carry sources or only binaries? Is the dependency resolution the same for all languages? Should language tooling support them? It's an entirely ridiculous practice that should be abandoned altogether. Yes, it's also silly that every language has to reinvent the wheel for managing dependencies, and that it can introduce novel supply chain attack vectors, but the alternative is a far more ludicrous proposition. | | |
| ▲ | skydhash 5 days ago | parent | next [-] | | > distro package managers carrying libraries for all programming languages is an insane practice that is impossible to scale and maintain. That's not the idea. If a piece of software is packaged for a distro, then the distro will have the libraries needed for that software. If you're developing new software and want some new library that's not yet packaged, I believe you can figure out how to get it on your system. The thread is about the user's system, not yours. When I want to run your code, you shouldn't have to say: Use flatpak; Use docker; Use 24.1.1 instead of 24.1.0; Use $THING
| | |
| ▲ | marcosdumay 5 days ago | parent | next [-] | | It's not reasonable to expect every piece of software in existence to work with a compatible set of dependencies. So no, the distro can't supply all the libraries. What happens is that distro developers spend their time patching the upstream so it works with the set included in the distro. This has some arguable benefits for any user that wants to rebuild their software, at the cost of random problems added by that patching that fly under the radar of the upstream developers. Instead, the GP's proposal of vendoring the dependencies solves that problem, without breaking the compilation, and adds another set of issues that may or may not be a problem. I do argue that it's a good option to keep in mind and apply when necessary. | |
| ▲ | skydhash 5 days ago | parent [-] | | > It's not reasonable to expect every piece of software in existence to work with a compatible set of dependencies. So no, the distro can't supply all the libraries. That is not what is being asked. As a developer, you just need to provide the code and the list of requirements, and maybe some guide on how to build and run the tests. You shouldn't have to care about where I find those dependencies (maybe I'm running your code as PID 1). But a lot of developers want to be maintainers as well, and they want to enforce what can be installed on the user's system. (And no, I don't want Docker and multiple versions of nginx) | |
| ▲ | jen20 5 days ago | parent | next [-] | | The question is whose issue tracker ends up on blast when something that Debian did causes issues in software. Often only to find that the bug has been fixed already but the distribution won't bother to update. | |
| ▲ | rcxdude 5 days ago | parent | prev | next [-] | | >As a developer, you just need to provide the code and the list of requirements. And maybe some guide about how to build and run tests. You do not want to care about where I find those dependencies (Maybe I'm running you code as PID 1). That's provided by any competent build system. If you want to build it differently, with a different set of requirements, that's up to you to figure out (and fix when it breaks). | |
| ▲ | marcosdumay 5 days ago | parent | prev [-] | | > That is not what it's being asked. From whom? You seem to be talking only about upstream developers. |
|
| |
| ▲ | imiric 5 days ago | parent | prev [-] | | Right. Build and runtime dependencies are a separate matter. But for runtime dependencies, it's easier for developers to supply an OCI image, AppImage, or equivalent, with the exact versions of all dependencies baked in, than to support every possible package manager on every distro, and all possible dependency and environment permutations. This is also much easier for the user, since they only need to download and run a single self-contained artifact, that was previously (hopefully) tested to be working as intended. This has its own problems, of course, but it is the equivalent of vendoring build time dependencies. The last part of my previous comment was specifically about the practice of distros carrying build time libraries. This might've been acceptable for C/C++ that have historically lacked a dependency manager, but modern languages don't have this problem. It's a burden that distro maintainers shouldn't have to worry about. | | |
| ▲ | skydhash 5 days ago | parent [-] | | > it's easier for developers to supply an OCI image, AppImage, or equivalent, with the exact versions of all dependencies baked in, than to support every possible package manager on every distro, No developer is being asked to support every distro. You just need to provide the code and the requirement list. But some developers make the latter overly restrictive, and tailor the project to support only one release process. > This is also much easier for the user, since they only need to download and run a single self-contained artifact, that was previously (hopefully) tested to be working as intended `apt install` is way easier than the alternative and more secure. > It's a burden that distro maintainers shouldn't have to worry about. There's no burden because no one does it. You have dev versions of libraries because you need them to build the software that is being packaged. No one packages a library that is not being used by the software available in the distro. It's a software repository, not a library repository. | |
| ▲ | imiric 5 days ago | parent [-] | | > No developer is being asked to support every distro. You mentioned $current_debian above. Why Debian, and not Arch, Fedora, or NixOS? Supporting individual Linux distros is a deep rabbit hole, and smaller teams simply don't have the resources to do that. > You just need to provide the code and the requirement list. That's not true. Even offering a requirements list and installation instructions for a distro implies support for that distro. If something doesn't work properly, the developer can expect a flood of support requests. > `apt install` is way easier than the alternative and more secure. That's debatable. An OCI image, AppImage, or even Snap or Flatpak package is inherently more secure than a system package, and arguably easier to deploy and upgrade. > There's no burden because no one does it. Not true. Search Debian packages and you'll find thousands of language-specific libraries. Many other distros do the same thing. NixOS is probably the most egregious example, since it literally tries to take over every other package manager. > You have dev version for libraries because you need them to build the software that is being packaged. Eh, are the dev versions useful for end users or distro maintainers? If distro maintainers need to build the software that's being packaged, they can use whatever package manager is appropriate for the language stack. An end user shouldn't need to build the packages themselves, unless it's a build-from-source distro, which most aren't. My point is that there's no reason for these dependency trees to also be tracked by distro package managers. Every modern language has their own way of managing dependencies, and distros should stay out of it. The only responsibility distro package managers should have is managing runtime dependencies for binary packages. |
|
|
| |
| ▲ | skydhash 5 days ago | parent | prev [-] | | You do not depend on a package, you depend on its API. Implementation details shouldn't matter if behavior stays the same. Why do you care if the distro reimplemented ffmpeg or libcurl, or uses an alternative version built with musl? Either the library is there or it's not. Or the minimum version you want is there or it's not. You've already provided the code and the requirement list; it's up to the distro maintainer or the user to meet them. If the latter patches the code, why do you care that much? And if a library has feature flags, check them before using the parts that are gated. | |
| ▲ | imiric 5 days ago | parent | next [-] | | There's no guarantee that software/library vX.Y.Z packaged by distro A will be identical in behavior to one packaged by distro B. Sure, distro maintainers have all sorts of guidelines, but in reality, mistakes happen, and there can be incompatibilities between the version a developer has been testing against, and one the end user is using. Relying on feature flags is a pie in the sky solution, and realistically developers shouldn't have to be concerned with such environmental issues. Dependency declarations should be relied on to work 100% of the time, whether they're specified as version numbers or checksums. Since they're not reliable in practice, vendoring build and runtime dependencies is the only failproof method. This isn't to say that larger teams shouldn't support specific distros directly, but my point is that smaller teams simply don't have the resources to do so. | | |
| ▲ | skydhash 5 days ago | parent [-] | | But why do you care that much about how the user is running your code? Maybe my laptop is running Alpine and I patched some libraries to support musl, and now some methods are no-ops. As the developer, why does it matter to you? Would you want me to set up some chroot or container with a glibc-based system, just so that you can have consistent behavior on every computer that happens to run your code? Even the ones you do not own? | |
| ▲ | rcxdude 5 days ago | parent | next [-] | | Developers would generally like their application to work. Especially in the hands of non-technical users. If you're going to take things apart and take responsibility for when something breaks, go ham, but when devs find that their software is broken for many users because a widely-used distribution packaged it wrong, then it's kind of a problem because a) users aren't necessarily going to understand where the problem is, and b) regardless, it's still broken, and if you want to make something that works and have empathy for your users, it's kind of an unpleasant situation even if you're not getting the blame. | |
| ▲ | imiric 5 days ago | parent | prev [-] | | It matters because as a developer I'll get support requests from users who claim that my software has issues, even when the root cause is unrelated to my code. If I explicitly document that I support a single way of deploying the software, and that way is a self-contained artifact with all the required runtime dependencies, which was previously thoroughly tested in my CI pipeline, then I can expect far less support requests from users. Again, this matters a lot to smaller projects and teams. Larger projects have the resources to offer extended support for various environments and deployment procedures, but smaller ones don't have this luxury. A flood of support requests can lead to exhaustion, demotivation, and burnout, especially in open source projects and those without a profitable business model. Charging for support wouldn't fix this if the team simply doesn't have the bandwidth to address each request. |
|
| |
| ▲ | 5 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | rootnod3 5 days ago | parent | prev [-] | | But that would lock me in to, say, whatever $debian provides. And some dependencies only exist as source because they are not packaged for $distribution. Of course, if possible, just saying "hey, I need these dependencies from the system" is nicer, but also not error-free. If a system suddenly uses an older or newer version of a dependency, you might also run into trouble. In either case, you run into either a) a trust problem or b) a maintenance problem. And in that scenario I tend to prefer option b); at least I know exactly whom to blame and who is in charge of fixing it: me. Also comes down to the language, I guess. Common Lisp has a tendency to use source packages anyway. | |
| ▲ | skydhash 5 days ago | parent [-] | | > If a system suddenly uses an older or newer version of a dependency, you might also run into trouble. You won't. The user may. On his system. | | |
| ▲ | rootnod3 4 days ago | parent [-] | | Aware of that. So how is that different from any other Debian package? If you rely on a certain set of packages, you are always at the end at fault. You either trust a certain base or you vet it. |
|
|
|
| |
| ▲ | Sleaker 5 days ago | parent | prev | next [-] | | This isn't as new as you make it out; ant + ivy / maven / gradle had already started this in the 00s. Definitely turned into a mess, but I think the Java/cross-platform nature pushed this style of development along pretty heavily. Before this, wasn't CPAN already big? | |
| ▲ | sheerun 5 days ago | parent | prev | next [-] | | Back as in using fewer dependencies, or throwing a bunch of "certifying" services at all of them? | |
| ▲ | rom1v 5 days ago | parent | prev | next [-] | | I feel that Rust increases security by avoiding a whole class of bugs (thanks to memory safety), but decreases security by making supply chain attacks easier (due to the large number of transitive dependencies required even for simple projects). | | |
| ▲ | carols10cents 5 days ago | parent [-] | | Who is requiring you to use large numbers of transitive dependencies? You can always write all the code yourself instead. |
| |
| ▲ | rkagerer 5 days ago | parent | prev | next [-] | | I'm actually really frustrated by how hard it's become to manually add dependencies to my code and review and understand them. Libraries used to come with decent documentation; now it's just a couple lines of "npm install blah", as if that tells me anything. | |
| ▲ | smohare 5 days ago | parent | prev | next [-] | | [dead] | |
| ▲ | sieabahlpark 5 days ago | parent | prev | next [-] | | [dead] | |
| ▲ | BobbyTables2 5 days ago | parent | prev [-] | | Fully agree. So many people are so drunk on the kool aid, I often wonder if I’m the weirdo for not wanting dozens of third party libraries just to build a simple HTTP client for a simple internal REST api. (No I don’t want tokio, Unicode, multipart forms, SSL, web sockets, …). At least Rust has “features”. With pip and such, avoiding the kitchen sink is not an option. I also find anything not extensively used has bugs or missing features I need. It’s easier to fork/replace a lot of simple dependencies than hope the maintainer merges my PR on a timeline convenient for my work. | | |
| ▲ | WD-42 5 days ago | parent | next [-] | | If you don’t want Tokio I have bad news for you. Rust doesn’t ship an asynchronous runtime. So you’ll need something if you want to run async. | |
| ▲ | chasd00 5 days ago | parent | prev | next [-] | | For this specific case an LLM may be a good option. You know what you want and could do it yourself, but who wants to type it all out? An LLM could generate an http client from the socket level on up, and it would be straightforward to verify. "Create an http client in $language with basic support for GET and POST requests and outputs the response to STDOUT without any third party libraries. after processing command line arguments the first step should be opening a TCP socket". That should get you pretty far. | |
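For illustration only, a hypothetical sketch of what such a prompt might produce in Go (GET only, no TLS, minimal error handling, not audited output from any model):

    package main

    import (
        "fmt"
        "io"
        "net"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) != 3 {
            fmt.Fprintln(os.Stderr, "usage: httpget <host> <path>")
            os.Exit(1)
        }
        host, path := os.Args[1], os.Args[2]

        // Open a plain TCP socket to port 80 (no third-party libraries, no TLS).
        conn, err := net.Dial("tcp", host+":80")
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()

        // Hand-write a minimal HTTP/1.1 GET request.
        req := strings.Join([]string{
            "GET " + path + " HTTP/1.1",
            "Host: " + host,
            "Connection: close",
            "",
            "",
        }, "\r\n")
        if _, err := conn.Write([]byte(req)); err != nil {
            fmt.Fprintln(os.Stderr, "write:", err)
            os.Exit(1)
        }

        // Dump the raw response (status line, headers, body) to stdout.
        if _, err := io.Copy(os.Stdout, conn); err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
    }

POST bodies, chunked responses, and redirects are where the extra lines pile up, which is roughly where the "just use a library" argument starts to have a point.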
| ▲ | autoexec 5 days ago | parent [-] | | Sure, after all, when has vibe coding ever resulted in security issues? | | |
| |
| ▲ | bethekidyouwant 5 days ago | parent | prev | next [-] | | Just use your fork until they merge your MR? | |
| ▲ | 3036e4 5 days ago | parent | prev [-] | | There is only one Rust application (server) I use enough that I try to keep up and rebuild it from the latest release every now and then. Most of the time new releases mostly bump versions of some of the 200 or so dependencies. I have no idea how I, or the server code's maintainers, can have any clue what exactly is brought in with each release. How many upgrades times 200 projects before there is a near 100% chance of something bad being included? The ideal number of both dependencies and releases is zero. That is the only way to know nothing bad was added. Sadly much software seems to push for MORE, not fewer, of both. Languages and libraries keep changing their APIs, forcing cascades of unnecessary changes to everything. It's like we want supply chain attacks to hurt as much as possible. | | |
|
|
|
| ▲ | sfink 5 days ago | parent | prev | next [-] |
| I think something like cargo vet is the way forward: https://mozilla.github.io/cargo-vet/ Yes, it's a ton of overhead, and an equivalent will be needed for every language ecosystem. The internet was great too, before it became too monetizable. So was email -- I have fond memories of cold-emailing random professors about their papers or whatever, and getting detailed responses back. Spam killed that one. Dependency chains are the latest victim of human nature. This is why we can't have nice things. |
|
| ▲ | wat10000 6 days ago | parent | prev | next [-] |
| Part of the value proposition for bringing in outside libraries was: when they improve it, you get that automatically. Now the threat is: when they “improve” it, you get that automatically. left-pad should have been a major wake up call. Instead, the lesson people took away from it seems to have mostly been, “haha, look at those idiots pulling in an entire dependency for ten lines of code. I, on the other hand, am intelligent and thoughtful because I pull in dependencies for a hundred lines of code.” |
| |
| ▲ | fluoridation 6 days ago | parent | next [-] | | The problem is less the size of a single dependency than the transitivity of adding dependencies. It used to be, library developers sought not to depend on other libraries if they could avoid it, because it meant their users had to make their build systems more complicated. It was unusual for a complete project to have a dependency graph more than two levels deep. Package managers let you build these gigantic dependency graphs with ease. Great for productivity, not so much for security. | |
| ▲ | wat10000 5 days ago | parent [-] | | The size itself isn’t a problem, it’s just a rough indicator of the benefit you get. If it’s only replacing a hundred lines of code, is it really worth bringing in a dependency, and as you point out potentially many transitive dependencies, instead of writing your own? People understood this with left-pad but largely seemed unwilling to extrapolate it to somewhat larger libraries. | | |
| ▲ | 3036e4 5 days ago | parent [-] | | You are probably bringing in 10-1000 lines of code for every 1 line you did not have to write (I am sure some good estimate could be calculated?), since all the libraries support cases you do not need. This also tends to result in having to use APIs that are far more complex than they have to be. In addition to security risks. |
|
| |
| ▲ | chuckadams 5 days ago | parent | prev [-] | | So, what's the acceptable LOC count threshold for using a library? Maybe scolding and mocking people isn't a very effective security posture after all. | | |
| ▲ | wat10000 5 days ago | parent | next [-] | | Time for everybody's favorite engineering answer: it depends! You have to weigh the cost/benefit tradeoff. But you have to do it in full awareness of the costs, including potential costs from packages being taken down, broken, or subverted. In any case, for an external dependency, 100 lines is way too low of a benefit. I'm not trying to be effective, I'm just lamenting. Maybe being sarcastic isn't a very effective way to get people to be effective? | | |
| ▲ | chuckadams 5 days ago | parent [-] | | Naw, sarcasm totally works... ;) I'd say it all depends -- there's that word again -- on what those 100 LOC are expressing. I suppose one could still copy/paste such a small amount of code, but I'd rather just check in some subset of vendored dependencies. Or maybe just pin the dependency to a commit hash (since we can't depend on version tags being immutable). Something actionable beyond peer pressure at any rate. | | |
| ▲ | wat10000 5 days ago | parent [-] | | There are definitely 100-line chunks of code I wouldn't want to rewrite from scratch. They also tend not to be the sort of thing that needs a lot of updates, so a copy/paste job ought to do the job. The big advantage with a dependency manager is that you don't have to find all of the dependency's dependencies, figure out the right build settings, etc. That's super helpful when it's huge, but it's not really doing anything for you when it's small. |
|
| |
| ▲ | tremon 5 days ago | parent | prev [-] | | Scolding and mocking is all we're left with, since two decades worth of rational arguments against these types of hazards have been dismissed as fear-mongering. | | |
| ▲ | chuckadams 5 days ago | parent [-] | | I don't think we're going to reach a point where "don't use dependencies at all" is a rational argument for most projects. | | |
| ▲ | tremon 5 days ago | parent [-] | | It's a good thing then that was not among the rational arguments I was referring to. Do you have other straw men on offer? | | |
|
|
|
|
|
| ▲ | legacynl 5 days ago | parent | prev | next [-] |
| Well that's just the difference between a library and building custom. A library is by definition supposed to be somewhat generic, adaptable and configurable. That takes a lot of code. |
|
| ▲ | skydhash 5 days ago | parent | prev | next [-] |
| I actually loathe those progress trackers. They break emacs shell (looking at you expo and eas). Why not print a simple counter like: ..10%..20%..30% Or just: Uploading… Terminal codes should be for TUI or interactive-only usage. |
| |
| ▲ | sfink 5 days ago | parent | next [-] | | Carriage returns are good enough for progress bars, and seem to work fine in my emacs shell at least: % echo -n "loading..."; sleep 1; echo -en "\rABORT ABORT"; sleep 1; echo -e "\rTerminated"
works fine for me, and that's with TERM set to "dumb". (I'm actually not sure why it cleared the line automatically though. I'm used to doing "\rmessage " to clear out the previous line.) Admittedly, that'll spew a bunch of stuff if you're sending it to a pager, so I guess that ought to be % if [ -t 1 ]; then echo -n "loading..."; sleep 1; echo -en "\rABORT ABORT"; sleep 1; echo -e "\rTerminated"; fi
but I still haven't made it to 15 dependencies or 200 lines of code! I don't get a full-screen progress bar out of it either, but that's where I agree with you. I don't want one. | | |
| ▲ | JdeBP 5 days ago | parent [-] | | The problem is that the two pagers don't do everything that they should do in this regard. They are supposed to do what the ul utility does, but neither BSD more nor less handles a CR emitted to overstrike the line from the beginning. They only handle overstriking characters with BS. most handles overstriking with CR, though. Your output appears as intended when you page it with most. * https://jedsoft.org/most/ |
| |
| ▲ | flexagoon 5 days ago | parent | prev | next [-] | | I feel like not properly supporting widely used escape codes is an issue with the shell, not with the program that uses them | |
| ▲ | quotemstr 5 days ago | parent | prev [-] | | Try mistty |
|
|
| ▲ | littlecranky67 5 days ago | parent | prev | next [-] |
| We are using NX heavily (and are not affected) in my teams in a larger insurance company. We have >10 standalone line-of-business apps and 25+ individual libraries in the same monorepo, managed by NX. I've toyed with other monorepo tools for these kinds of complex setups in my career (lerna, rushjs, yarn workspaces), but not only did none come close, lerna has basically been handed over to NX, and rushjs is unmaintained. If you have any proposal for how to properly manage the complexity of a FE monorepo with dozens of daily developers involved and heavy CI/CD/DevOps integration, please post alternatives - given this security incident, many people are looking. |
| |
| ▲ | abuob 5 days ago | parent | next [-] | | Shameless self-plug and probably not what you're looking for, but anyway: I've created https://github.com/abuob/yanice for that sort of monorepo-size; too many applications/libraries to be able to always run full builds, but still not google-scale or similar. It ultimately started as a small project because I got fed up with NX' antics a few years back (I think since then they improved quite a lot though), I don't need caching, I don't need their cloud, I don't need their highly opinionated approach on how to structure a monorepository; all I needed was decent change-detection to detect which project changed between the working-tree and a given commit. I've now since added support to enforce module-boundaries as it's definitely a must on a monorepo. In case anyone wants to try it out - would certainly appreciate feedback! | |
| ▲ | ojkwon 5 days ago | parent | prev | next [-] | | https://moonrepo.dev/ worked great for our team's setup. It also supports Bazel remote cache, agnostic to the vendor. | |
| ▲ | threetonesun 5 days ago | parent | prev | next [-] | | npm workspaces and npm scripts will get you further than you might think. Plenty of people got along fine with Lerna, which didn't do much more than that, for years. I will say, I was always turned off by NX's core proposition when it launched, and more turned off by whatever they're selling as a CI/CD solution these days, but if it works for you, it works for you. | | |
| ▲ | crabmusket 5 days ago | parent | next [-] | | I'd recommend pnpm over npm for monorepos. Forcing you to be explicit about each package's dependencies is good. I found npm's workspace features lacking in comparison and sparsely documented. It was also hard to find advice on the internet. I got the sense nobody was using npm workspaces for anything other than beginner articles. | | |
| ▲ | threetonesun 5 days ago | parent | next [-] | | In the context of what we're talking about here, using the default package manager to install a different package manager as a dependency has never quite sat right with me. And npm workspaces is certainly "lacking features" compared to NX, but in terms of making `npm link` for local packages easier and running scripts across packages it does fine. | | |
| ▲ | crabmusket 5 days ago | parent [-] | | Yes, I've found the experience of getting pnpm quite irritating/confusing. Corepack doesn't seem to work the way I would want it to, either. |
| |
| ▲ | dboreham 5 days ago | parent | prev [-] | | After 10 years or so enduring the endless cycle of "new thing to replace npm", I'm using: npm. And I'm not creating monorepos. | | |
| ▲ | crabmusket 5 days ago | parent [-] | | I was happily using npm until I outgrew it. pnpm seemed the smallest step towards what I needed after having evaluated nx, moonrepo etc. |
|
| |
| ▲ | littlecranky67 5 days ago | parent | prev | next [-] | | The killer feature of NX is its build cache and the ability to operate on the git staged files. It takes a couple of minutes to build our entire repo on an M4 Pro. NX caches the builds of all libs and will only rebuild those that are affected. The same holds true for linting, prettier, tests, etc. Any solution that just executes full builds would be a non-starter for all use cases. | |
| ▲ | halflife 5 days ago | parent [-] | | Don’t forget task dependency tree, without that you will have a ton of build scripts |
| |
| ▲ | littlecranky67 5 days ago | parent | prev [-] | | I buried npm years ago; we are happily using yarn (v4 currently) in that project. Which also means that even if we were affected by the malware, nobody uses the .npmrc (we have a .yarnrc.yml instead) :) | |
| |
| ▲ | tcoff91 5 days ago | parent | prev [-] | | moonrepo is pretty nice |
|
|
| ▲ | dakiol 5 days ago | parent | prev | next [-] |
| Easier solution: you don’t need a progress bar. |
| |
| ▲ | nicce 5 days ago | parent | next [-] | | Depends on the purpose… but I guess if you replace it with estimated time left, that may be good enough. Sometimes a progress bar is just there to help you decide whether you need to stop the job because it is taking too much time. | |
| ▲ | f311a 5 days ago | parent | prev | next [-] | | It runs indefinitely to process small jobs. I could log stats somewhere, but it complicates things. Right now, it's just a single binary that automatically gets restarted in case of a problem. | | | |
| ▲ | chairmansteve 5 days ago | parent | prev | next [-] | | One of the wisest comments I've ever seen on HN. | |
| ▲ | SoftTalker 5 days ago | parent | prev | next [-] | | Every feature is also a potential vulnerability. | | | |
| ▲ | vendiddy 5 days ago | parent | prev [-] | | And if you really do? Print the percentage to stdout. |
|
|
| ▲ | girvo 5 days ago | parent | prev | next [-] |
| > People really need to start thinking twice when adding a new dependency I've been preaching this since ~2014 and had little luck getting people on board unless I have full control over a particular team (which is rare). The need to avoid "reinventing the wheel" seems so strong to so many. |
| |
| ▲ | vendiddy 5 days ago | parent [-] | | I find if I read the source code of a dependency I might add, it's common that the part that I actually need is like 100 LOC rather than 1500 LOC. Please keep preaching. |
|
|
| ▲ | andix 5 days ago | parent | prev | next [-] |
| nx is not a random dependency. It's a multi-project management tool, package manager, build tool, and much more. It's backed by a commercial offering. A lot of serious projects use it for managing a lot of different concerns. This is not something silly like leftpad or is-even. |
|
| ▲ | cosmic_cheese 5 days ago | parent | prev | next [-] |
| Using languages and frameworks that take a batteries-included approach to design helps a lot here too, since you don’t need to pull in third party code or write your own for every little thing. It’s too bad that more robust languages and frameworks lost out to the import-world culture that we’re in now. |
|
| ▲ | christophilus 6 days ago | parent | prev | next [-] |
| I’d like a package manager that essentially does a git clone, and a culture that says: “use very few dependencies, commit their source code in your repo, and review any changes when you do an update.” That would be a big improvement to the modern package management fiasco. |
| |
| ▲ | hvb2 5 days ago | parent | next [-] | | Is that realistic though? What you're proposing is letting go of abstractions completely. Say you need compression, you're going to review changes in the compression code?
What about encryption, a networking library, what about the language you're using itself? That means you need to be an expert on everything you run. Which means no one will be building anything non-trivial. | |
| ▲ | 3036e4 5 days ago | parent | next [-] | | Small, trivial, things, each solving a very specific problem, and that can be fully understood, sounds pretty amazing though. Much better than what we have now. | | |
| ▲ | hvb2 5 days ago | parent [-] | | That's what a package is supposed to solve, no? Sure there are packages trying to solve 'the world' and as a result come with a whole lot of dependencies, but isn't that on whoever installs it to check? My point was that git clone of the source can't be the solution, or you own all the code... And you can't. You always depend on something.... | | |
| ▲ | 3036e4 5 days ago | parent [-] | | Your dependencies are also part of your product and your full responsibility. No one you deliver a product to will accept "it wasn't my code, it was in a dependency of one of my dependencies" as an excuse. Of course you need to depend on things, but it is insane to not keep that to a minimum. | | |
| ▲ | hvb2 5 days ago | parent [-] | | So you're expecting every product affected by this to go and do a big mea culpa because one of their dependencies broke? Like when xz was attacked, everyone pointed at that and no one said they didn't vet their dependencies. That's the whole point, you attack a dependency that everyone relies on because it's been good and stable. That's how these pyramids build up over time. So spoiler, it's not unlikely one of the dependencies in your minimal set gets exploited... | |
| ▲ | jen20 5 days ago | parent [-] | | > So you're expecting to see every product affected by this to go and do a big mea culpa because one of their dependencies broke? Yes, absolutely. It's the bare minimum for people offering commercial products. |
|
|
|
| |
| ▲ | christophilus 5 days ago | parent | prev [-] | | Yes. I would review any changes to any 3rd party libraries. Why is that unrealistic? Regarding the language itself, I may or may not. Generally, I pick languages that I trust. E.g. I don't trust Google, but I don't think the Go team would intentionally place malware in the core tools. Libraries, however, often are written by random strangers on the internet with a different level of trust. | | |
| ▲ | Eji1700 5 days ago | parent | next [-] | | > Why is that unrealistic? Because the vast majority of development is done by people with a very narrow focus of skills on an extreme deadline. And are you actually comfortable with compression, networking, encryption, IO, and all the other taken-for-granted libraries that wind up daisy-chained together? Because if you are, great, but at the same time, that's not the job description for like 90% of coding jobs. I don't expect my frontend guy to need to know encryption so he can review the form library he's using. | |
| ▲ | skydhash 5 days ago | parent [-] | | Why would a form library have encryption? That's a red flag for me. |
| |
| ▲ | ashirviskas 5 days ago | parent | prev | next [-] | | Good for you, but sadly, most people are not like you. Or don't have the opportunity to be like you. | |
| ▲ | rcxdude 5 days ago | parent | prev [-] | | How realistic it is depends on how big your dependencies are (in total LOC, not 'number of packages' - something I think gives rust's ecosystem a bad rap, given the tendency for things to be split into lots of packages so the total amount of code you pull in can be minimised). For many projects the LOC of dependencies utterly dwarfs the amount of code in the project itself, and it's pretty infeasible to review it all. |
|
| |
| ▲ | k3nx 5 days ago | parent | prev | next [-] | | That's what I used git submodules for. I had a /lib folder in my project where the dependencies were pulled/checked out from. This was before I was doing CI/CD and before folks said git submodules were bad. Personally, I loved it. I only looked at updating them when I was going to release a new version of my program. I could easily do a diff to see what changed. I might not have understood everything, but it wasn't too difficult to see 10-100 line code changes to get a general idea. I thought it was better than the big black box we currently deal with. Oh, this package uses this package, and this package... what's different? No idea now, really. |
| ▲ | hardwaregeek 5 days ago | parent | prev | next [-] | | That’s called the original Go package manager and it was pretty terrible | | |
| ▲ | christophilus 5 days ago | parent [-] | | I think it was only terrible because the tooling wasn't great. I think it wouldn't be too terribly hard to build a good tool around this approach, though I admittedly have only thought about it for a few minutes. I may try to put together a proof of concept, actually. | | |
| ▲ | jerf 5 days ago | parent [-] | | If you're working in Go, you don't need to put together a proof of concept. Very basic project tooling in conjunction with "go mod vendor", which takes care of copying in the dependencies in locally, will do what you're talking about. Go may not default to this operation, but using it this way is fairly easy. |
|
| |
| ▲ | willsmith72 5 days ago | parent | prev [-] | | sounds like the best way to miss critical security upgrades | | |
| ▲ | christophilus 5 days ago | parent | next [-] | | Why? If you had a package manager tell you "this is out of date and has vulnerability XYZ", you'd do a "gitpkg update" or whatever, and get the new code, review it, and if it passes review, deploy it. | |
| ▲ | skydhash 5 days ago | parent | prev [-] | | That’s why most mature (as in disciplined) projects have a rss feed or a mailing list. So you know when there’s a security bug and what to do about it. |
|
|
|
| ▲ | kbrkbr 5 days ago | parent | prev | next [-] |
| But here's the catch. If you do that in a lot of places, you'll have a lot of extra code to manage. So your suggested approach does not seem to scale well. |
| |
| ▲ | lxgr 5 days ago | parent [-] | | There's obviously a tradeoff there. At some level of complexity it probably makes sense to import (and pin to a specific version by hash) a dependency, but at least in the JavaScript ecosystem, that level seems to be "one expression of three tokens" (https://www.npmjs.com/package/is-even). |
|
|
| ▲ | myaccountonhn 4 days ago | parent | prev | next [-] |
| In pure functional programming languages like Elm and Haskell, it is extremely easy to audit dependencies because any side effect must be explicitly listed, so you just search for those. That makes the risk way lower for dependencies, which is an underrated strength. |
|
| ▲ | throwmeaway222 5 days ago | parent | prev | next [-] |
| I've been saying this for a while, llms will get rid of a lot of libraries, rightly so. |
|
| ▲ | chrismustcode 5 days ago | parent | prev | next [-] |
| I honestly find that in Go it's often easier, and less code, to just write whatever feature you're trying to implement than to use a package. Compared to TypeScript, where it's a package + code to use said package, which always was more LOC than anything comparable I have done in Go. |
|
| ▲ | croes 6 days ago | parent | prev | next [-] |
| Without these dependencies there would be no training data for the AI to write your code |
| |
| ▲ | f311a 6 days ago | parent [-] | | I could write it myself. It's trivial, just takes a bit more time, and googling escape sequences for the terminal to move the cursor and clear lines. | | |
|
|
| ▲ | amelius 5 days ago | parent | prev [-] |
| And do you know what type of code the LLM was trained on? How do you know its sources were not compromised? |
| |
| ▲ | f311a 5 days ago | parent [-] | | Why do I need to know that if I'm an experienced developer and I know exactly what the code is doing? The code is trivial, just print stuff to stdout along with escape sequences to update output. | | |
|