| ▲ | elric 5 days ago |
| This is critical infrastructure, and it gets compromised way too often. There are so many horror stories of NPM (and similar) packages getting filled with malware. You can't rely on people not falling for phishing 100% of the time, and people who publish software packages tend to be at least somewhat technical.

Can package publishing platforms PLEASE start SIGNING emails? Publish GPG keys (or whatever, I don't care about the technical implementation) and sign every god damned email you send to people who publish stuff on your platform. Educate the publishers on this. Get them to distrust any unsigned email, no matter how convincing it looks.

And while we're at it, it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it, but the actions in this example were clearly suspicious: user logs in, changes 2FA settings, immediately adds a new API token, which immediately gets used to publish packages. Maybe there should be a 24-hour period after changing any form of credentials during which nothing can be published, accompanied by a bunch of signed notification emails.

Of course, that's all moot if the attacker also changes the email address. |
|
| ▲ | feross 5 days ago | parent | next [-] |
Disclosure: I’m the founder of https://socket.dev. We analyzed this DuckDB incident today. The attacker phished a maintainer on npmjs.help, proxied the real npm, reset 2FA, then immediately created a new API token and published four malicious versions. A short publish freeze after 2FA or token changes would have broken that chain. Signed emails help, but passkeys plus a publish freeze on auth changes is what would have stopped this specific attack.

There was a similar npm phishing attack back in July (https://socket.dev/blog/npm-phishing-email-targets-developer...). In that case, signed emails would not have helped. The phish used npmjs.org — a domain npm actually owns — but they never set DMARC there. DMARC is only set on npmjs.com, the domain they send email from.

This is an example of the “lack of an affirmative indicator” problem. Humans are bad at noticing that something is missing. Browsers learned this years ago: instead of showing a lock icon to indicate safety, they flipped it to show warnings only when unsafe. Signed emails have the same issue — users often won’t notice the absence of the right signal. Passkeys and publish freezes solve this by removing the human from the decision point. |
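For concreteness: a DMARC policy is just a DNS TXT record, and since npmjs.org sends no legitimate mail, a reject-everything policy would be safe to publish there. A minimal sketch (record contents illustrative, not npm's actual DNS):

    ; hypothetical record on the unused npmjs.org domain:
    ; spoofed mail fails SPF/DKIM alignment, and p=reject tells
    ; receiving servers to bounce it rather than deliver it
    _dmarc.npmjs.org.  IN  TXT  "v=DMARC1; p=reject"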
| |
| ▲ | Moru 4 days ago | parent [-] | | Some registrars make this easy. I think it was Cloudflare that has a button for "Do not allow email from this domain". I saw it the last time I set up a domain that I didn't want to send email from. I'm guessing you get that question if there are no MX records for the domain when you move to Cloudflare. |
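The "no email from this domain" setup is just a handful of DNS records; presumably that button publishes something like the following (illustrative; I haven't checked exactly what Cloudflare sets):

    ; "this domain sends no mail": illustrative records
    example.org.         IN  MX   0 .                   ; null MX (RFC 7505): accepts no mail
    example.org.         IN  TXT  "v=spf1 -all"         ; SPF: no host may send as this domain
    _dmarc.example.org.  IN  TXT  "v=DMARC1; p=reject"  ; DMARC: reject anything claiming to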
|
|
| ▲ | SoftTalker 5 days ago | parent | prev | next [-] |
I think you just have to distrust email (or any other "pushed" messages), period. Just don't ever click on a link in an email or a message. Go to the site from your own previously bookmarked shortcut, or type in the URL.

I got a fraud alert email from my credit card company the other day. It included links to view and confirm/deny the suspicious charge. It all looked OK; the email included my name and the last digits of my account number. I logged in to the website instead, and when I called to follow up I used the phone number printed on my card. It turned out to be a legit email, but you can't really know. Most people don't understand public key signing well enough for "only trust signed emails" to be a reliable defense.

Also, if you're sending emails like this to your users, stop including links. Instead, give them instructions on what to do on your website or app. |
| |
| ▲ | Moru 4 days ago | parent | next [-] | | There are companies that send invoices by email where you have to click a link; there is no way to log in on their site to get the invoice. It would be an easy fix for them (we use the same invoicing company as they do, so I know): all they need to do is click "Allow sending bills directly to the customer's bank". Every month when I get the email, I use the chat function on their webpage to ask when they will enable this, and the answer is always "not possible, maybe some day". I wish we could stop training people to click links in random messages just because we want to be able to track their movements online. | |
| ▲ | sroussey 5 days ago | parent | prev [-] | | I get Coinbase SMS all the time with a code not to share. But also… “call this phone number if you did not request the code”. | | |
| ▲ | sgc 5 days ago | parent [-] | | This does nothing for the case of receiving a fake Coinbase SMS with a fake contact phone number. I have had people attempt fraud at my work with live calls as follow-up to emails and texts. I only caught it because it didn't pass the smell test, so I did quite a bit of research. Somebody else got caught in the exact same scam and I had to extricate them from it. They didn't believe me at first, and I had to hit them over the head a bit with the truth before it sank in. | |
| ▲ | Moru 4 days ago | parent [-] | | Yes, this is a classic scam vector. We really should stop training users to click links / call phone numbers in SMS and emails. |
|
|
|
|
| ▲ | parliament32 5 days ago | parent | prev | next [-] |
> it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it

USE PASSKEYS. Passkeys are phishing-resistant MFA, which has been a US govt directive for agencies and suppliers for three years now [1]. There is no excuse for infrastructure as critical as NPM to still be allowing TOTP for MFA.

[1] https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-0... |
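Worth spelling out why passkeys are phishing-resistant where TOTP isn't: the browser scopes each credential to the relying party's domain, so a look-alike site has nothing to ask the user to retype. A minimal browser-side sketch (values illustrative, not npm's actual integration):

    // TypeScript sketch: requesting a WebAuthn assertion.
    // The browser only offers credentials whose rpId matches the current
    // origin, so on npmjs.help a credential registered for npmjs.com simply
    // does not exist; there is no code or secret for the victim to hand over.
    declare const challengeFromServer: Uint8Array; // random bytes from the real server

    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge: challengeFromServer,
        rpId: "npmjs.com",            // credential is bound to this domain
        userVerification: "required", // on-device PIN/biometric gate
      },
    });
    // The signed assertion covers the origin and the challenge, so it
    // cannot be replayed by, or to, any other site.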
| |
| ▲ | cuu508 4 days ago | parent | next [-] | | Use WebAuthn as the second factor. Passkeys are single-factor authentication, and a downgrade from password+WebAuthn. | |
| ▲ | parliament32 4 days ago | parent [-] | | Depends on where you store them. If they're in a TPM (like Windows Hello for Business), it's two-factor: you need the TPM itself (something you have) plus a PIN or biometric to unlock it (something you know/are). But if you're just loading keys into a software password manager, yes, it's single-factor. | |
| ▲ | int_19h 2 days ago | parent [-] | | At this point, we have passkey support integrated in both major desktop OSes (Windows, macOS) and both major mobile OSes (Android, iOS). All of them require both the physical device and either PIN or biometric unlock. |
|
| |
| ▲ | smw 5 days ago | parent | prev | next [-] | | This is the way! Passkeys or FIDO2 (YubiKey) should be required for supply-chain-critical infrastructure like this. |
| ▲ | FreakLegion 4 days ago | parent | prev [-] | | Yes, use FIDO, you'll be better off, but no, passkeys aren't immune to account takeover. E.g. not only does GitHub support OAuth apps, it supports device code flow, and thus: https://www.praetorian.com/blog/introducing-github-device-co.... |
|
|
| ▲ | ignoramous 5 days ago | parent | prev | next [-] |
> Can package publishing platforms PLEASE start SIGNING emails

I am skeptical this solves phishing rather than adding to the woes (would you blindly click on links just because the email was signed?), but if we are going to suggest public key cryptography, then: NPM could let package publishers choose whether only signed packages may be released, and let consumers decide if they will only depend on signed packages. I guess, for attackers, that moves the target from compromising a publisher account to getting hold of the keys, but that's going to be impossible... as private keys never leave the SSM/HSM, right?

> Get them to distrust any unsigned email, no matter how convincing it looks.

For shops of any important consequence, email security is table stakes at this point: https://www.lse.ac.uk/research/research-for-the-world/societ... |
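On the consumer side there is already a partial mechanism for the second half of this: recent npm versions can verify registry signatures and provenance attestations for an installed tree, though it's opt-in rather than enforced (command name from memory; check your npm version):

    # verify signatures/attestations for everything in node_modules
    npm audit signatures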
| |
| ▲ | elric 5 days ago | parent [-] | | I don't think signed email would solve phishing in general. But for a service by-and-for programmers, I think it at least stands a chance. Signing the packages seems like low-hanging fruit as well, if that isn't already being done. But I'm skeptical that those keys are as safe as they should be; IIRC someone recently abused a bug in a GitHub pipeline to execute arbitrary code and managed to publish packages that way. Which seems like an insane vulnerability class to me, and probably an inevitable consequence of centralising so many things on GitHub. |
|
|
| ▲ | nikcub 5 days ago | parent | prev | next [-] |
* passkeys
* signed packages

Enforce it for the top X thousand most popular packages to start. Some basic hygiene about detecting unique new user login sessions would help as well. |
| |
| ▲ | SAI_Peregrinus 5 days ago | parent [-] | | Requiring signed packages isn't enough; you have to enforce that signing can only be done with the approval of a trusted person. People will inevitably set up their CI system to sign packages with no human intervention needed. If they're smart, and the CI system is capable of it, they'll set it up to build only when a tag signed by someone approved to make releases is pushed. But far too often they'll just build whenever a tag is pushed, without enforcing signature verification or even checking which contributors can make releases. Someone with access to an approved contributor's GitHub account can very often trigger the CI system to make a signed release, even without access to that contributor's commit signing key. |
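For illustration, here's a hedged sketch of the "verify the tag before building" pattern as a GitHub Actions workflow (file paths and keyring name are hypothetical; the point is the verify-tag gate ahead of any publish step):

    # .github/workflows/release.yml, an illustrative sketch rather than a drop-in config
    name: release
    on:
      push:
        tags: ['v*']
    permissions:
      id-token: write   # needed for npm --provenance
    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Verify the tag is signed by an approved releaser
            run: |
              # .ci/release-keys.gpg is a hypothetical keyring committed to the
              # repo, holding only the keys of people allowed to cut releases.
              gpg --import .ci/release-keys.gpg
              git verify-tag "$GITHUB_REF_NAME"  # fails the job if unsigned or untrusted
          - name: Publish
            run: npm publish --provenance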
|
|
| ▲ | evantbyrne 5 days ago | parent | prev | next [-] |
The email was sent from the 'npmjs dot help' domain. I'm not saying you're wrong, but basic due diligence would also have prevented this. And if not by email, the maintainer might still have been compromised over text or some other medium. Meanwhile, maintainers of larger projects can avoid these problems today by not importing and auto-updating a bunch of tiny packages that look like they could have been lifted from Stack Overflow. |
| |
| ▲ | chrisweekly 5 days ago | parent [-] | | Re: "npmjs dot help", way too many companies use random domains -- effectively training their users to fall for phishing attacks. | | |
| ▲ | InsideOutSanta 5 days ago | parent | next [-] | | This exactly. It's wild how much valid email can look like phishing, and how confusing it is that companies use different domains for critical things. One example that always annoys me: the website listing all of Proton's apps isn't at an address you'd expect, like apps.proton.me. It's at protonapps.com. Just... why? Why would you train your users to download apps from domains other than your primary one? It also annoys me when people see this happen and point out some obvious detail that the person who fell for the attack missed. That's completely irrelevant, because everyone is stupid sometimes. Everyone can be stressed out and make bad decisions. It's always a good idea to make bad decisions harder to make. | |
| ▲ | OkayPhysicist 4 days ago | parent [-] | | I can answer why this is at the company I work at right now: It's a PITA to coordinate between teams, and my team doesn't control the main domain. If I wanted my team's application to run on the parent domain, I would have to negotiate with the crayon eaters in IT to make a subdomain, point it at whatever server, and then if I want any other changes to be made, I'd have to schedule a followup meeting, which will generate more meetings, etc. If I want to make any changes to the mycompany.othertld domain, I can just do it, with no approval from anyone. | | |
| ▲ | SoftTalker 4 days ago | parent [-] | | Are you arguing that it’s a good idea for random developers to be able to set up new subdomains on the company domain without any oversight? | | |
| ▲ | mdaniel 4 days ago | parent | next [-] | | Do they work there or not? I deeply appreciate that everyone's threat model is different, but I'd bet anyone that wants to create a new DNS record also has access to credentials that would do a ton more actual damage to the company if they so chose Alternatively, yup, SOC2 is a thing: optionally create a ticket tracking the why, then open a PR against the IaC repo citing that ticket, have it ack-ed by someone other than the submitter, audit trail complete, change managed, the end | |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | OkayPhysicist 4 days ago | parent | prev [-] | | What's your threat model that says they shouldn't? If you don't trust your senior devs, you're already pwned. |
|
|
| |
| ▲ | 0cf8612b2e1e 5 days ago | parent | prev [-] | | Too many services will send you 2FA codes from a different number for each request. |
|
|
|
| ▲ | zokier 5 days ago | parent | prev | next [-] |
SPF/DKIM already authenticate the sender, but that doesn't help if the user doesn't check who the email is from. And in that case GPG would not help much either. |
| |
| ▲ | elric 5 days ago | parent | next [-] | | SPF & DKIM are all but worthless in practice, because so many companies send emails from garbage domains, or add large-scale marketing platforms (like Mailchimp) to their SPF records. Citroen, for example, sends software update notifications for their cars from mmy-customerportal.com. That domain looks and sounds like a phisher's paradise. But somehow, it's legit. How can we expect any user to make the right decision when we push this kind of garbage in their face? | |
| ▲ | JimDabell 5 days ago | parent | next [-] | | The problem is there is no continuity. An email from an organisation that has emailed you a hundred times before looks the same as an email from somebody who has never emailed you before. Your inbox is a collection of legitimate email floating in a vast ocean of email of dubious provenance.

I think there’s a fairly straightforward way of fixing this: contact requests for email. The first email anybody sends you has an attachment that requests a token. Mail clients sort these into a “friend request” queue. When the request is accepted, the sender gets the token, and the mail gets delivered to the inbox. From that point on, the sender uses the token. Emails that use tokens can skip all the spam filters because they are known to be sent by authorised senders.

This has the effect of separating inbound email into two collections: the inbox, containing trustworthy email where you explicitly granted authorisation to the sender; and the contact request queue. If a phisher sends you email, it will end up in the new request queue, not your inbox. That should be a big glaring warning that it’s not a normal email from somebody you know. You would have to accept their contact request in order to even read the phishing email.

I went into more detail about the benefits of this system and how it can be implemented in this comment: https://news.ycombinator.com/item?id=44969726 | |
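A rough sketch of what the receiving side of that scheme could look like (all names hypothetical; the hard parts, key management and sender adoption, are elided):

    // Hypothetical receiver-side classification for the contact-token idea.
    type Verdict = "inbox" | "contact_request_queue";

    interface InboundMail {
      from: string;
      senderToken?: string; // token we previously issued to this sender, if any
    }

    // Tokens handed out when the user accepted a contact request.
    const issuedTokens = new Set<string>();

    function classify(mail: InboundMail): Verdict {
      // A valid token means the user explicitly authorised this sender:
      // straight to the inbox, regardless of which From address or server
      // the organisation happens to use this time.
      if (mail.senderToken !== undefined && issuedTokens.has(mail.senderToken)) {
        return "inbox";
      }
      // Everything else, including every phish, lands in the contact
      // request queue, which is itself the warning signal.
      return "contact_request_queue";
    }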
| ▲ | zokier 5 days ago | parent [-] | | You don't need complex token arrangements for this. You can just filter emails based on their from addresses. | | |
| ▲ | JimDabell 5 days ago | parent [-] | | Unfortunately, it’s not that simple. It’s extremely common for the same organisation to send emails from different addresses, different domains, and different servers, for many different reasons. | | |
| ▲ | waynesonfire 4 days ago | parent [-] | | You can just filter emails based on their from addresses. | | |
| ▲ | JimDabell 4 days ago | parent [-] | | So if an organisation emails you from no-reply@notifications.example.com, mailing-list@examplemail.com, and bob.smith@examplecorp.com, and the phisher emails you from support@example.help, which filter based on their from addresses makes all the legitimate ones show up as the same sender while excluding the phishing email? | | |
| ▲ | artemisart 4 days ago | parent | next [-] | | Why should we expect companies to be able to reuse the correct token if they can't coordinate on using a single domain in the first place? | | |
| ▲ | JimDabell 4 days ago | parent [-] | | Your assumption that they use more than one domain by accident due to a lack of coördination is not correct. Separating, e.g. your product email from your mailing list email from your corporate email has a number of benefits. Anyway, I already mentioned a solid incentive for them to use the correct token. Go back and read my earlier comment. | | |
| |
| ▲ | zahlman 4 days ago | parent | prev [-] | | > which filter based on their from addresses makes all the legitimate ones show up as the same sender while excluding the phishing email? This is the wrong question. The right question is: what should we do about the fact that the organization has such terrible security practice? And the answer is: call them on the phone, and tell them that you will not do business with them until they fix their shit. | | |
| ▲ | jve 4 days ago | parent | next [-] | | You're not doing business with NPM by pushing packages there. And who is going to do anything about fixing their stuff when you pay them a mere subscription fee? | |
| ▲ | cindyllm 4 days ago | parent | prev [-] | | [dead] |
|
|
|
|
|
| |
| ▲ | zokier 5 days ago | parent | prev [-] | | The same problem applies to GPG. If companies cannot manage to use consistent from addresses, do you really expect them to do any better with GPG key management? "All legitimate npm emails are signed with GPG key X" and "All legitimate npm emails come from @npmjs.com" are equally strong statements. |
| |
| ▲ | vel0city 5 days ago | parent | prev [-] | | There's little reason to think these emails didn't pass SPF/DKIM. They probably "legitimately" own their npmjs[.]help domain and whatever server they used to send the emails is probably approved by them to send for that domain. | | |
|
|
| ▲ | neilv 4 days ago | parent | prev | next [-] |
> This is critical infrastructure, and it gets compromised way too often.

Most times that I go to use some JS, Python, or (sometimes) Rust framework, I get a sinking feeling as I see a huge list of dependencies scroll by. I know that it's a big pile of security vulnerabilities and supply-chain attack risk. Web development documentation that doesn't start with `npm install` seems rare now. Then there are the 'open source' mobile app frameworks that push you to use the framework on your workstation with some vendor's Web platform tightly in the loop, which all your code flows through.

Children, who don't know how things work, will push any button. But experienced software engineers should understand the technology, the business context, and the real-world threat context, and at least have an uneasy, disapproving feeling every time they work on code like this. And in some cases -- maybe in all cases that aren't a fly-by-night, an investment scam, or a hobby project on scratch equipment -- software engineers should consider pushing back against engaging in irresponsible practices that they know will probably result in compromise. |
| |
| ▲ | cjonas 4 days ago | parent [-] | | What does having an "uneasy disapproving feeling" actually solve? | | |
| ▲ | neilv 4 days ago | parent [-] | | The next sentence is one of the conclusions it might lead to. |
|
|
|
| ▲ | jonplackett 4 days ago | parent | prev | next [-] |
One issue is that many institutions - banks, tech giants - still send ridiculously spammy-looking emails asking you to click a link and go verify something. All these actions are teaching people to be dumb, and make it more likely they'll fall for a scam because the pattern has been normalized. |
|
| ▲ | thayne 5 days ago | parent | prev | next [-] |
> Of course that's all moot if the attacker also changes the email address.

Maybe don't allow changing the email address right after changing 2FA? And if the email is changed, send an email to the original address allowing you to dispute the change. |
|
| ▲ | chatmasta 4 days ago | parent | prev | next [-] |
DuckDB is not critical infrastructure, and I don't even think these billion-download packages are critical infrastructure. In software everything can be rolled back, and that's exactly what happened here. Yes, we were lucky that someone caught this rather sloppy exploit early, and (you can verify via the wallet addresses) the attacker didn't make any money from it. And it could certainly have been worse. But I think calling DuckDB "critical infrastructure" is just a bit conceited.

As an industry we really overestimate the importance of our software, which can be deleted when it's broken. We take ourselves way too seriously. In any worst-case scenario, a technical problem can be solved with a people solution.

If you want to talk about critical infrastructure, then the xz backdoor was the closest we've caught to affecting it. And what came of that backdoor? Nothing significant… I suppose you could say there might be 100 xz-like backdoors lurking in our "critical infrastructure" today, but at least as long as they're idle, it's not actually a problem. Maybe one day China will invade Taiwan and we'll see just how compromised our critical infrastructure has actually been this whole time… |
|
| ▲ | progx 5 days ago | parent | prev | next [-] |
TRUE! A simple self-defined word included in every email, and you would see at once whether the mail is fake or not. |
|
| ▲ | egorfine 5 days ago | parent | prev [-] |
> You can't rely on people not falling for phishing 100% of the time

1. I genuinely don't understand why.

2. If it is true that people are the failing factor, then nothing is going to help. Hardware keys? No problem, a human will use the hardware key to sign a malicious action. |
| |
| ▲ | tgv 5 days ago | parent | next [-] | | > 1. I genuinely don't understand why.

You never make a mistake? Never ever? It's a question of numbers. If the likelihood of making a mistake is 1 in 10,000 emails, send out links to 10,000 package maintainers and you've got a 63% chance of someone making that mistake. | |
| ▲ | chrisweekly 5 days ago | parent | next [-] | | Your point is completely valid.
Tangent: in your example, what calculation led to "63%"? | | |
| ▲ | theanonymousone 5 days ago | parent [-] | | 1 - (0.9999)^10000. I trust the user did this calculation. I didn't. | |
| ▲ | tgv 5 days ago | parent [-] | | That's indeed the formula. The .9999 is (1 - 1/10000), 1/10000 being the likelihood. It would perhaps have been clearer if I had chosen two different numbers... |
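For anyone following along, it's the complement rule, and with p = 1/n it converges on 1 - 1/e ≈ 63.2% as n grows:

    // chance that at least one of n = 10,000 maintainers slips,
    // each with independent probability p = 1/10,000
    const p = 1 / 10_000;
    const n = 10_000;
    console.log(1 - Math.pow(1 - p, n)); // ≈ 0.6321, the quoted 63%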
|
| |
| ▲ | egorfine 5 days ago | parent | prev [-] | | Then hardware 2FA won't help. | | |
| ▲ | smw 5 days ago | parent | next [-] | | This seems to be a common misunderstanding. The major difference between passkeys/hardware 2FA (FIDO2/YubiKeys) and TOTP/SMS/email solutions is that the passkey/YubiKey _also_ securely validates the site it's communicating with before responding, making traditional phishing attacks all but impossible. |
| ▲ | tuckerman 5 days ago | parent | prev [-] | | Hardware 2FA, with something like passkeys (or even passkeys with software tokens), _would_ prevent this as they are unique to the domain by construction so cannot be accidentally phished (unlike TOTP 2FA). |
|
| |
| ▲ | elric 5 days ago | parent | prev | next [-] | | > 1. I genuinely don't understand why. It's a war of attrition. You can keep bombarding developers with new and clever ways of trying to obtain their credentials or get them to click on some link while signed in. It only has to succeed once. No one is 100% vigilant all the time. If you think you're the exception, you're probably deluding yourself. There's something broken in a system where one moment of inattention by one person can result in oodles of people ending up with compromised software, and I don't think it's the person that's broken. | | |
| ▲ | kentm 4 days ago | parent | next [-] | | > where one moment of inattention by one person

I'll get a lot of pushback for this, but the main problem is ecosystems that encourage using packages published by one person. I call these "some person with a GitHub" packages, and I typically go through codebases to remove these dependencies specifically because of this threat vector.

Packages that are developed by a team, with multiple code reviewers and a process, are still at risk, don't get me wrong. But the risk is much lower if one person does not have the power to unilaterally merge a PR, and more so if the package is backed by an organization that has multiple active devs and processes for reviews.

If you do need to depend on these one-person packages, I'd recommend forking and carefully merging in changes, or pinning versions and manually reviewing all commits before upgrading. That's probably intractable for a lot of projects, but it's honestly something that we as developers need to fix by raising the bar for what dependencies we include. |
| ▲ | egorfine 5 days ago | parent | prev [-] | | Then see #2: there is no way to prevent humans from actually performing detrimental actions, hardware keys or not. | | |
| ▲ | vel0city 5 days ago | parent [-] | | This specific attack (and many others like it) would absolutely have been foiled by U2F or passkeys. These authors would have been incapable of giving the adversary any useful credential to impersonate them, by the very nature of how these systems work. | |
|
| |
| ▲ | MitPitt 5 days ago | parent | prev | next [-] | | Removing humans will help | | | |
| ▲ | InsideOutSanta 5 days ago | parent | prev [-] | | > If it is true that people are the failing factor, then nothing is going to help Nothing will reduce incidents to 0, but many things can move us closer to 0. |
|