snickerbockers 8 hours ago

>Running npm install is not negligence. Installing dependencies is not a security failure. The security failure is in an ecosystem that allows packages to run arbitrary code silently.

No, your security failure is that you use a package manager that allows third parties to push arbitrary code into your product with no oversight. You only have "security" to the extent that you can trust the people who control those packages to act both competently and in good faith ad infinitum.

Also the OP seemingly implies credentials are stored on-filesystem in plaintext but I might be extrapolating too much there.

deepsun 8 hours ago | parent | next [-]

Same thing with IDE plugins. At least some IDEs come full-featured from the manufacturer, but I couldn't get on with VS Code, as for every small feature I had to install some random plugin (popular, maybe, but still developed by who-knows-who).

willvarfar 4 hours ago | parent | next [-]

Many browser extension authors have talked openly about being approached to sell their extension or insert malicious code, and presumably many others have taken the money and not told us about it. It seems likely there are IDE extensions doing, or about to do, the same thing...

packtreefly 2 hours ago | parent | prev [-]

It's painful, but I've grown distrustful enough of the ecosystem that I disable updates on every IDE plugin not maintained by a company with known-adequate security controls. I review the source code of plugin changes before installing updates, and typically opt out unless something is broken.

It's unclear to me if the code linked on the plugin's description page is in any way guaranteed to be the code that the IDE downloads.

The status quo in software distribution is simultaneously convenient, extraordinarily useful, and inescapably fucked.

majormajor 4 hours ago | parent | prev | next [-]

> Running npm install is not negligence. Installing dependencies is not a security failure. The security failure is in an ecosystem that allows packages to run arbitrary code silently.

This is wildly circular logic!

"One person using these tools isn't bad security practice, the problem is that EVERYONE ELSE ["the ecosystem"] uses these tools and doesn't have higher standards!"

It should be no shock to anyone at this point that huge chunks of common developer tools have very poor security profiles. We've seen stories like this many times.

If you care, you need to actually care!

perching_aix 2 hours ago | parent [-]

So do you actually agree or disagree that there's something wrong with npm? It reads as if you were playing both sides, just to land on blaming the individual each time.

Even if this was actually some weirdly written plea for shared responsibility, surely it makes sense that in a hierarchy one would prioritize trying to fix things upstream, closer to the root, rather than downstream, closer to the leaves, doesn't it?

> This is wildly circular logic!

They're very clearly implying a semantic disagreement there, not making a logical mistake.

elif 7 hours ago | parent | prev | next [-]

It wasn't in their product. It was just on a dev's machine.

hnlmorg 7 hours ago | parent [-]

I think the OP is aware of that and I agree with them that it’s bad practice despite how common it is.

For example with AWS, you can use the AWS CLI to sign in, and that goes through the HTTPS auth flow to provide you with temporary access keys. Which means:

1. You don’t have any access keys in plain text

2. Even if your env vars are also stolen, those AWS keys expire within a few hours anyway.
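
Roughly, that flow looks like this with AWS CLI v2 (the profile name here is made up; the SSO start URL, account and role come from your own org's setup):

    # one-time setup: prompts for the SSO start URL, account and role
    aws configure sso --profile my-dev

    # day-to-day: opens the browser auth flow and caches short-lived credentials
    aws sso login --profile my-dev

    # subsequent calls use the cached ephemeral credentials
    aws s3 ls --profile my-dev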

If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted. There are numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell, otherwise you’ll have those keys in plain text in your .history file.
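
A rough sketch of the GPG route, with made-up paths and names (assumes you already have a GPG key pair):

    # encrypt the key once; paste it on stdin so it never touches .history
    gpg --encrypt --recipient you@example.com --output ~/.secrets/api_key.gpg

    # decrypt on demand into an env var for the current shell only
    export API_KEY=$(gpg --decrypt --quiet ~/.secrets/api_key.gpg)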

I will agree that it does take effort to get your cloud credentials set up in a convenient way (easy to access, but without those access keys in plain text). But if you’re doing cloud stuff professionally, like the devs in the article, then you really should learn how to use these tools.

robomc 5 hours ago | parent | next [-]

> If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted. There’s numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell otherwise you’ll then have those keys in plain text in your .history file.

This doesn't really help for a supply chain attack though, because you're still going to need to decrypt those keys for your code to read at some point, and the attacker has visibility on that, right?

Like the shell isn't the only thing the attacker has access to, they also have access to variables set in your code.

majormajor 4 hours ago | parent | next [-]

It's certainly a smaller surface that could help. For instance, a compromised dev dependency that isn't used in the production build would not be able to get to secrets for prod environments at that point. If your local tooling for interacting with prod stuff (for debugging, etc) is set up in a more secure way that doesn't mean long-lived high-value secrets staying on the filesystem, then other compromised things have less access to them. Add good, phishing-resistant 2FA on top, and even with a keylogger to grab your web login creds for that AWS browser-based auth flow, an attacker couldn't re-use it remotely.

(And that sort of ephemeral-login-for-aws-tooling-from-local-env is a standard part of compliance processes that I've gone through.)

hnlmorg 4 hours ago | parent | prev [-]

I agree it doesn’t keep you completely safe. However, scanning the file system for plain text secrets is significantly easier than the alternatives.

For example, for vars to be read, you’d need the compromised code to be part of the same project. But if you scan the file system, you can pick up secrets for any project written in any language, even those which differ from the code base that pulled the compromised module.
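
To illustrate how cheap that scan is (the pattern is just the well-known AWS access key ID prefix; adapt as you like):

    # list every file under $HOME containing something shaped like an AWS access key ID
    grep -rEl 'AKIA[0-9A-Z]{16}' ~ 2>/dev/null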

This example applies directly to the article; it wasn’t their core code base that ran the compromised code but instead an experimental repository.

Furthermore, we can see from these supply chain attacks that they do scan the file system. So we do know that encrypting secrets adds a layer of protection against the attacks happening in the wild.

In an ideal world, we’d use OIDC everywhere and not need hardcoded access keys. But in instances where we can’t, encrypting them is better than not.

cyberax 3 hours ago | parent | prev [-]

> 1. You don’t have any access keys in plain text

That's not correct. The (ephemeral) keys are still available. Just do `aws configure export-credentials --profile <YOUR_OIDC_PROFILE>`

Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.

You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.

LtWorf 7 hours ago | parent | prev [-]

> Also the OP seemingly implies credentials are stored on-filesystem in plaintext but I might be extrapolating too much there.

Doesn't really matter, if the agent is unlocked they can be accessed.

johncolanduoni 5 hours ago | parent [-]

This is not strictly true - most OS keychain stores have methods of authenticating the requesting application before remitting keys (signatures, non-user-writable paths, etc.), even if it's running as the correct user. That said, it requires careful design on the part of the application (and its install process) to not allow a non-elevated application to overwrite some part of the trusted application and get the keys anyway. macOS has the best system here in principle with its bundle signing, but most developer tools are not in bundles so it's of limited utility in this circumstance.
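
As a rough illustration on macOS (service, account and app path are made up), the security CLI lets you mark which application may read an item without prompting:

    # store a token, allowing only the named app to read it silently
    security add-generic-password -s my-cloud-token -a dev@example.com \
        -w 's3cr3t-value' -T /Applications/MyTool.app

    # any other process asking for it gets a user-facing prompt instead
    security find-generic-password -s my-cloud-token -w

(Passing the secret with -w like this puts it in your shell history, so in practice you'd add it another way; this is just to show the -T access-control part.)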

michaelt 3 hours ago | parent [-]

> This is not strictly true - most OS keychain stores have methods of authenticating the requesting application before remitting keys (signatures, non-user-writable paths, etc.), even if it's running as the correct user.

Isn't that a smartphone-and-app-store-only thing?

As I understand it, no mainstream desktop OS provides the capabilities to, for example, protect a user's browser cookies from a malicious tool launched by that user.

That's why e.g. PC games ship with anti-cheat mechanisms - because PCs don't have a comprehensive attested-signed-code-only mechanism to prevent nefarious modifications by the device owner.

acdha 3 hours ago | parent [-]

> As I understand it, no mainstream desktop OS provides the capabilities to, for example, protect a user's browser cookies from a malicious tool launched by that user.

macOS sandboxing has been used for this kind of thing for years. Open a terminal window on a new Mac and try to open the user’s photo library, Desktop, iCloud documents, etc., and you’ll trigger a permissions prompt.
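
For instance, on a machine where Terminal hasn’t been granted the relevant permission yet, even this triggers a prompt:

    # first access to a TCC-protected folder pops a permission dialog
    ls ~/Desktop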

michaelt 2 hours ago | parent [-]

Interesting, it's a few years since I've used a Mac.

Descriptions of this stuff online are pretty confusing. Apparently there's an "App Sandbox" and also "Transparency, Consent, and Control" - I assume from your mention of the photo library you're describing the latter?

How does this protection interact with IDEs? For some operations conducted in an IDE, like checking out code and collecting dependencies, the user grants the software access to SSH keys, artifact repo credentials and suchlike. But unsigned code can also be run as a child process of the IDE - such as when the user compiles and runs their code.

How does the sandboxing protection interact with the IDE and its subprocesses, to ensure only the right subprocesses can access credentials?