jesse_dot_id 2 days ago

> How many developers do you think knew that checkbox existed? How many assumed their database credentials and API keys were encrypted by default?

If I don't see asterisks, I'm not hitting save on a field with a secret in it. Maybe they were setting them programmatically? They should definitely still be looking to pass some kind of secret flag, though. This is a weird problem for a company like Vercel to have.

apgwoz 2 days ago | parent | next [-]

You pretty much have to assume someone is going to put sensitive data in an input like this. Encryption by default is the only sensible choice.

lemagedurage 2 days ago | parent [-]

But the encrypted API key doesn't work, it needs to be decrypted first. Let's give the server access to the private key so it can decrypt the API key. We can do this by putting the private key in an env var. But now the private key is unencrypted. Ah, it doesn't work.

apgwoz 2 days ago | parent [-]

You’re thinking too much. When you run the app, the system decrypts the secrets and makes them available as env vars (or some other mechanism).

In an admin ui, you list the names of secrets only, and provide a “reveal” or a “replace” on each one. They are never decrypted unless explicitly asked for.
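That flow can be sketched in a few lines. To be clear, this is a toy: the "cipher" below is an HMAC-derived keystream XOR purely for illustration (a real platform would use an authenticated cipher like AES-GCM), and all the names here are invented, not anyone's actual API.

```python
import hmac, hashlib

def _keystream_xor(key: bytes, name: bytes, data: bytes) -> bytes:
    # Toy cipher: XOR with an HMAC-SHA256 keystream, keyed per secret name.
    # Illustration only -- real systems would use AES-GCM or similar.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, name + counter.to_bytes(4, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

MASTER_KEY = b"platform-held key, never given to the app"

# Secrets sit encrypted at rest, keyed by name.
store = {"API_KEY": _keystream_xor(MASTER_KEY, b"API_KEY", b"sk-live-12345")}

def launch_env(store):
    """At launch, the platform decrypts everything into the app's env vars."""
    return {name: _keystream_xor(MASTER_KEY, name.encode(), blob).decode()
            for name, blob in store.items()}

def admin_listing(store):
    """The admin UI only ever sees names; values stay opaque until 'reveal'."""
    return sorted(store)

print(admin_listing(store))          # ['API_KEY'] -- names only, no values
print(launch_env(store)["API_KEY"])  # sk-live-12345, inside the running app
```

The point is that decryption happens in exactly two places: process launch, and an explicit per-secret "reveal" action, never in a bulk listing.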

Is this perfect? Absolutely not. The key is controlled by the company, but it can be derived in a manner that doesn’t allow for the dump of everything if it’s leaked.
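One common shape for that derivation (my sketch, not necessarily what any given platform does): derive an independent key per project from a master key with HMAC, HKDF-style, so compromising one derived key exposes only that project's secrets while the master stays inside a key-management service.

```python
import hmac, hashlib

def project_key(master: bytes, project_id: str) -> bytes:
    # One-block HKDF-Expand-style derivation: independent per-project keys
    # from a single master. A leaked derived key only exposes that project;
    # the master never leaves the KMS. (Hypothetical layout, not any
    # vendor's documented design.)
    return hmac.new(master, b"secrets-v1|" + project_id.encode(),
                    hashlib.sha256).digest()

master = b"\x00" * 32  # in reality: random, held only by the platform / HSM
k_a = project_key(master, "project-a")
k_b = project_key(master, "project-b")
assert k_a != k_b                              # keys are independent
assert k_a == project_key(master, "project-a") # and deterministic per project
```

Of course, if the master itself leaks, everything derived from it is derivable, which is why it belongs in an HSM or KMS rather than on any app box.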

lemagedurage a day ago | parent | next [-]

My gripe is that, if additional authentication isn't required for deployments or SSH access, whoever has access to the admin UI can still get onto the box and extract all the secrets, just with extra steps. There's usually no real security boundary between "admin UI controls the box" and "box requires secrets in plain text".

I still like the approach, but I'm afraid that it feels more secure than it is, and people should be aware of that.

apgwoz a day ago | parent [-]

It’s absolute baseline, but yes, it relies entirely on the platform’s permissions model, the administrator who assigns permissions, and the application authors to not create vectors for env var dumps. :)

But honestly, if you're in the container and the application running in it can get secrets, so can a shell user.

_Maybe_ there’s a model where the platform exposes a Unix domain socket and checks the PID, user, group of the connection, and delivers secrets that way? This has its problems, too, like it being non-standard, only possible in some scenarios and otherwise fallible… but better than nothing? If you reap the container when that process dies, you can’t race for the same PID, at least. I dunno
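For what it's worth, Linux already supports that check via SO_PEERCRED on a Unix domain socket: the server can read the PID/UID/GID of the process on the other end before handing anything over. A minimal sketch (Linux-only; the "same-user" policy and the socketpair demo are my invention):

```python
import os, socket, struct

def peer_credentials(conn: socket.socket):
    """Return (pid, uid, gid) of the process on the other end of a Unix
    domain socket. Linux-only: SO_PEERCRED yields a struct ucred of 3 ints."""
    creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    return struct.unpack("3i", creds)

# Demo with a socketpair standing in for the platform's listening socket.
server_side, client_side = socket.socketpair(socket.AF_UNIX)
pid, uid, gid = peer_credentials(server_side)

# A secrets broker would check these against its policy before replying.
if uid == os.getuid():  # hypothetical policy: same-user processes only
    server_side.sendall(b"API_KEY=sk-...\n")
msg = client_side.recv(64)
server_side.close(); client_side.close()
```

As noted above, this doesn't close the PID-reuse race on its own; reaping the container when the authorized process dies is what prevents another process from inheriting its PID.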

kstrauser 2 days ago | parent | prev [-]

My understanding is this is exactly how Vercel works. The users hadn’t checked the “don’t ever reveal, even to me” box next to the sensitive values. If they had, the attacker would only have been able to see the names of the variables and not their values.

apgwoz 2 days ago | parent [-]

Ah. The article has since been updated to point out that it’s not plaintext, but encrypted at rest (which would be expected). OK.

SOLAR_FIELDS 2 days ago | parent | prev [-]

Do you ask a bridge engineer if they forgot to reinforce the supports when they built the bridge? Even when I didn't know about security this was a table stakes thing. People saving sensitive things in plaintext are upset that their poor practices came back to bite them. Now, at the risk of sounding like I'm victim blaming here, Vercel is also totally bearing some responsibility for this insanity. But come on. FAFO and all that.