londons_explore 2 hours ago
Does this actually work? I assume an AI that wanted to read a secret and found it wasn't in .env would simply put print(os.environ) in the code and run it... That's certainly what I do as a developer when trying to debug something that has complex deployment and launch scripts...
snowhale 34 minutes ago | parent | next
yeah, the threat model matters a lot here. this is useful protection against accidental leaks -- logs, CI output, exceptions that print env context. an AI agent running arbitrary code can definitely just dump os.environ, so this isn't stopping intentional exfiltration. for that you'd want actual sandbox isolation with no env passthrough. different problems.
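a minimal sketch of the difference, assuming a hypothetical API_KEY secret: the same env-dumping code leaks the secret when the child inherits the parent's environment, but sees nothing when launched with env={}:

```python
import os
import subprocess
import sys

# Parent process holds a (hypothetical) secret in its environment.
os.environ["API_KEY"] = "super-secret"

dump_env = "import os; print(os.environ.get('API_KEY'))"

# Untrusted code with full env passthrough can read the secret.
leaked = subprocess.run(
    [sys.executable, "-c", dump_env],
    capture_output=True, text=True,
).stdout.strip()

# The same code launched with an empty environment sees nothing.
isolated = subprocess.run(
    [sys.executable, "-c", dump_env],
    capture_output=True, text=True, env={},
).stdout.strip()

print(leaked)    # super-secret
print(isolated)  # None
```

this is only env isolation, of course -- a real sandbox would also restrict files, network, and so on.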
PufPufPuf 2 hours ago | parent | prev
Good point. You would need to inject the secrets in an inaccessible part of the pipeline, like an external proxy.
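A toy sketch of that idea, with hypothetical names throughout: the proxy process is the only one holding UPSTREAM_KEY, and the sandboxed client only ever sees the proxy's local address. In a real deployment the handler would forward requests upstream with the Authorization header attached; here it just confirms the secret was injected proxy-side:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical secret: lives only in the proxy process, never in the
# sandbox's environment or code.
UPSTREAM_KEY = "sk-real-key"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real proxy would forward upstream with
        # "Authorization: Bearer <UPSTREAM_KEY>" attached here.
        body = json.dumps({"authorized": UPSTREAM_KEY is not None}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ProxyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "sandboxed" client: no env vars, no key -- just the proxy address.
port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/v1/chat")
result = json.loads(resp.read())
print(result)  # {'authorized': True}
server.shutdown()
```

Dumping os.environ inside the client buys the attacker nothing, since the credential never enters the client's process.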