0xferruccio a day ago
The primary exfiltration vector for LLMs is making network requests, typically by loading images whose URLs carry sensitive data as query parameters. As Claude Code increasingly uses browser tools, we may need to move away from .env files to something encrypted, kind of like Rails credentials, but without the secret key in the .env.
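A minimal sketch of that direction in Python, assuming a Fernet-encrypted credentials file plus a master key kept outside the project tree (and outside anything the agent can read). The paths, file names, and "myapp" config dir are made up for illustration:

    # Sketch: load secrets from an encrypted file instead of a plaintext .env.
    # Requires `pip install cryptography`; file locations are hypothetical.
    import json
    from pathlib import Path
    from cryptography.fernet import Fernet

    # The master key lives outside the repo, e.g. in the user's config dir or an OS keychain.
    MASTER_KEY_PATH = Path.home() / ".config" / "myapp" / "master.key"
    CREDENTIALS_PATH = Path("config/credentials.enc")  # safe to commit: it is ciphertext

    def load_credentials() -> dict:
        key = MASTER_KEY_PATH.read_bytes()
        return json.loads(Fernet(key).decrypt(CREDENTIALS_PATH.read_bytes()))

    def save_credentials(creds: dict) -> None:
        key = MASTER_KEY_PATH.read_bytes()
        CREDENTIALS_PATH.parent.mkdir(parents=True, exist_ok=True)
        CREDENTIALS_PATH.write_bytes(Fernet(key).encrypt(json.dumps(creds).encode()))

    if __name__ == "__main__":
        # One-time setup: generate a key and encrypt an initial set of secrets.
        if not MASTER_KEY_PATH.exists():
            MASTER_KEY_PATH.parent.mkdir(parents=True, exist_ok=True)
            MASTER_KEY_PATH.write_bytes(Fernet.generate_key())
            save_credentials({"DATABASE_URL": "postgres://..."})
        print(list(load_credentials().keys()))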
SahAssar a day ago
So you are going to take the untrusted tool that kept leaking your secrets, keep the secrets away from it, but still use it to code the thing that uses the secrets? Are you actually reviewing the code it produces? In 99% of cases that's a "no" or a soft "sometimes".
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
xyzzy123 17 hours ago
One tactic I've seen used in various situations is running a proxy outside the sandbox that augments requests with credentials, secrets, etc. It doesn't help in the case where the LLM is processing genuinely sensitive data, of course.
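Roughly what that looks like as a stdlib-only Python sketch: a small forward proxy running outside the sandbox holds the real token and stamps it onto outgoing requests, so code inside the sandbox only ever talks to localhost. The upstream host, header, port, and env var names here are illustrative, not anyone's actual setup:

    # Sketch: a tiny forwarding proxy that injects a secret the sandboxed code never sees.
    import os
    import urllib.error
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "https://api.example.com"   # real API the sandbox cannot reach directly
    SECRET = os.environ["API_TOKEN"]       # known only to the proxy process, outside the sandbox

    class InjectingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the path to the upstream, adding the credential on the way out.
            req = urllib.request.Request(UPSTREAM + self.path)
            req.add_header("Authorization", f"Bearer {SECRET}")
            try:
                with urllib.request.urlopen(req) as upstream:
                    body = upstream.read()
                    self.send_response(upstream.status)
                    self.send_header("Content-Type",
                                     upstream.headers.get("Content-Type", "application/octet-stream"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except urllib.error.HTTPError as e:
                self.send_response(e.code)
                self.end_headers()

    if __name__ == "__main__":
        # The sandboxed agent is pointed at http://127.0.0.1:8899 instead of the real API.
        HTTPServer(("127.0.0.1", 8899), InjectingProxy).serve_forever()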