xmprt 4 hours ago
> When a question touches restricted data — student PII, sensitive HR information — the agent doesn’t just refuse. It explains what it can’t access and proposes a safe reformulation. "I can’t show individual student names, but here’s the same analysis using anonymized IDs."

This part is scary. It implies that if I'm in a department that shouldn't have access to this data, the AI will still run the query for me and then do some post-processing to "anonymize" the data. That isn't how security is supposed to work... did we learn nothing from SQL injection?
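To make the worry concrete, here's a minimal sketch of the two patterns (all table names, roles, and data are made up). Scrubbing after the fact means the raw PII has already reached the agent; column-level enforcement means it never leaves the data layer:

    # Hypothetical illustration; rows, roles, and columns are invented.
    STUDENT_ROWS = [
        {"student_id": "S-001", "name": "Alice", "gpa": 3.9},
        {"student_id": "S-002", "name": "Bob",   "gpa": 2.7},
    ]

    # Pattern 1 (the scary reading): the agent reads everything with its
    # own privileges, then "anonymizes" in post-processing. The raw names
    # have already crossed the trust boundary into the model's context.
    def query_then_scrub():
        rows = STUDENT_ROWS  # service account sees all columns
        return [{"student_id": r["student_id"], "gpa": r["gpa"]} for r in rows]

    # Pattern 2 (how security is supposed to work): the data layer
    # enforces the caller's entitlements, so restricted columns never
    # reach the agent at all.
    ALLOWED_COLUMNS = {
        "registrar": {"student_id", "name", "gpa"},
        "analyst":   {"student_id", "gpa"},
    }

    def query_as(role):
        cols = ALLOWED_COLUMNS.get(role, set())
        return [{k: v for k, v in r.items() if k in cols} for r in STUDENT_ROWS]

    print(query_as("analyst"))  # names never appear in the result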
stephbook 2 hours ago
I see two vectors here:

- The bot giving out PII by accident. You ignore it and report it.
- You trying to fool the bot into giving you PII you're not supposed to have. But you've created an audit trail of your 100 failed prompt injections (see the logging sketch below). The company fires you.

This isn't public-facing, open to anyone. This is more like a shared printer in the office.
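For the second vector, a sketch of what that audit trail could look like; the logger name, fields, and helper are assumptions, not the product's actual logging:

    import logging

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("agent.audit")

    def handle_request(user, requested_columns, allowed_columns):
        denied = set(requested_columns) - set(allowed_columns)
        if denied:
            # Every refused attempt is recorded with the requester's
            # identity; 100 failed prompt injections become 100 of these.
            audit_log.warning("denied user=%s columns=%s", user, sorted(denied))
            return None
        audit_log.info("served user=%s columns=%s", user, sorted(requested_columns))
        return requested_columns  # stand-in for the real query

    handle_request("mallory", ["name", "gpa"], ["student_id", "gpa"])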
thunfischbrot 4 hours ago
In the strongest interpretation, it would offer only data the user is allowed to access. Why do you assume that, having implemented a feature to prevent PII from being accessed, they would then turn around and return data the user isn't supposed to have?
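That strongest interpretation is mechanically simple if the agent adopts the requesting user's database role before running anything, so the database, not a post-processor, decides what comes back. A sketch against Postgres with psycopg2; the role names and connection are assumptions:

    import psycopg2
    from psycopg2 import sql

    def run_for_user(conn, db_role, query, params=()):
        with conn.cursor() as cur:
            # Adopt the caller's role: column GRANTs and row-level
            # security policies now apply to everything that follows.
            cur.execute(sql.SQL("SET ROLE {}").format(sql.Identifier(db_role)))
            try:
                cur.execute(query, params)
                return cur.fetchall()
            finally:
                cur.execute("RESET ROLE")

    # e.g. run_for_user(conn, "analyst_ro",
    #                   "SELECT student_id, gpa FROM grades")
    # If analyst_ro lacks SELECT on the name column, the database raises
    # an error instead of the agent seeing names it must then scrub.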