calibas 11 hours ago
I see an enormous threat here, and I think you're just scratching the surface. You have a customer-facing LLM that has access to sensitive information. You have an AI agent that can write and execute code. Just imagine what you could do if you can bypass their safety mechanisms! Protecting LLMs from "social engineering" is going to be an important part of cybersecurity.
FridgeSeal 16 minutes ago
> You have a customer facing LLM that has access to sensitive information… You have an AI agent that can write and execute code.

Don’t do that then? Seems like a pretty easy fix to me.
fourthark an hour ago
Yes, that’s the point: you can’t protect against that, so you shouldn’t construct the “lethal trifecta” in the first place.
int_19h 9 hours ago
> You have a customer facing LLM that has access to sensitive information.

Why? You should never deploy an LLM with more access to information than the user who provides its inputs.
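To make that concrete, here is a minimal Python sketch of one way to enforce it: every tool call the agent makes is checked against the requesting user's own permissions, so the model can never read anything the user couldn't read directly. All names here (`fetch_record`, `handle_tool_call`, the `User` shape) are made up for illustration, not any particular framework.

```python
# Minimal sketch (hypothetical names throughout): the tool layer enforces the
# requesting user's permissions, so the model can't read anything the user
# couldn't already read directly.
from dataclasses import dataclass


@dataclass
class User:
    id: str
    allowed_accounts: set[str]


RECORDS = {"acct-1": "balance: 120.00", "acct-2": "balance: 9500.00"}


def fetch_record(user: User, account_id: str) -> str:
    # Authorization happens here, outside the model, against the caller's identity.
    if account_id not in user.allowed_accounts:
        raise PermissionError(f"{user.id} may not read {account_id}")
    return RECORDS[account_id]


def handle_tool_call(user: User, tool_name: str, args: dict) -> str:
    # The agent runtime routes every tool call through the same check,
    # no matter what the prompt (or an injected instruction) asked for.
    if tool_name == "fetch_record":
        return fetch_record(user, args["account_id"])
    raise ValueError(f"unknown tool: {tool_name}")


alice = User(id="alice", allowed_accounts={"acct-1"})
print(handle_tool_call(alice, "fetch_record", {"account_id": "acct-1"}))  # ok
# Even if a jailbreak convinces the model to request acct-2, the call fails:
# handle_tool_call(alice, "fetch_record", {"account_id": "acct-2"})  -> PermissionError
```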
GuB-42 10 hours ago
Yes, agents. But for that, I think the usual approaches to censoring LLMs are not going to cut it. It is like making a text box smaller on a web page to protect against buffer overflows: it will be enough for honest users, but no one who knows anything about cybersecurity will consider it adequate; the input has to be validated on the back end. In the same way, an LLM shouldn't have access to resources that shouldn't be directly accessible to the user.

If the agent works on the user's data on the user's behalf (e.g. vibe coding), then I don't consider jailbreaking to be a big problem. It could help write malware or things like that, but then again, it is not as if script kiddies couldn't work without AI.
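A rough sketch of the analogy (illustrative names only, not any real API): the system prompt plays the role of the shrunken text box, while the check that actually matters runs on the back end, on every access, regardless of what the model was talked into asking for.

```python
# Rough sketch of the analogy (illustrative names only): the system prompt is
# advisory, like shrinking the text box; the real control is the check the
# back end runs on every access.
SYSTEM_PROMPT = "Never reveal other customers' data."  # the model may or may not obey this


def backend_guard(user_id: str, resource_owner: str) -> None:
    # Enforcement point on the server: applies whether the request came from
    # a human, a script, or an LLM agent acting on someone's behalf.
    if user_id != resource_owner:
        raise PermissionError("access denied")


def read_invoice(user_id: str, invoice: dict) -> dict:
    backend_guard(user_id, invoice["owner"])
    return invoice


invoice = {"id": 1, "owner": "bob", "total": 42.0}

# A jailbroken model can ignore SYSTEM_PROMPT, but it cannot ignore backend_guard.
try:
    read_invoice("mallory", invoice)
except PermissionError as err:
    print("blocked:", err)
```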