▲ TheCraiggers 6 days ago
> Hopefully the LLM vendors issue security statements shortly. If they don't, that'll be pretty damning.

Why would it be damning? Their products are no more culpable than Git or the filesystem. It's a piece of software installed on the computer whose job is to do what it's told to do. I wouldn't expect it to know that this particular prompt is malicious.
▲ CER10TY 6 days ago | parent | next [-]
Personally, I'd expect Claude Code not to have such far-reaching access across my filesystem if it only asks me for permission to work and run things within a given project.
▲ echelon 6 days ago | parent | prev [-]
Then safety and alignment are a farce and these are not serious tools. This is 100% within the responsibility of the LLM vendors.

Beyond the LLM, there is a ton of engineering work that can be put in place to detect this, monitor it, escalate, alert impacted parties, and thwart it. This is literally the impetus for funding an entire team or org within both of these companies to do this work.

Cloud LLMs are not interpreters. They are network connected and can be monitored in real time.