BadBadJellyBean 4 hours ago
Then that is also on me for using a tool that I can't control. I don't run my LLMs in a way where they can just do things without me signing off on it. It's not nearly as fast as letting it do its thing, but that sign-off step has kept it from doing stupid things many times. Giving up control is a decision, and the consequences of that decision are mine to carry. I can do my best to keep autonomous LLMs contained and safe, but if I am the one who deploys them, then I am the one to blame when they fail. That's why I don't do that.
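The sign-off workflow can be as simple as a wrapper that blocks on user confirmation before any tool call runs. A minimal sketch in Python, assuming a hypothetical run_tool executor rather than any particular agent framework:

    import subprocess

    def approve(action: str) -> bool:
        # Show the proposed action and block until the user signs off.
        answer = input(f"LLM wants to run: {action!r} -- allow? [y/N] ")
        return answer.strip().lower() == "y"

    def run_tool(command: list[str]) -> str:
        # Hypothetical executor: nothing runs without explicit approval.
        if not approve(" ".join(command)):
            return "denied by user"
        result = subprocess.run(command, capture_output=True, text=True)
        return result.stdout

Slower than full autonomy, but every action gets a human gate.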
locknitpicker 3 hours ago
> Then that is also on me for using a tool that I can't control.

That's a core trait of LLMs. Even the AI companies developing frontier models felt the need to put together whole test suites purposely designed to evaluate a model's propensity to subvert the user's intentions.

https://www.anthropic.com/research/shade-arena-sabotage-moni...

> Giving up control is a decision.

No, it is definitely not. It is only recently that frontier models started resorting to generating ad-hoc scripts as makeshift tools. They even generate scripts to apply changes to source files.