imiric | 4 hours ago
> There is currently no way to prevent this apart from not giving the LLM full control.

It will not delete what it cannot delete. But deleting something is just one action you might not want it to take.

The recent "agentic" craze is fueled by a narrative, pushed by companies and influencers alike, that the more access you give an LLM, the more useful it becomes. I think this is ludicrous for the same reasons as you, but evidently most people agree with it. We can blame users for misusing the tools and suggest that sandboxing is the way to go, but at the end of the day most people will favor convenience over anything else a reasonable person might find important.

So at what point do we start blaming the tools, and forcing "AI" companies to fix them? I certainly hope this happens before something truly catastrophic does.
BadBadJellyBean | 4 hours ago
I agree that the marketing is crazy. The dangers are not talked about nearly enough. Still, if I cut off my finger with a bandsaw, that is usually my fault: I didn't use the tool in a safe way. People have to learn how to use their tools safely. You wouldn't give an intern that much power on day one.