simonw 5 hours ago
The biggest difference on this front between a human and an LLM is accountability. You can hold a human accountable for their actions. If they consistently fall for phishing attacks, you can train or even fire them. You can apply peer pressure. You can grant them additional privileges once they prove themselves. You can't hold an AI system accountable for anything.
nradov 39 minutes ago | parent
You can hold the person (or corporate person) who owns or uses the LLM accountable for its actions. It's like how dogs aren't really accountable. But if you let your dog run loose and it mauls a toddler to death, then you'll probably be sued. Same thing. (Yes, I am aware this isn't a perfect analogy because a dangerous dog can be seized and destroyed. But that's an administrative procedure and really not the same as holding a person morally or financially accountable.)
Verdex 4 hours ago | parent
Recently, I've kind of been wondering if this is going to turn out to be LLM codegen's Achilles heel. Imagine some code component of critical infrastructure that costs the company millions per hour when it goes down, and it turns out the entire team is just a thin wrapper around an LLM. The infra goes down in a way the LLM can't fix, and now what would have been a few late nights is several months spent spinning up a new team. Sure, you can hold the team accountable by firing them. However, that's only a real threat to someone with actual technical know-how, because their reputation is damaged: they got fired doing such and such, so can we trust them to do it here? The person who faked it with an LLM just needs to find another domain where their reputation won't follow them and fake their way through until the next catastrophe.