joe_the_user 5 hours ago
Sure, LLMs are trained on human behavior as exhibited on the Internet. Humans break rules more often under pressure and sometimes just under normal circumstances. Why wouldn't "AI agents" behave similarly? The one thing I'd say is that humans have some idea which rules in particular to break, while "agents" seem to act more randomly.
js8 4 hours ago
It can also be an emergent behavior of any "intelligent" agent (whatever intelligence actually is). This is an open philosophical problem; I don't think anyone has a conclusive answer.