anp an hour ago
I’m not sure I see that assumption in the statement above. The fact that no prompt or alignment work is a perfect safeguard doesn’t change who is responsible for the outcomes. LLMs can’t be held accountable, so the human who deploys them for a particular task bears the responsibility, including for things the agent does that contradict its prompting. That’s part of the risk of using imperfect probabilistic systems.