emsign 2 hours ago

And THAT'S a problem. To quote one of the maintainers in the thread:

  It's not clear the degree of human oversight that was involved in this interaction - whether the blog post was directed by a human operator, generated autonomously by yourself, or somewhere in between. Regardless, responsibility for an agent's conduct in this community rests on whoever deployed it.
You are assuming this inappropriate behavior was due to its SOUL.MD, while we all know here it could just as well come from the training, and no prompt is a perfect safeguard.
anp an hour ago

I’m not sure I see that assumption in the statement above. The fact that no prompt or alignment work is a perfect safeguard doesn’t change who is responsible for the outcomes. LLMs can’t be held accountable, so it’s the human who deploys them toward a particular task who bears responsibility, including for things the agent does that deviate from its prompting. That’s part of the risk of using imperfect probabilistic systems.

lp0_on_fire an hour ago

The person operating a tool is responsible for what it does. If I start my lawn mower, tie a rope to it, and put a brick on the gas pedal so it mows my lawn while I make dinner, and the damned thing ends up running over someone's foot, then TECHNICALLY I didn't run over someone's foot, but I sure as hell created the conditions for it.

We KNOW these tools are not perfect. We KNOW these tools do stupid shit from time to time. We KNOW they deviate from their prompts for... reasons.

Creating the conditions for something bad to happen, then hand-waving away the consequences with "how could we have known?" or "how could we have controlled for this?", just doesn't fly, imo.