Remix.run Logo
acdha 17 hours ago

Think about if this happened in the real world. Like if I ran a book store, I’d expect some scammer to try to schmooze a discount but I’d also expect the staff to say no, refuse service, and call the police if they refused to leave. If the manager eventually said “okay, we’ll give you a discount” ultimately they would likely personally be on the hook for breaking company policy and taking a loss, but I wouldn’t be able to say that my employee didn’t represent my company when that’s their job.

Replacing the employee with a rental robot doesn’t change that: the business is expected to handle training and recover losses due to not following that training under their rental contract. If the robot can’t be trained and the manufacturer won’t indemnify the user for losses, then it’s simply not fit for purpose.

This is the fundamental problem blocking adoption of LLMs in many areas: they can’t reason and prompt injection is an unsolved problem. Until there are some theoretical breakthroughs, they’re unsafe to put into adversarial contexts where their output isn’t closely reviewed by a human who can be held accountable. Companies might be able to avoid paying damages in court if a chatbot is very clearly labeled as not not to be trusted, but that’s most of the market because companies want to lay off customer service reps. There’s very little demand for purely entertainment chatbots, especially since even there you have reputational risks if someone can get it to make a racist joke or something similarly offensive.