powera · 21 hours ago:
There's a difference between the chatbot "advertising" something and an hour-long manipulative conversation coaxing the chatbot into making up a fake discount code. Based on the OP's comments, if a human employee had given out the fake code, they could plausibly claim duress.
acdha · 17 hours ago:
Think about how this would play out in the real world. If I ran a bookstore, I'd expect the occasional scammer to try to schmooze a discount, but I'd also expect the staff to say no, refuse service, and call the police if the scammer refused to leave. If the manager eventually said "okay, we'll give you a discount," they would likely be personally on the hook for breaking company policy and taking a loss, but I wouldn't be able to claim that my employee didn't represent my company when that's their job.

Replacing the employee with a rented robot doesn't change that: the business is expected to handle training, and to recover losses caused by the robot ignoring that training under its rental contract. If the robot can't be trained and the manufacturer won't indemnify the user against losses, then it's simply not fit for purpose.

This is the fundamental problem blocking adoption of LLMs in many areas: they can't reason, and prompt injection is an unsolved problem. Until there are some theoretical breakthroughs, they're unsafe to put into adversarial contexts where their output isn't closely reviewed by a human who can be held accountable.

Companies might be able to avoid paying damages in court if a chatbot is very clearly labeled as not to be trusted, but that rules out most of the market, because companies want chatbots precisely so they can lay off customer service reps. There's very little demand for purely entertainment chatbots, especially since even there you carry reputational risk if someone can get the bot to make a racist joke or something similarly offensive.
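To make the "closely reviewed by a human who can be held accountable" point concrete, here's a minimal sketch of that pattern: the bot can only propose a discount, and nothing reaches the customer until a named employee signs off. All names and types here are hypothetical illustrations, not any real vendor's API.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: the LLM's output is a proposal,
# never a commitment. A named reviewer is recorded for accountability.

@dataclass
class DiscountProposal:
    conversation_id: str
    code: str
    percent_off: int
    rationale: str          # the bot's stated reason, kept for audit

@dataclass
class Decision:
    approved: bool
    reviewer: str           # the accountable human

def submit_for_review(proposal: DiscountProposal) -> None:
    """Park the proposal in a queue for a human agent instead of sending it to the customer."""
    print(f"[queue] {proposal.conversation_id}: {proposal.percent_off}% via {proposal.code}")

def respond_to_customer(proposal: DiscountProposal, decision: Decision) -> str:
    if not decision.approved:
        return "Sorry, we can't offer that discount."
    # Only at this point does the code become binding on the business.
    return f"Approved by {decision.reviewer}: use code {proposal.code} for {proposal.percent_off}% off."

# Usage: an adversarial conversation can still talk the bot into a bogus code,
# but the code never takes effect without human approval.
proposal = DiscountProposal("conv-42", "SAVE80", 80, "customer claims a prior promise")
submit_for_review(proposal)
decision = Decision(approved=False, reviewer="store_manager")
print(respond_to_customer(proposal, decision))
```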
szszrk · 20 hours ago:
If "an hour-long manipulative conversation" was possible at all, that is proof the company put an unsupervised, error-prone mechanism in place instead of real support. If that "difference" is so obvious to you (and you expect the mechanism to break at some point), why don't you expect the company to have noticed the same problem? And to simply... not put a bogus mechanism in place at all.

Edit: to be clear, I think the company should just cancel and apologize. And then take the bot down, or put better safeguards in place (good luck with that).
hshdhdhj4444 · 16 hours ago:
Umm… The human could have dropped off the conversation? Or escalated it to a manager? | ||