AnimalMuppet 6 days ago

Well... is a chatbot for customer service really all that much worse than a human who is not permitted to deviate from their script?

eviks 6 days ago | parent | next [-]

Certainly, because not deviating from the script also cuts off the infinite range of made-up nonsense a bot can hallucinate. And it's not as if the bot gains magic authority to fix the real issue just because it isn't bound by a script, so in that regard there is no upside.

rafabulsing 6 days ago | parent [-]

Chatbots != LLMs.

We've had chatbots for a long time before LLMs, and while they're of course much more limited, since you have to explicitly program everything they should be able to do, by that very virtue hallucination is a problem they do not have.

For this kind of customer service chat scenario, I find them much better than a free-form LLM trained on some internal docs.

(Though really, the ultimate solution is probably a hybrid one: an explicitly programmed conversation tree the user can go down, with an LLM decoding what the user says into one of the constrained options, so that if one of the options is "shipping issues", "my order is late" takes me there. Other forms of NLP can do that too, but LLMs would certainly shine in that application; a rough sketch follows.)
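A minimal sketch of that hybrid routing, in Python. Everything here is hypothetical: call_llm() is a placeholder for whatever completion API you actually use, and INTENTS stands in for the programmed branches of the conversation tree.

    # Hypothetical sketch: route free-form user text into one of a
    # fixed set of programmed intents, never past them.
    INTENTS = ["shipping issues", "billing", "returns", "talk to a human"]

    def call_llm(prompt: str) -> str:
        # Placeholder for a real completion call; swap in your provider.
        return "shipping issues"

    def route(user_message: str) -> str:
        prompt = (
            "Classify the customer message into exactly one of these "
            f"categories: {', '.join(INTENTS)}.\n"
            f"Message: {user_message!r}\n"
            "Answer with the category name only."
        )
        answer = call_llm(prompt).strip().lower()
        # Only an exact match against the programmed options is accepted;
        # anything else (including a hallucinated category) degrades to a
        # safe fallback instead of reaching the customer.
        return answer if answer in INTENTS else "talk to a human"

    print(route("my order is late"))  # -> "shipping issues"

The point is the last line of route(): the LLM only ever selects among branches that were explicitly programmed, so a hallucination can at worst trigger the fallback, not invent policy.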
