sillyfluke 3 days ago:
Thanks for the detailed domain-specific explanation. If we assume that some of the company's whale clients will end up in the call center, isn't it more probable that more competent human agents would handle those calls, whereas in the alternative scenario it's pretty much the same AI agent addressing the whale client as the regular customers?
doorhammer 3 days ago:
Yeah, if I were running a QA department I wouldn't let LLMs anywhere near actual customers for anything like resolving an issue directly. And this is just a guess, but it's not uncommon for whale customers like that to have their own dedicated account person, and I'd personally stick with that model. The use case where I'm like "huh, yeah, I could see that working well" is mostly sentiment analysis and call tagging--maybe actual summaries that humans might read, if I had a really well-designed context for the LLM to work within. Basically anything where you can live with an acceptable false positive/negative rate.
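To make the call-tagging idea concrete, here's a minimal sketch, assuming the OpenAI Python client; the model name, tag set, and output schema are all placeholders I made up, not anything from a real deployment:

```python
# Hypothetical sketch of LLM-based call tagging / sentiment analysis,
# assuming the OpenAI Python client. Model name, tag set, and output
# schema are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TAGS = ["billing", "cancellation", "technical_issue", "escalation_request"]

def tag_call(transcript: str) -> dict:
    """Classify one call transcript into sentiment plus topic tags.

    The output only feeds aggregate dashboards, so an occasional
    misclassification is tolerable -- the "acceptable false
    positive/negative rate" point above.
    """
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You tag customer-support call transcripts. Reply as JSON: "
                    '{"sentiment": "positive|neutral|negative", '
                    f'"tags": <subset of {TAGS}>}}'
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# Usage: tag_call("Customer: I was double-billed again this month...")
# -> e.g. {"sentiment": "negative", "tags": ["billing"]}
```

The point is the failure mode: a wrong tag just adds a little noise to a dashboard, whereas a wrong answer given directly to a whale client is a real incident.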