roywiggins 3 days ago

In the context of call centers in particular, I actually can believe that a moderately inaccurate AI model could be better on average than harried humans writing a summary after the call. Could a human do better by working carefully from a recording? Absolutely, but that's not what the AI needs to be compared against.

It just has to be as good as a call center worker spending 3-5 minutes working from their own memory of the call, not as good as the ground truth of the call. It will probably make weirder mistakes when it does make them, though.

sillyfluke 3 days ago

>in the context of call centers in particular I actually can believe that a moderately inaccurate AI model could be better on average than harried humans

You're free to believe that, of course, but you're assuming the very point that has to be proven. Not all fuck-ups are equal. Missing information is one thing, but writing literally the opposite of what was said is way higher on the fuck-up list. A human agent would be achieving an impressive level of incompetence if they kept repeating such a mistake, and would definitely have been jettisoned from the task after at most three strikes (assuming someone notices). But firing a specific AI agent that repeats such mistakes is out of the question for some reason.

Feel free to expand on why no amount of mistakes in AI summaries will outweigh the benefits in call centers.

trenchpilgrim 3 days ago

Especially humans whose jobs are performance-graded on how quickly they can start talking to the next customer.

Imustaskforhelp 3 days ago

Yeah, maybe that's fair in the current world we live in.

But the solution isn't to stop trusting the agents / customer service reps and use AI instead, just because their performance is graded on how quickly they can start talking to the next customer.

The solution is to change the economics so that workers are incentivized to write good summaries; maybe paying them more and not grading them that way would help.

I am imagining some company saying AI is good enough because they themselves are using the wrong grading technique, and AI is the best option under that metric. So in that sense, AI just benchmark-maxxed, if that makes sense. Man, I am not even kidding, but I sometimes wonder how economies of scale can work so functionally differently from common sense. Like, it doesn't make sense at this point.