▲ | roywiggins 3 days ago
In the context of call centers in particular, I can actually believe that a moderately inaccurate AI model could be better on average than harried humans writing a summary after the call. Could a human do better carefully working off a recording? Absolutely, but that's not the right comparison. The AI just has to be as good as a call center worker with 3-5 minutes working off their own memory of the call, not as good as the ground truth of the call. It's probably going to make weirder mistakes when it does make them, though.
▲ | sillyfluke 3 days ago | parent | next [-]
> in the context of call centers in particular I actually can believe that a moderately inaccurate AI model could be better on average than harried humans

You're free to believe that, of course, but you're assuming the very point that has to be proven. Not all fuck-ups are equal: omitting information is one thing, but writing the literal opposite of what was said is far higher on the fuck-up list. A human agent would have to achieve an impressive level of incompetence to keep repeating such a mistake, and would definitely have been pulled off the task after at most three strikes (assuming someone noticed). But firing a specific AI agent that repeats such mistakes is apparently out of the question for some reason. Feel free to explain why no amount of mistakes in AI summaries would outweigh the benefits in call centers.
▲ | trenchpilgrim 3 days ago | parent | prev [-]
Especially humans whose jobs are performance-graded on how quickly they can start talking to the next customer. | ||||||||