wat10000 5 hours ago
Better than actual human customer service agents who give an obviously scripted "I'm sorry about that" when you explain a problem. At least the computer isn't being forced to lie to me. We need a law that forces management to be regularly exposed to their own customer service.
datsci_est_2015 4 hours ago
I knew someone would respond with this. HN is rampant with this sort of contrarian defeatism, and I just responded the other day to a nearly identical comment on a different topic, so: No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically offering me sympathy, and when they are, I actually appreciate it. When they aren't, I realize that person is probably being mistreated by some corporate monstrosity or having a shit day, and I give them the benefit of the doubt.

> At least the computer isn't being forced to lie to me.

Isn't it, though?

> We need a law that forces management to be regularly exposed to their own customer service.

Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally, somehow, involved in the chain of communication. What a beautiful world, where we'll be burning absurd amounts of carbon in some sort of antisocial AI arms race to maximize shareholder profit.
| |||||||||||||||||||||||||||||||||||