kldg 2 days ago

As someone who was recently screwed over by LLM CSR, I'd respectfully disagree. Amazon replaced their offshore humans with LLMs recently. They put the "subscribe to Prime" button on the right-hand side of the screen when you go to checkout. It's a one-click subscription. I accidentally clicked it a few days ago.

I immediately hop on customer service chat to ask for a refund. I'm surprised to be talking to an LLM rather than a human, but go ahead, explain what happened, and state that I want the transaction for the subscription canceled. It offers to cancel the subscription at the end of the 30-day period. I decline, noting I want a refund for a subscription I didn't intend to take. It repeats that it can cancel the subscription at the end of the 30 days. I ask for a human. It repeats. I ask for a human again. It repeats. I disconnect.

Amazon knows what it's doing.

Nextgrid 2 days ago | parent [-]

This occurrence has nothing to do with AI? The reason the AI won't grant you the refund is that it hasn't been given the ability to do so. It would be no different with a human.

If Amazon wanted to give you the ability to get a refund for unused Prime benefits, it would allow the AI to do it, or even give you a button to do it yourself.

kldg 2 days ago | parent [-]

I can't really argue this except to say trust me, bro: I've been an Amazon customer for over 20 years, and there has never been an issue, no matter how unusual, that a CSR was not able to resolve within about five minutes. The LLM was completely on rails, with only specific whitelisted actions available to it. Even if a human couldn't take a specific action, they could explain why instead of repeating an irrelevant part of their script word for word.

The generous interpretation would be that they don't trust the LLM, so they cripple what it can do. I actually think they're intentionally crippling the LLM's access to accounts, though, to reduce their spend not on CSRs but on CSR actions such as refunds; the LLM becomes an excuse for the change, and they can hide behind what they'll call technical issues or teething pains.

AbstractH24 a day ago | parent [-]

I know exactly what you mean, but it's hard to tell whether it's something they're okay with because they're slowly becoming less user-centric and less willing to make refunds/exceptions, or whether it's the rigidity of the AI.

If it were the former, they would help you when you escalated it. So I think they're just becoming more greedy.