kokanee a day ago

I view the issue of inefficient communication as a problem that will wane as LLMs progress, and I think it's a bit idealistic about the efficiency of most human-to-human communication anyway. I feel strongly that we shouldn't be forced to interact with chatbots for a much simpler reason: it's rude. It's dismissive of the time and attention of the person on the other end; it demonstrates laziness, or an inability to succeed without cutting corners; and it is an affront to the value of human interaction (regardless of efficiency).

ericd a day ago | parent | next [-]

I feel like that ship sailed long ago, once phone trees and hour-long support wait times became normal. Not that it's an ideal state of affairs, but I'd much rather talk to a chatbot than wait an hour for a human who doesn't want to talk to anyone, as long as that chatbot is empowered to solve my problem.

anonymous_sorry a day ago | parent | next [-]

Have you ever had a chatbot solve your problem? I don't think this has ever happened to me.

As a reasonably technical user capable of using search, the only way this could really happen is if there was no web/app interface for something I wanted to do, but there was a chatbot/AI interface for it.

Perhaps companies will decide to go chatbot-first for these things, and perhaps customers will prefer that. But I doubt it to be honest - do people really want to use a fuzzy-logic CLI instead of a graphical interface? If not, why won't companies just get AI to implement the functionality in their other UIs?

ericd a day ago | parent | next [-]

Actually, I have; Amazon has an excellent one. After a few exchanges it initiated a refund for me, and it was much quicker than a normal customer service call.

Outside of customer service, I'm working on a website with a huge amount of complexity to it, which would require a much larger interface than most people would have patience for. So instead, those complex facets are exposed to an LLM as tools it can call as appropriate, based on a discussion with the user, and it can talk through the options with the user to help solve the UI discoverability problem.

I don't know yet if it's a good idea, but it does potentially solve one of the big issues with complex products: they can provide a simple interface to extreme complexity without overwhelming the user with an incredibly complex UI and demanding that they spend the time to learn it. Traditionally, designers have handled this by just dumbing down every consumer-facing product, and I'd love to see how users respond to this other setup.
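The pattern described above (complex features exposed to an LLM as callable tools) might look roughly like the sketch below. All names and parameters here are invented for illustration; a real app would hand the `TOOLS` schema to a model API and route the model's tool calls through a dispatcher like this one.

```python
# Hedged sketch of the tool-calling pattern, not any particular product's code.
# `search_listings` stands in for a complex feature hidden behind the chat UI.
import json

def search_listings(max_price: float, region: str) -> list:
    """Hypothetical complex feature the LLM can invoke on the user's behalf."""
    data = [{"name": "A", "price": 90, "region": "us"},
            {"name": "B", "price": 150, "region": "us"}]
    return [d for d in data if d["price"] <= max_price and d["region"] == region]

# Schema advertised to the model so it can discover and discuss the feature.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_listings",
        "description": "Search listings by price ceiling and region.",
        "parameters": {
            "type": "object",
            "properties": {
                "max_price": {"type": "number"},
                "region": {"type": "string"},
            },
            "required": ["max_price", "region"],
        },
    },
}]

REGISTRY = {"search_listings": search_listings}

def dispatch(tool_call: dict) -> str:
    """Run the tool the model selected and return a JSON string for the chat."""
    fn = REGISTRY[tool_call["name"]]
    result = fn(**json.loads(tool_call["arguments"]))
    return json.dumps(result)

# Simulated model output choosing a tool:
print(dispatch({"name": "search_listings",
                "arguments": '{"max_price": 100, "region": "us"}'}))
```

The interesting design property is that the UI surface stays tiny (a chat box) while the capability surface can grow arbitrarily, with the model mediating discovery.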

ori_b a day ago | parent | next [-]

I'm happy that LLMs are encouraging people to add discoverable APIs to their products. Do you think you can make the endpoints public, so they can be used for automation without the LLM in the way?

If you need an LLM spin to convince management, maybe you can say something about "bring your own agent" and "openclaw", or something else along those lines?

ericd a day ago | parent [-]

Yep, I'm developing the direct agent-access API in parallel as a first-class option. It seems like the human UI isn't going to be so necessary going forward, though a little curation/thought on how to use it is still helpful, rather than an agent having to come up with all the ideas itself. I've already spun off one of the datasets I've pulled as an independent x402 API, and I plan to do more of those.

ori_b a day ago | parent [-]

What I mean is that I want to be able to build my own UIs and CLIs against open, published APIs. I don't care about the agent; it's an annoyance. The main use of it is convincing people who want to keep the API proprietary that they should instead open it.

anonymous_sorry a day ago | parent | prev [-]

I did think about this use-case as I was typing my first message.

I can see it working for complex products, for functionality I only want to use once in a blue moon. If it's something I'm doing regularly, I'd rather the LLM just tell me which submenu to find it in, or what command to type.

ericd a day ago | parent [-]

Yeah, true. It might be a good idea to have the full UI and then just have the agent slowly "drive" it for the user, so they can follow along and learn, for when they want to move faster than dealing with a chatbot. Though I think speech-to-text improves chatbot speed significantly.

LorenPechtel a day ago | parent | prev [-]

Amazon's robot did replace the package that vanished. I don't believe it ever understood that I had a delivery photograph showing two packages but found only one on my porch. But I doubt a human would have cared, either: cheap item, nobody's going to worry about how it happened. (Although I would like to know; wind is remotely possible, but the front porch has an eddy that brings stuff in, it doesn't take stuff away.)

nharada a day ago | parent | prev [-]

Yeah, as long as the chatbot is empowered to fix a bunch of basic problems, I'm okay with them as the first line of support. The way support is set up nowadays, humans are basically forced to be robots anyway: given a set of canned responses for each scenario and almost no latitude of their own. At least the robot responds instantly.

ericd a day ago | parent [-]

Yep, exactly, the problem comes when chatbots are used to shield all the people who can do stuff from interacting with customers.

SteveGoob a day ago | parent | prev | next [-]

> a bit idealistic about the efficiency of most human-to-human communication.

I don't know if I would call it idealism. I feel like what we're discovering is that while the efficiency of communication is important, the efficacy of communication is more important. And chatbots are far less reliable at communicating the important/relevant information correctly. It doesn't really matter how easy it is to send an email if the email simply says the wrong thing.

To your point though, it's just rude. I've already seen a few cases where people have been chastised for checking out of a conversation and effectively letting their chatbot engage for them. Those conversations revolved around respect and good faith, not efficiency (or even efficacy, for that matter).

nickff a day ago | parent | prev [-]

The problem is that people are very rude to customer service representatives, so companies spend money training CSRs, who often quit after a short period of being abused by customers. Automated reception systems prevent people from reaching representatives for the same reason.

autoexec a day ago | parent [-]

CSRs are abused by call center managers far more often than by the people on the other end of the phone line. The endless push for "better" metrics, the terrible pay, the dehumanizing scripts, bad (or zero) training, optimizing to make every employee interchangeable and expendable, unforgiving attendance policies, treating workers like children, etc. Call centers are brutal environments, and the reason churn is often so high has very little to do with abuse from the people calling for help. In fact, the last two call centers I had any insight into (to their credit) had strict policies about not taking abuse from customers and would flag abusive customers' accounts.

hellotomyrars a day ago | parent [-]

It can be both. It depends a lot on what kind of product is being supported. In tech support you usually don't get abuse hurled at you by callers, but in financial/medical it gets a lot dicier.

That said, I 100% left every call center job I had when I couldn’t put up with the bullshit middle manager crap anymore.

Nothing like having a "team leader" who knows literally nothing about the product and has to come up with the most nitpicky garbage because they're required to have criticism on call reviews. Meanwhile, some other asshole starts yelling at him to yell at you for not being on the phones enough, when the reason I'm not on the phones is that everyone on the team turns to me with questions because, unlike our illustrious leader, I know what I'm doing.