Don't make me talk to your chatbot(raymyers.org)
218 points by pkilgore 4 hours ago | 177 comments
com2kid 4 hours ago | parent | next [-]

People demand free support.

When I worked at Microsoft, it cost over $20 to have a human customer support agent pick up the phone when someone called in for help. That was greater than our product margin. Every time someone called for help, we basically lost the entire profit on that sale, and then some.

Most common support calls were for things that were explained in the manual, the out-of-box experience, tutorial documents, FAQ pages, and so on and so forth.

Did we have actual support issues that needed fixing? Yes, of course. And the insanely high cost of customer support drove us to improve our first-use experience. But holy cow, people don't realize how expensive support calls are.

Edit: To explain some of the costs - This was back when people worked in physical call centers, so first off we were paying for physical office space. Next up training, each CSR had to be trained on our product. This took time and we had to pay for that training time. We also had to write support material, and update that support material for each new version that came out. All of this gets amortized into the cost of support. Because workers tend not to stay long, you pay for a lot of training.

Add in all the other costs associated with running a call center and the cost per call, even for off shore call centers, is not cheap.

In a reasonable world we'd just raise the price of the product by $x based on what % of people we expect to call in for support (ignore for a minute that estimating that number is hard), but the world isn't reasonable. Downwards price pressure comes from all sides, primarily VC backed competitors who are OK burning $$ to gain market share, and competitors at other FAANGs that are OK burning money to gain market share.

The result is that everyone is going to try to reduce support costs, because holy cow, per-user margins are low nowadays for huge swaths of product categories (Apple's iPhone being a notable exception...)
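To make that margin math concrete, here's a back-of-envelope sketch in Python; every number below is hypothetical, chosen only to illustrate the shape of the problem, not taken from the comment:

```python
# Hypothetical unit economics: how support call rates eat thin margins.
unit_margin = 15.00    # profit per unit sold, in dollars (invented)
cost_per_call = 20.00  # fully loaded cost of one support call
call_rate = 0.10       # fraction of buyers expected to call in (invented)

# Expected support cost, amortized across every unit sold
expected_support_cost = call_rate * cost_per_call

# What's left after support, and the price bump that would cover it
net_margin = unit_margin - expected_support_cost
breakeven_price_bump = expected_support_cost

print(f"expected support cost per unit: ${expected_support_cost:.2f}")
print(f"net margin per unit: ${net_margin:.2f}")
```

With these invented numbers, a 10% call rate shaves $2 off every unit sold; raising the price by that $2 is exactly the "raise the price by $x" adjustment the comment describes as infeasible under downward price pressure.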

Sohcahtoa82 2 hours ago | parent | next [-]

> Most common support calls where for things that were explained in the manual, the out of box experience, tutorial documents, FAQ pages, and so on and so forth.

My brother used to work at tech support for XBox Live.

He said that 80% of his calls were for password resets, something users can easily self-service. There's literally an option on the login form for "Forgot Password", and people would rather spend time calling up support, waiting on hold, and verifying their identity to a support agent than click a button.

And it's not like the password reset flow was any easier going through support. He'd just trigger a password reset e-mail to be sent, exactly like the user hitting Forgot Password.

And this is after the phone tree tells them "If you forgot your password, click the Forgot Password link".

I always think about this when people demand to be able to talk to a human. The overwhelming majority of callers to tech support don't need a human. Giving everybody the ability to speak to a human just isn't feasible.

I have an uncle that works tech support for XFinity. Half his calls are resolved by just power cycling the modem/router. People shouldn't need a human to tell them to do that.

redox99 2 hours ago | parent | next [-]

Power cycling is not a solution. It's a crappy workaround, and you still had downtime because of it. The device should never get stuck in the first place, and the solution for that is fixing whatever bug is in the firmware.

If they want to reduce support calls, then have more reliable gear.

chimeracoder 2 hours ago | parent [-]

> Power cycling is not a solution. It's a crappy workaround, and you still had downtime because of it. The device should never get stuck in the first place, and the solution for that is fixing whatever bug is in the firmware.

I'm sympathetic to the argument that companies should make support calls less necessary by providing better products and services, but "just write bug-free software" is not a solution.

wtallis 2 hours ago | parent [-]

Giving the device enough RAM to survive memory leaks during heavy usage would also be a valid option, as is automatic rebooting to get the device back into a clean state before the user experiences a persistent loss of connectivity. There are a wealth of available workarounds when you control everything about the device's hardware and software and almost everything about the network environments it'll be operating in. Fixing all the tricky, subtle software bugs is not necessary.
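The automatic-reboot workaround described above can be sketched as a tiny watchdog. This is a hypothetical illustration, not any vendor's actual firmware; the threshold, idle hours, and the /proc/meminfo parsing are all assumptions:

```python
# Hypothetical watchdog: reboot the router overnight, before a slow
# memory leak turns into a daytime outage the customer calls about.
LOW_MEM_KB = 16 * 1024     # assumed threshold: act below 16 MB free
IDLE_HOURS = range(3, 5)   # assumed quiet window: 03:00-05:00

def free_mem_kb() -> int:
    """Read available memory on Linux-style firmware; fail open if unknown."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1])  # value is reported in kB
    except OSError:
        pass
    return LOW_MEM_KB + 1  # can't tell: report "enough" so we never reboot blindly

def should_reboot(mem_kb: int, hour: int) -> bool:
    """Reboot proactively only when memory is low AND users are likely asleep."""
    return mem_kb < LOW_MEM_KB and hour in IDLE_HOURS
```

A real device would call `should_reboot` from a periodic timer and trigger the platform's reboot mechanism; the point is that recovering to a clean state before the user notices is cheap compared to chasing every subtle leak.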

ambicapter 32 minutes ago | parent | next [-]

A memory leak will, by definition, eventually consume any amount of RAM, so adding more RAM is not a solution either.

esyir 2 hours ago | parent | prev | next [-]

For a community full of engineers, I'm always surprised that people take absolutist views on minor technical decisions, rather than thinking about the tradeoffs that led there.

andrew_lettuce an hour ago | parent [-]

The obvious trade-off here is engineering effort vs. support cost, and when the tech support solution is "have you tried turning it off, then on again?", we know which path was chosen.

DonHopkins an hour ago | parent | prev [-]

You can't just throw RAM at embedded devices that you make millions of and have extremely thin margins on. Have you bothered to look at the price of RAM today? At high numbers and low margins you can barely afford to throw capacitors at them, let alone precious rare expensive RAM.

Lammy 20 minutes ago | parent | next [-]

No, XFinity are the ones who decided their routers “““need””” to have unwanted RAM-hungry extra functionality beyond just serving their residential customers' needs. Their routers participate in an entire access-sharing system so they can greedily double-dip by reselling access to your own connection that you already pay them for:

- https://www.xfinity.com/learn/internet-service/wifi

- https://www.xfinity.com/support/articles/xfinity-wifi-hotspo...

wtallis 34 minutes ago | parent | prev [-]

We're talking about devices where the retail price is approximately one month of revenue from one customer, and that's if there isn't an extra fee specifically for the equipment rental. Yes, consumer electronics tend to have very thin margins, but residential ISPs are playing a very different game.

andrew_lettuce an hour ago | parent | prev | next [-]

The thing is, YOU don't get to decide this. Maybe the password-reset flow is significantly more complex for some people who don't have an actual human to walk them through it; maybe Xfinity routers shouldn't need to be power cycled to fix problems. Maybe corporations should make their products better to avoid so many support calls, or price support into the purchase price. At least let's be honest that the entire exercise is an attempt to externalize costs onto their customers.

godelski an hour ago | parent | next [-]

I've called for password resets before. Sometimes the email doesn't come in, or it can take like an hour (fuck "magic links" and email OTPs...). I've even had support reset it, and then a day later received the half dozen reset emails from the requests I'd made.

Just because something appears simple and obvious doesn't mean it is. There are a lot of ways for those systems to fail. It might be the user's connection, or it might be the server the user is connecting to, while customer support is sending through a different one.

Big lesson I've learned is that if a lot of people are struggling with something that seems obvious then it probably isn't.

Mikhail_Edoshin 22 minutes ago | parent [-]

I just remembered my password-reset battle with an online store. Yes, the email or the SMS took too long to arrive, and when the code came, it was already expired. And I knew the password, by the way; it was just a "new browser" and they wanted a second authentication. Marvelous.

ocdtrekkie an hour ago | parent | prev [-]

The problem is consumers are the ones who decided this. I used to only buy web hosting from companies with 24/7 US based phone tech support. Today this basically doesn't exist, because cheaper options not offering it ate their lunch.

TZubiri 14 minutes ago | parent | prev | next [-]

This mentality is how you get

"Hi, thank you for your message, please take a look at our following FAQ guides:

- I forgot my password

Was this answer useful to you, or would you like more links to our FAQ? Before we give you a link to what used to be a talk-to-a-human line, but which has been replaced by another chatbot, in a sort of Matryoshka"

wtallis 2 hours ago | parent | prev | next [-]

> Half his calls are resolved by just power cycling the modem/router. People shouldn't need a human to tell them to do that.

Comcast deserves every penny of customer service expenses they're incurring if their own purpose-built modem/routers are so flaky they're responsible for half the problems people experience with their service. Customers should not be expected to endure shitty products without even seeking help from the service provider that owes them better.

By contrast, I've seen Google Fiber proactively issue a partial refund in response to a service outage that was so short I didn't even notice it.

pydry 2 hours ago | parent | prev [-]

I had a friend who worked for a company that built AI call centres. I naively thought that customers would use it for "password reset" type workflows, with an escape hatch to talk to a human if the AI couldn't handle what they needed.

Surprisingly few of them wanted that. If the AI couldn't handle their issue, they mostly wanted customers to just fuck off.

bluefirebrand an hour ago | parent [-]

> If the AI couldn't handle their issue, they mostly wanted customers to just fuck off.

Witness the future of business and society

autoexec 3 hours ago | parent | prev | next [-]

Microsoft is a company that has very little right to complain about support costs. They'd save themselves a fortune if they stopped releasing bad software and updates that require support in the first place. Nobody wants to call Microsoft for support. They do it because they've been forced to, usually by Microsoft. This kind of support can hardly be called "free", because even when Microsoft isn't charging customers to speak with the person on the other end of the line, the customer has already paid in time and suffering (and sometimes lost data).

bsder 3 hours ago | parent [-]

> They'd save themselves a fortune if they stopped releasing bad software

I doubt it. I suspect the number one tech support call is "I forgot my password" and everything else is a long way below that.

I'll slag on Microslop all day, but users are dumber than dumb.

dsjoerg 2 hours ago | parent | next [-]

Users are "dumb", and it's a dumb _system_ and dumb business that doesn't plan for that in terms of FTUE, business model, support model, and product flows.

We product makers get to think about our one little product all day, and it's our job to make our product work for the "dumb" users. It's not their job to adapt to us.

Blackthorn 2 hours ago | parent | prev | next [-]

Very easy solution to users forgetting their passwords. It's to not need a password for your software. Something that once upon a time, Microsoft did not require with their operating systems.

autoexec an hour ago | parent | next [-]

They also invented that whole "you changed your video card so now you have to call support to reactivate windows" process.

andrew_lettuce an hour ago | parent | prev [-]

Exactly. If half their support calls are PW resets, and that costs them a fortune, solve the problem; don't slap AI lipstick on the chat pig.

zmgsabst an hour ago | parent | prev [-]

Okay — but did they try to address that, e.g., via easy-to-remember passphrases? Or were they hacks pushing that complexity nonsense that XKCD called out as midwit math?

https://xkcd.com/936/

Passwords are the ultimate example of technologists turning in substandard bullshit and then blaming users for “holding it wrong”. If that’s Microsoft’s largest problem, they’ve deserved every call.

boplicity 2 hours ago | parent | prev | next [-]

If someone pays for a product, and then gets support for it, that's not FREE support. That's paid support. It's not their fault if the company they're a customer of loses money when they support those they've sold a product to.

jeffparsons 2 hours ago | parent | next [-]

Amazon, for example, charges us for cloud resources and then charges us again (handsomely) for the privilege of submitting bug reports to them. And then sometimes, even with a clear, deterministic repro for a bug with no plausible workaround (besides "stop using the feature"), where the fix is probably as simple as "pull a fix from upstream open source repo" or "sic Claude on it for 10 minutes", the bug remains open for literally years.

This is very different from "I didn't read the instructions on the screen and now I'm calling support". Both scenarios exist. I have some sympathy for businesses facing the latter, and much less for businesses facing the former.

RIMR an hour ago | parent | prev | next [-]

This is an oversimplification.

When people talk about wanting "free support", they mean that they want support included with the price of the product (no extra charges), but you're still going to get what you paid for, and expecting too much might not get you what you want.

If you pay $20/month for a software subscription for your small business, you're going to get a different kind of support than the enterprise customer paying $100k/month. The small business customer will get support via email with multi-day SLAs, and the enterprise customer will get priority support via screen-share with same-day SLAs.

And there are free-tier services that offer limited support, where users that don't pay anything expect to be treated like they're full-fledged customers.

There's a limited scenario here where a paying customer has so many problems with the product that the cost of support exceeds the revenue the customer provides. When one can confidently say this is not the result of an overly needy customer, you spend the money figuring out the problem and making sure the solution is available to help any customer that follows. The cost of support may exceed revenue for one customer, but once the solution is in the knowledge base, you don't have to repeat those costs for the next customer.

But there are also small customers who fumble the product and put too much strain on support until a decision is made not to prioritize them over other customers. I have seen small customers with unreasonable expectations get "fired" simply because their revenue wasn't worth it.

If a company routinely sees support costs exceed revenue, that's usually the company's fault for having a faulty and/or hard-to-support product. If a single customer's support costs exceed the revenue they provide, that's usually the customer's fault for leaning too heavily on support to be their personal I.T. provider.

shimman 2 hours ago | parent | prev [-]

Corporations have really hammered in the propaganda, haven't they? The idea that a trillion-dollar corporation can't have good support (when really they're just greedy and don't want to hire workers) needs to be restated every moment.

dmd 2 hours ago | parent | prev | next [-]

You wrote all of that in response to the title, without reading even one paragraph of the article? Wild. The article is not about support chatbots.

wtallis an hour ago | parent | next [-]

Support chatbots are pretty much the only scenario where "don't make me talk to your chatbot" is a problem in practice. If someone tries to use a chatbot to engage with me in a personal or professional discussion, I don't lose anything of value by simply ignoring them permanently. It's only when the party using the chatbot has something I want that I have any incentive to even consider playing along.

radiorental 2 hours ago | parent | prev [-]

You might be talking to a chatbot!!

wvenable 3 hours ago | parent | prev | next [-]

My last experience with a support chatbot was actually pretty decent. It collected all the information, asked followup details, and then fired that whole thing off to a human to deal with. It was perfectly fine.

muyuu 3 hours ago | parent | next [-]

my only experiences with chatbots so far have been as instruments for companies to avoid their contractual obligations and to simply not offer the options I would have asked a person for directly

obviously that's not a problem with the technology itself; it was like that with more primitive answering machines as well, which were often there only to answer the obvious things, or to stonewall people who had real problems with the product or service, in the hope they'd just give up and take the loss

tempest_ 3 hours ago | parent [-]

I mean that is also the job of existing call handlers.

"We are experiencing a greater than usual call volume, please wait while an agent becomes available", only to be randomly disconnected, has been a thing for most of my life.

Everyone seems to be hyping OpenClaw at the moment; soon it's just going to be LLMs talking to LLMs... I wonder if they will develop a shorthand and start talking in Wingdings.

crabmusket 3 hours ago | parent | prev | next [-]

"Smart answering machine" seems like a very apt use case for LLMs, provided the rest of the system works - that a human actually receives and acts on the feedback.

lurk2 3 hours ago | parent | next [-]

This is the thing that drives me crazy. Most of these phone calls should just be emails; I can usually stand to wait a week or two for the company to get back to me. General support funnels like support@example.com have been dead for most consumer-facing technologies for close to a decade at this point. I’m not installing an app for every company I’m forced to interact with when there are already existing, universal technologies available that they could implement if they just priced their products appropriately.

kanzure 2 hours ago | parent [-]

It would be nice if more businesses embraced email instead of requiring phone calls for basic tasks. Imagine how much more productive we could be if we could just send off a quick email with the information and questions.

Instead, what we're likely going to get are "voice agents" calling each other when we could have just used email instead...

shimman an hour ago | parent [-]

Businesses likely don't know a better way because the person selling them software doesn't want them to use an open and federated technology. They want the business to use Slack, with a SalesForce CRM, and then add a JIRA workflow to top it off.

Most of the time it's simply a matter of not being aware of what's out there, or of nobody having shown them a different workflow.

LorenPechtel 3 hours ago | parent | prev [-]

Yeah. I recently had to deal with Amazon's robot. Definitely bird-brained but close enough that the right objective was accomplished even though I don't think it ever understood what happened (but woe to the non-native speaker!) The problem is not chatbot customer support, the problem is bird-brained managers that think a system that solves 99% of issues doesn't need a fallback for that 1%.

lurk2 3 hours ago | parent | prev | next [-]

Whenever I interact with them, I get asked to describe my issue; then, regardless of what I write, I get asked a battery of questions that you'd expect are being fed into a form; and then, on the off chance I get connected to a human operator (which was my goal to begin with), they end up asking me for all the same information again.

esafak 3 hours ago | parent | prev [-]

Do you remember what product they used?

Quothling 3 hours ago | parent | prev | next [-]

Isn't part of why Apple's iPhone can be so expensive that it's very easy to get actual human support when something goes wrong? You probably didn't make that mistake at Microsoft, but I've seen people look at the localized spreadsheet and miss the long-term, company-wide spreadsheet completely. Often because the sales and support departments are so far from each other that they're basically two different companies working in different directions. Maybe Microsoft customer support is a bad place to measure these things because of its size, but around here quite a few banks have tried outsourcing their phone support to everything available and have come back, because it cost them customers. Even customers who never phoned them.

That being said, your example of customers calling for support on things they should be capable of figuring out themselves is probably where AI is going to shine as first-line support. Once (if?) AI voice chat is good enough to replace chatbots, we may not even realize we're talking with an AI unless it tells us.

protocolture 2 hours ago | parent | next [-]

>Isn't part of why Apple's iPhone can be so expensive is because it's very easy to get actual human support for it when something goes wrong?

Yeah, Apple has best-in-class support. They tried monetising it through AppleCare, but that's largely broken down.

I can't stand Apple for a lot of reasons, but their phone support, and everything behind it like training, is about as good as you can possibly hope to achieve.

leptons 2 hours ago | parent | prev [-]

>customers calling for support on things they should be capable of figuring out themselves is probably where AI is going to shine as first line support.

It certainly won't be cheap to run real-time AI voice chat, or any real-time AI chat. The AI costs that you currently see are heavily subsidized, just like OP's example of "VC backed competitors who are OK burning $$ to gain market share"; it's the same thing. These AI companies are far from profitable, burning billions to insert themselves into customer support pipelines and everywhere else they can, and then the other shoe will drop. Uber and Lyft are far more expensive today than when they started, and the price to run "AI" will also inflate when these companies have to pay off the billions they've spent but didn't earn. I doubt it will end up costing much less than human support, if less at all, and with worse results.

com2kid 2 hours ago | parent [-]

AI voice chat can be done for cheap.

Lots of it is RAG and knowledge base lookups; you don't need large fancy models. Indeed, you want fast responses, so low-parameter models are better.

TTS and ASR models are tiny nowadays, like a handful-of-GB tiny.

Last time I priced this all out, the VoIP fees cost more than self-hosting all the models.
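A minimal sketch of the knowledge-base-lookup core described above. The FAQ entries are invented, and a real system would use embeddings plus small ASR/TTS models rather than this word-overlap trick; it's only here to show that the lookup step itself is cheap:

```python
import re

# Tiny invented knowledge base standing in for a real support FAQ index.
FAQ = {
    "How do I reset my password?":
        "Click 'Forgot Password' on the login form to receive a reset email.",
    "My internet is down, what should I do?":
        "Power cycle the modem: unplug it, wait 30 seconds, plug it back in.",
    "How do I update my billing information?":
        "Open Account Settings and choose Billing to update your card.",
}

def words(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str) -> str:
    """Return the answer whose question shares the most words with the query."""
    best = max(FAQ, key=lambda q: len(words(q) & words(query)))
    return FAQ[best]
```

The retrieval here costs microseconds; the expensive parts of a voice pipeline are the ASR/TTS and generation models, which, as the comment notes, can now be quite small.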

maest 24 minutes ago | parent | prev | next [-]

The support cost is why I email support to unsubscribe me from newsletters I haven't signed up for, instead of clicking the unsubscribe link. I then mark the email as spam anyway in gmail.

It's petty, but I haven't found a better disincentive.

foresto 2 hours ago | parent | prev | next [-]

If I'm contacting a company for help from a human, it's because I haven't found the solution in the manual, the out of box experience, tutorial documents, FAQ pages, and so on and so forth. More often than not, I'm calling to do the company the favor of reporting an unaddressed failure mode in their service, often with technical details that would help them quickly identify and fix the cause (and reduce their support call volume)... if only that information could be delivered to the right people.

I don't have infinite time or patience, though. When blocked by a moat of hold times, chat bots, first level support scripts, etc, I will give up.

Yes, calls like mine are in the minority. But they are especially valuable, and I think well worth their share of the costs you describe.

Maybe companies should be identifying customers with above average tech skills, and routing them to better support channels next time they call.

Maybe we need shibboleet.

I don't know what the best solution is, but there must be a better way to do triage than funneling everyone into a flowchart of counterproductive misery, as is widespread today.

dylan604 an hour ago | parent [-]

> If I'm contacting a company for help from a human, it's because I haven't found the solution in the manual, the out of box experience, tutorial documents, FAQ pages, and so on and so forth.

You'd be amazed at how not normal that is though. The number of people willing to throw up their hands to ask for help rather than researching anything is pretty damn high.

godelski an hour ago | parent | prev | next [-]

  > People demand free support.
  > When I worked at Microsoft
Last I checked windows was a paid product...

Last I checked the common nicknames were "Microslop" and "Winblows"

Maybe if Microslop spent more time improving their product they'd spend less money and time on support.

Sorry, I have no empathy for a multi-trillion-dollar company that's shoving things down our throats. I'm sorry you had a frustrating experience as an employee, but my feelings about a megacorp are very different. It's like watching someone wipe away their tears with hundred-dollar bills.

lifis 2 hours ago | parent | prev | next [-]

Why not charge for support?

And if it turns out to be your mistake (faulty product or missing documentation) as opposed to something the user could have reasonably solved by themselves, refund the charge and possibly provide compensation for the inconvenience.

com2kid 2 hours ago | parent [-]

Companies used to charge for support.

But if one company stops doing it, eventually everyone has to stop doing it.

Then the race to the bottom begins...

drusepth 2 hours ago | parent | prev | next [-]

> When I worked at Microsoft, it cost over $20 to have a human customer support agent pick up the phone when someone called in for help. That was greater than our product margin. Every time someone called for help, we basically lost the entire profit on that sale, and then some.

This doesn't seem like a bad thing when it comes to aligning incentives (assuming customers actually want a product they don't need help to use).

protocolture 2 hours ago | parent | prev | next [-]

>Did we have actual support issues that needed fixing, yes of course. And the insanely high cost of customer support drove us to improve our first use experience. But holy cow people don't realize how expensive support calls are.

Same in the ISP space. ISPs with low margins often lose multiple months of revenue on a single support call.

pants2 3 hours ago | parent | prev | next [-]

Curious, why was it $20?

I would think that's close to an hourly rate for first level support and calls are mostly resolved in ~10 mins?

IanCal 3 hours ago | parent | next [-]

They need a place to be, they need to get hired, trained, managed and all the associated general costs of employment (hr, payroll, etc). They need equipment, there's monitoring, evaluations etc.

Then you also have to pay them regardless of whether someone calls.

com2kid 3 hours ago | parent | prev | next [-]

I edited my comment above to explain: that $20 is an amortized cost representing everything that goes into picking up that phone call.

BalinKing 2 hours ago | parent [-]

In that case, wouldn't you be happy to get more calls, so that the up-front "training" cost is worth it? Naïvely I'd expect that every additional call would _decrease_ the amortized price per call.

mrandish 3 hours ago | parent | prev [-]

While I agree with TFA's point that forcing a chatbot isn't a substitute for just having the info available, organized and searchable, the answer to your specific question is that the fully burdened cost of a trained support center human includes a lot more than their gross hourly wage. There's recruiting, interviewing, hiring, training plus space, desk, computer, phone, IT, HR, health care, vacation, sick days, insurance, employer's share of employment taxes.

A rough rule of thumb is the full burdened cost of an hourly office knowledge worker is two to three times the gross hourly wage.
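As a worked example of that rule of thumb, here's how a modest wage can turn into roughly $20 per call. Every input is invented for illustration; only the two-to-three-times burden multiplier comes from the comment:

```python
# Hypothetical inputs; only the 2-3x burden multiplier is from the comment.
wage = 20.00             # gross hourly wage, dollars (invented)
burden_multiplier = 2.5  # recruiting, space, benefits, management, taxes...
handle_time_hr = 12 / 60 # assumed average 12-minute call
utilization = 0.5        # assume half of paid time is actually on calls

burdened_hourly = wage * burden_multiplier
cost_per_call = burdened_hourly * handle_time_hr / utilization
print(f"cost per call: ${cost_per_call:.2f}")  # $20.00 with these inputs
```

Utilization matters as much as wage: agents are paid whether or not anyone calls, so idle time, training, and wrap-up work all get folded into the per-call figure.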

dylan604 an hour ago | parent | prev | next [-]

One company whose software I used had an annual support contract. If you did not renew that contract, every time you called support they would ask for a credit card number. If you found an actual bug, the card would not be charged. If it was a user error, the card would get charged.

This seemed pretty reasonable to me.

alexpotato 2 hours ago | parent | prev | next [-]

> Because workers tend not to stay long, you pay for a lot of training.

I often wonder: if you paid $60K for one top-quality support person instead of $30K for two average people (or even $20K for three bad people), then the following might happen:

- you would get better support calls

- happier customers

- longer tenured employees

- all of the above would lead to a reputation as a company with AMAZING support

kazinator 3 hours ago | parent | prev | next [-]

People prefer a pricing model in which support appears free. Free support (that is good) creates the sense that the company stands behind the product and service, and leads to good reviews, so it is a win/win.

jibal 2 hours ago | parent | prev | next [-]

Why are you talking about support? The article has nothing to do with that.

chillfox an hour ago | parent [-]

I can't believe how far down I had to scroll before someone called the OP out for not having actually read the article and just decided to make up their own topic.

motbus3 an hour ago | parent | prev | next [-]

It is just imagination to not consider the legal trouble of not providing proper support, or even worse, improper support.

dylan604 an hour ago | parent [-]

Where is support legally mandated?

TZubiri 17 minutes ago | parent | prev | next [-]

>People demand free support.

Ok, SaaS it is then

>People demand to pay once and that's it.

Ok, ads, you got it.

>People demand no ads.

Ok, chatbot support then

>...

wcfrobert an hour ago | parent | prev | next [-]

Software scales. Customer support doesn't. SaaS companies do not want to deal with customer support at all. It's only gotten worse with AI agents.

It's incredibly frustrating to spend a good 10 minutes navigating a website's complex web of menus to find a phone number (I think they deliberately try to hide it...), then spend another 5 minutes listening to bots telling me to press 1 for English, only to fall into the wrong menu, where the bot repeats some useless information I already know, says goodbye, and hangs up.

Having a bot say to me "we care about your concerns, and we value your business" is absurd and oxymoronic.

Compare this to, say, Chase, Amex, or Geico. I call, someone answers within 2 minutes and addresses all my problems/concerns in fluent English. I'd happily pay a premium for that.

jongjong 3 hours ago | parent | prev [-]

The root problem is that these big companies are not capable of serving the customers they have, but because they have a monopoly, the customers are forced to use them.

All alternatives which are capable of actually serving the customer are systematically driven out of business.

Had they built a better, more intuitive product, they would get fewer support calls and wouldn't be struggling with costs.

com2kid 3 hours ago | parent [-]

> Had they built a better, more intuitive product, they would get fewer support calls and wouldn't be struggling with costs.

As I mentioned, due to high support costs we worked to improve the UX and we ended up dropping our support costs dramatically.

Doesn't change the fact that everyone who did call cost us more than our profit on the sale.

Customer support is expensive.

Microsoft used to charge for customer support back in the day (the '90s). The way it worked was that if it was your fault, you paid; if it was a product bug, there was no cost for support. While not a perfect system, it at least aligned everyone's incentives in the right direction. (The huge glaring flaw being that it was MS that decided whether they were going to charge you for the support call or not...)

hidelooktropic 3 hours ago | parent | prev | next [-]

It matters less to me whether the helper is an AI or a human than what kind of help I'm getting.

The bigger problem to me is "help" is always framed as my needing to be educated, not a problem with the service. This is especially prevalent for technical customers who are legitimately trying to draw attention to a bug in the platform only to get how-to help articles pasted back to them.

xmprt 2 hours ago | parent | next [-]

> "help" is always framed as my needing to be educated

For many users, this is often the case, and front-line AI support like this can handle it pretty effectively while getting your case to live support faster. Would you rather sit behind 4 people in the queue trying to figure out why their device doesn't work without batteries when it's not plugged in, or have them deal with AI while you get your real issue sorted out quickly after a handful of basic prompts?

hidelooktropic 2 hours ago | parent | next [-]

I agree. That's why I would prefer AI if it does the job better, and if it can be further trained to understand when to escalate in the case of a more technical user, which I have found humans rarely do.

devilbunny 2 hours ago | parent | prev [-]

I wouldn't mind, if it ended up getting me a human at the end of the process.

It's not quite at the level of the "shibboleet" XKCD, but I did once manage to get a much higher support tier at Comcast who was able to verify that 1) I had a problem that was their fault and 2) fix it. Even that guy was halfway on a script. Y'know, after I've read you a ping timeout three times from the Windows command line, I probably shouldn't have to read it verbatim to you again. It hasn't changed.

appreciatorBus 2 hours ago | parent | prev | next [-]

The article wasn't about this at all. It wasn't about customers, about AI customer service, or about seeking help.

LorenPechtel 3 hours ago | parent | prev | next [-]

Or technical customers with a case that was not handled properly. I'm thinking of long, long ago: my ISP changed the Usenet server and didn't document it--not on their website, not with their tech support. It shouldn't have taken an hour and a third-level support person to get "we changed providers, here's the new address." The first two levels simply couldn't comprehend that it was not a third-party system that I was having trouble with.

dmd 2 hours ago | parent | prev [-]

Yet another person who responded to the title without clicking through to the article, which has nothing to do with support chatbots.

hidelooktropic 2 hours ago | parent [-]

I understand the article is actually not about support chatbots specifically, but since the conversation here has turned to them, it's not out of place to join in that discussion. Attempting to shame people by accusing them of not reading is hardly constructive. This isn't Reddit.

SaberTail 3 hours ago | parent | prev | next [-]

The "figure out what you want to say" is key. I've started to think of LLMs, at least in a business setting, as misunderstanding amplifiers.

How many times at work have you been talking to someone else where they're using common words as jargon? Maybe it's something like "the online system" or "the platform". And it's perfectly clear to them what they mean, but everyone else in the company either doesn't know what that actually is, or they have a distorted idea based on the conventional definitions of the words. Even without LLMs in the mix, this can lead to people coming out of meetings with completely different understandings of what's going on.

My experience is few people are actually providing the relevant context to the LLM to explain what they mean in situations like this. Or they don't have the actual knowledge and are using the LLM in the hopes it'll fill in for their ignorance. The LLMs are RLHFed to sound confident, so they won't convey that they don't know what a piece of jargon means. Instead they'll use a combination of the common meaning and the rest of the context to invent something. When this gets copy/pasted and sent around, it causes everyone who isn't familiar to get the wrong idea. Hence "misunderstanding amplifier".

To the point of the article, this is soluble if people take the time to actually figure out what they are trying to convey. But if they did that, they wouldn't need the LLM in the first place.

LorenPechtel 2 hours ago | parent [-]

And that people and the systems actually know the relevant terms.

I recently was dealing with the Amazon robot--after correctly identifying the items in the order, it then proceeded to use short terms which were wrong, but which make sense as what a classifier might have spit out. Instead of understanding being a shared thing, it falls entirely on the user. For a sufficiently adept user, this is fine. But a lot of users aren't sufficiently adept.

appreciatorBus 2 hours ago | parent | prev | next [-]

This submission might be an HN record for highest % of commenters who skipped reading the article. I'm sure it's always high but so far there are 125 comments and maybe 3 or 4 referencing what was in the actual article.

edot an hour ago | parent [-]

Yes, from the title and first few comments I thought it was about getting customer support and having to talk to a chatbot first. For anyone else who didn't read, this article is about how mindlessly copy-pasting LLM output is comparable to "making me talk to your chatbot".

DonHopkins an hour ago | parent [-]

"Don't make me read your blog"

hatthew 2 hours ago | parent | prev | next [-]

Writing is fundamentally the transfer of information from your brain to my brain. If you have 1000 bits of information you want to transfer, you can't give 300 bits of information to an LLM and have it fill in the remaining 700, because it doesn't know what those 700 bits are. If it's able to guess those 700 bits correctly, then they aren't true information, and you really only have 300 bits you want to transfer. You might as well transfer those bits to me directly, rather than having the LLM add on an extra superfluous 700 bits that I then have to filter out.

TZubiri 11 minutes ago | parent [-]

Remember when LLMs were entering the mainstream, and everyone shared tips on how to superprompt, and one of the hot tips was to tell the AI to write a prompt itself?

Like:

BAD: "Write the specs for a system that does X Y and Z"

GOOD: "Write me a prompt to write the specs for a system that does X Y and Z."

As if the LLM magically knew all about itself and the best tips and tricks to prompt itself, even though it had just come out and there was no scraped information on how to use it yet.

jfreds 2 hours ago | parent | prev | next [-]

AI pull request descriptions are my current pet peeve. The ones I have seen are verbose and filled with meaningless fluff words (“optimized”, “performant” for what? In terms of what?), they leak details about the CoT that didn’t make it into the final solution (“removed the SQLite implementation” what SQLite implementation? There isn’t one on main…), and are devoid of context about _why_ the work is even being done, what alternatives were considered etc.

My first round of code review has become a back and forth with the original author just asking them questions about their description, before I even bother to look at code. At first I decided I’d just be a stick in the mud and juniors would learn to get it right the first time, but it turns out I’m just burning myself out due to spite instead.

senko 4 hours ago | parent | prev | next [-]

I thought this was going to be about (customer support) chatbots, which can be a good thing.

"Don't make me talk to your [customer support] chatbot" reads like "Don't make me go to an ATM for a cash withdrawal". If I can solve a thing quickly and effectively without waiting forever to speak to an overworked customer support agent on another contitent, I would very much like that!

Well, anyways, the post is not about that. It's about posting AI-generated text (blog posts, PR summaries). Which I agree with, although there are a bunch of holes in the argument, such as:

> 1. Figure out what you want to say. 2. Say it. That first figuring-out part is important.

Well, yeah, I can figure out what I want to say, then have the chatbot say it. So looks like the second part is important, too.

Juminuvi an hour ago | parent | prev | next [-]

100% agree. Hopefully etiquette will catch up if enough folks talk about this.

Side note, the number of comments here from people who clearly didn’t read the article is impressive

kokanee 4 hours ago | parent | prev | next [-]

I view inefficient communication as a problem that will wane as LLMs progress, and I think that framing is a bit idealistic about the efficiency of most human-to-human communication anyway. I feel strongly that we shouldn't be forced to interact with chatbots for a much simpler reason: it's rude. It's dismissive of the time and attention of the person on the other end; it demonstrates laziness or an inability to succeed without cutting corners; and it is an affront to the value of human interaction (regardless of efficiency).

ericd 4 hours ago | parent | next [-]

I feel like that ship sailed long ago with phone trees and hour-long support wait times becoming normal. Not that it's an ideal state of affairs, but I'd much rather talk to a chatbot than wait for an hour for a human who doesn't want to talk to anyone, as long as that chatbot is empowered to solve my problem.

anonymous_sorry 3 hours ago | parent | next [-]

Have you ever had a chatbot solve your problem? I don't think this has ever happened to me.

As a reasonably technical user capable of using search, the only way this could really happen is if there was no web/app interface for something I wanted to do, but there was a chatbot/AI interface for it.

Perhaps companies will decide to go chatbot-first for these things, and perhaps customers will prefer that. But I doubt it to be honest - do people really want to use a fuzzy-logic CLI instead of a graphical interface? If not, why won't companies just get AI to implement the functionality in their other UIs?

ericd 3 hours ago | parent | next [-]

Actually, I have; Amazon has an excellent one. I had a few exchanges with it and it initiated a refund for me; it was much quicker than a normal customer service call.

Outside of customer service, I'm working on a website that has a huge amount of complexity to it, and would require a much larger interface than normal people would have patience for. So instead, those complex facets are exposed to an LLM as tools it can call, as appropriate based on a discussion with the user, and it can discuss the options with the user to help solve the UI discoverability problem.

I don't know yet if it's a good idea, but it does potentially solve one of the big issues with complex products - they can provide a simple interface to extreme complexity without overwhelming the user with an incredibly complex interface and demanding that they spend the time to learn it. Normally, designers handle this by just dumbing down every consumer-facing product, and I'd love to see how users respond to this other setup.
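To make that concrete, here's a minimal sketch of what I mean by exposing a complex facet as a tool instead of building UI for it. All the names and the schema are hypothetical, and I'm assuming the OpenAI-style function-calling convention; the real backend call is stubbed out:

```python
# Hypothetical sketch: one complex product facet exposed as a tool the LLM
# can call after discussing options with the user, instead of a dedicated UI.

def set_notification_rules(frequency: str, channels: list[str]) -> dict:
    """Stub for the real backend call that would apply the settings."""
    return {"ok": True, "frequency": frequency, "channels": channels}

# Schema in the OpenAI-style function-calling format. In a real setup this
# list is passed to the chat API so the model knows the tool exists and
# when to call it; here it is shown only for shape.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "set_notification_rules",
        "description": "Configure how often and where the user is notified.",
        "parameters": {
            "type": "object",
            "properties": {
                "frequency": {"type": "string", "enum": ["daily", "weekly"]},
                "channels": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["frequency", "channels"],
        },
    },
}]

HANDLERS = {"set_notification_rules": set_notification_rules}

def dispatch(name: str, arguments: dict) -> dict:
    """Route a tool call emitted by the model to the matching handler."""
    return HANDLERS[name](**arguments)

# Simulate the model deciding to call the tool mid-conversation:
result = dispatch("set_notification_rules",
                  {"frequency": "weekly", "channels": ["email"]})
```

The point is that the discoverability problem moves out of the UI: the model reads the tool descriptions and surfaces the right option in conversation, and the backend only ever sees validated calls through the dispatcher.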

ori_b 3 hours ago | parent | next [-]

I'm happy that LLMs are encouraging people to add discoverable APIs to their products. Do you think you can make the endpoints public, so they can be used for automation without the LLM in the way?

If you need an LLM spin to convince management, maybe you can say something about "bring your own agent" and "openclaw", or something else along those lines?

ericd 2 hours ago | parent [-]

Yep, I’m developing the direct agent access api in parallel as a first class option, seems like the human ui isn’t going to be so necessary going forward, though a little curation/thought on how to use it is still helpful, rather than an agent having to come up with all the ideas itself. I’ve spun off one of the datasets I’ve pulled as an independent x402 api already, plan to do more of those.

anonymous_sorry 2 hours ago | parent | prev [-]

I did think about this use-case as I was typing my first message.

I can see it working for complex products, for functionality I only want to use once in a blue moon. If it's something I'm doing regularly, I'd rather the LLM just tell me which submenu to find it in, or what command to type.

ericd 2 hours ago | parent [-]

Yeah true, might be a good idea to have the full UI and then just have the agent slowly “drive” it for the user, so they can follow along and learn, for when they want to move faster than dealing with a chatbot. Though I think speech to text improves chatbot use speed significantly.

LorenPechtel 2 hours ago | parent | prev [-]

Amazon's robot did replace the package that vanished. I don't believe it ever understood that I had a delivery photograph showing two packages but found only one on my porch. But I doubt a human would have cared, either--cheap item, nobody's going to worry about how it happened. (Although I would like to know--wind is remotely possible, but the front porch has an eddy that brings stuff in; it doesn't take stuff away.)

nharada 3 hours ago | parent | prev [-]

Yeah, as long as the chatbot is empowered to fix a bunch of basic problems, I'm okay with them as the first line of support. The way support is set up nowadays, humans are basically forced to be robots anyway, given a set of canned responses for each scenario and almost no latitude of their own. At least the robot responds instantly.

ericd 2 hours ago | parent [-]

Yep, exactly, the problem comes when chatbots are used to shield all the people who can do stuff from interacting with customers.

SteveGoob 3 hours ago | parent | prev | next [-]

> a bit idealistic about the efficiency of most human-to-human communication.

I don't know if I would call it idealism. I feel like what we're discovering is that while the efficiency of communication is important, the efficacy of communication is more important. And chatbots are far less reliable at communicating the important/relevant information correctly. It doesn't really matter how easy it is to send an email if the email simply says the wrong thing.

To your point though, it's just rude. I've already seen a few cases where people have been chastised for checking out of a conversation and effectively letting their chatbot engage for them. Those conversations revolved around respect and good faith, not efficiency (or even efficacy, for that matter).

nickff 4 hours ago | parent | prev [-]

The problem is that people are very rude to customer service representatives, so companies spend money training CSRs, who often quit after a short period of being abused by customers. Automated reception systems disallow people from reaching representatives for the same reason.

autoexec 3 hours ago | parent [-]

CSRs are abused by call center managers far more often than they are by the people on the other end of the phone line. The endless push for "better" metrics, the terrible pay, the dehumanizing scripts, bad (or zero) training, optimizing to make every employee interchangeable and expendable, unforgiving attendance policies, treating workers like children, etc. Call centers are brutal environments and the reason churn is often so high has very little to do with abuse from the people calling for help. In fact, the last two call centers I had any insight into (to their credit) had strict policies about not taking abuse from customers and would flag abusive customer's accounts.

hellotomyrars 2 hours ago | parent [-]

It can be both. It depends a lot on what kind of product is being supported. Tech support usually doesn't get abuse hurled at it by callers, but financial/medical gets a lot dicier.

That said, I 100% left every call center job I had when I couldn’t put up with the bullshit middle manager crap anymore.

Nothing like having a “team leader” who knows literally nothing about the product who then has to come up with the most nitpicky garbage because they’re required to have criticism on call reviews. Meanwhile some other asshole starts yelling at him to yell at you for not being on the phones enough when the reason I’m not on the phone is because everyone on the team turns to me to ask questions to because, unlike our illustrious leader, I know what I’m doing.

Molitor5901 4 hours ago | parent | prev | next [-]

Related: Please don't make me talk to your AI pretend-human, complete with Asian accent and background call center sounds. That's even more insulting than a chatbot.

titanomachy 4 hours ago | parent [-]

Who did this?

Molitor5901 3 hours ago | parent | next [-]

For me it was a Maryland contracting company, FH Furr, that does electrical, plumbing, etc.

WD-42 3 hours ago | parent | prev | next [-]

Listen to the podcast Shell Game.

morkalork 3 hours ago | parent | prev | next [-]

https://www.ycombinator.com/companies/retell-ai

The different accents and call center background noise are features in their product.

knowaveragejoe 4 hours ago | parent | prev [-]

If you haven't experienced it yet, you will soon.

avatardeejay 18 minutes ago | parent | prev | next [-]

I definitely lean pro AI, and I feel an air of condescension here that doesn't thrill me. But, it wasn't overwhelming and the point does kind of resonate.

I see it in a reddit post, or a twitter comment, I've suspected it in text messages. And I like that angle, the "you're a human. can you please, just", and feeling a little out there for pouring my soul into every word I write, wherever it is. That idea resonates. The frustration of reading a lengthy blurb in what's become an over-saturated style, where I have to work even harder to discern the real meaning than if the person were actually that verbose to begin with.

daft_pink 3 hours ago | parent | prev | next [-]

I just signed up with Gusto for one of my companies. They charged me for premium support automatically and when I tried to dispute it I had to talk in circles with their AI named Gus. Why am I paying through the nose for premium support just to chat with an AI?

jascha_eng 3 hours ago | parent | next [-]

This is not really what the article is about

trollbridge 3 hours ago | parent | prev | next [-]

Hence why I prefer a real CPA with a real person who answers the phone.

user3939382 3 hours ago | parent | prev [-]

Gusto is a nightmare if your account needs fall out of their happy path. Everything is 100% automated with call center scripts to help you otherwise. You will never reach someone with power to fix anything.

mrandish 3 hours ago | parent | prev | next [-]

As a customer, I just want the information I need. While I don't want to talk to a chatbot, I also don't want to talk to a human - and for the same reason: they usually don't have the info I need.

That's the aspect I don't understand. The information I want is almost always something other customers have asked already. I'd much prefer to avoid their customer support maze entirely and help myself on a searchable wiki. Unfortunately, most companies' online product support FAQs only contain answers to obvious shit on the order of RTFM and "is it plugged in." Why not just post the doc their advanced tier 3 support people share amongst themselves? It could go under a warning label like 'preliminary advanced info for engineers'.

I realize people like me represent only around 2-3% of the customers seeking support but it's 2-3% that is able to self-serve and takes more time than average because we invariably have to work through front-line support to get escalated to someone with the non-obvious info that's still been asked many times before. So maybe we're only ~2% but we suck up 4% of support bandwidth and we probably take up closer to ~20% of Tier 3 support - the most expensive, scarce type.

zzo38computer 2 hours ago | parent [-]

I mostly agree (although sometimes it is necessary to talk to someone about it); it would be better to actually have good documentation (so that you do not need to talk to someone about it).

A warning label like you mention is a possibility if that is considered to be necessary, although I think it might be better to have a file that you can download and read (or request by mail or telephone or fax, if this becomes necessary in some circumstances; do not assume the computer always works and is compatible with your file), instead of a searchable wiki.

red75prime 3 hours ago | parent | prev | next [-]

I have a shorter, more cynical version of this: if a person doesn’t provide enough input to a chatbot, I’d be better off talking to the chatbot directly.

tl2do an hour ago | parent | prev | next [-]

The article doesn't address where human oversight is actually necessary. I sometimes use AI for simple spell checking—requiring human review for that would be over-complication. In some more difficult tasks, having AI review AI output works fine for me.

hungryhobbit 3 hours ago | parent | prev | next [-]

I find chatbot conversations to be incredibly similar to dreams.

It's human nature to want to share your dreams, because they are fascinating to you.

However, it's also human nature to want to punch someone in the face when they start talking about this crazy dream they had last night ... because it has nothing to do with you, and doesn't interest you at all.

Similarly, when an AI says something useful to you, in response to your prompts, it's very particular to you. When you try to share it with others ... you get the article.

aprentic 3 hours ago | parent | prev | next [-]

People want to spend as little as possible while getting support for their product as long as possible.

Companies want people to spend as much as possible while doing the minimum work on the product.

Chatbots let companies spend almost nothing while pretending to provide long-term support.

I wonder if something similar to a copyleft license could help. What if there was a contractual "fair business" pledge that companies could add? I imagine that good enough lawyers could craft something that essentially said, "You can only display this contract if you legally guarantee that you do X, Y, Z and do not do A, B, C."

pizzathyme 3 hours ago | parent | prev | next [-]

The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read a low-quality human comment either.

If the AI output was actually better than talking to a real human, more useful, more concise, serving the job to be done, then no one would have a problem with it. In fact they would appreciate it. That future is not here in many areas.

The problem is that people wielding AI right now either [a] are using models that are not good enough, [b] aren't giving them enough context, or [c] are deploying them in a way that makes the output sloppy.

(Insert joke about whether this comment is AI. It's not, but joke away)

WD-42 3 hours ago | parent | next [-]

No. It doesn't matter how good an LLM is. If a person has something to say and they can give the LLM enough context to say it well, they should just write it themselves. There's zero reason to bring an LLM into it. Doing so simply makes your writing less trustworthy, because as a reader I don't know if what I'm reading is genuine from the writer or simply average-of-all-texts filler.

chrysoprace 3 hours ago | parent | prev | next [-]

I disagree. If my colleague can't be bothered to write a PR comment themselves then I can't be bothered to read it. If I can gain the same insights from interfacing an LLM directly then there's no point in this intermediary dance.

metalliqaz 3 hours ago | parent | prev | next [-]

No it isn't. I really do not care what the LLM has to say. If a person has taken the (substantial) time necessary to fill the context with enough information that something interesting comes out, I would much rather they simply give me the inputs. The middleman is just digested Internet text. I've already got one of those on my end.

zahlman 3 hours ago | parent | next [-]

Related: https://blog.gpkb.org/posts/just-send-me-the-prompt/

(I could have sworn there was a popular HN submission a while back of this or a similar blog post, but damned if I can find it now.)

meatmanek 2 hours ago | parent | next [-]

You're probably thinking of https://news.ycombinator.com/item?id=43888803

zahlman an hour ago | parent [-]

Yes, you're abs- I mean, that's the one.

metalliqaz 2 hours ago | parent | prev [-]

wow, it's like that guy is in my head

andrewaylett 3 hours ago | parent | prev [-]

That does somewhat depend on the size of the context.

LLMs won't add information to context, so if the output is larger than the input then it's slop. They're much better at picking information out of context. If I have a corpus of information and prompt an extraction, the result may well contain more information than the prompt. It's not necessarily feasible to transfer the entire context, and also I've curated that specific result as suitably conveying the message I intend to convey.

This does all take effort.

My take is also that I am interested in what people say: I have priors for how worthwhile I expect it to be to read stuff written by various people, and I will update my priors when they give me things to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.

schrectacular 3 hours ago | parent | prev | next [-]

Slop-y indeed

jibal 2 hours ago | parent | prev [-]

> The key thing here is not whether it's AI. The key thing is quality and signal. No one wants to read to a low quality human comment either.

This is so obviously true to intelligent people (and is even a point made in the article) ... it's sad that you're getting downvoted.

The OP wrote

> When I talk to a person, I expect that they are telling me things out of their head — that they have developed a belief and are trying to communicate it to me.

But when I'm having a conversation about a subject (rather than with a friend, partner, or other person with whom I have a relationship and the conversation is part of the having of that relationship) I don't care what is in that person's head, I care about the truth of the matter, so I'm far more interested in their sources, their logic and the validity of same. Unless I'm a psychologist doing a survey, why should I care about some random person's beliefs? Since I'm a truth seeker, I care about their arguments, and of course the quality of their arguments is of paramount importance. I appreciate people who can back up their arguments, and LLM summaries that are chock full of facts gleaned from the massive training data that includes a vast amount of human knowledge are fully appreciated--while being aware that hallucination is possible so I often double check things regardless of the source. OTOH, the pushback to this is from people I consider worse than irrelevant--they not only are willfully ignorant but they reject knowledge seeking for irrational ideological reasons. (I myself see the LLM industry to be extremely problematic, but as long as LLMs exist and are capable of producing quality signal--which is the given here--then I will use them.)

This whole page is illustrative: so many people are telling us things out of their head ... that have nothing to do with the article because they didn't read it. So they blather about their beliefs and opinions about support--because that's how they interpreted the title. These comments are useless.

P.S.

> If all you care about is the facts, and not the other’s relationship to them, why engage with a person at all?

I already said: I'm a truth seeker. Also I sometimes seek to persuade people in public forums--and not necessarily the person I'm corresponding with. And missing is any reason why I should care about internet randos' relationships with their beliefs, other than as a psychological survey.

> You could query a LLM for whatever subject, argument or counterpoint you wish.

I can do better, and can do more, as noted.

> Besides, your hypothetical summaries chock full of facts don’t exist, at least not yet. Most LLM summaries are chock full of filler, thus the name slop, thus why us “ignorant” people hate reading it.

This is an example of a belief that is not supported by the facts--if it's even a belief, which I doubt--it's emo ideology. Putting "ignorant" in quotes doesn't falsify it, and I have never encountered a remotely intelligent person who "hates" reading LLM summaries--this is in the same category as people who reject Wikipedia citations because "anyone can edit it". This person unintelligently reduces all LLM output to "slop"--maybe he should try actually reading the head article, which has a quite different take.

WD-42 2 hours ago | parent [-]

If all you care about is the facts, and not the other’s relationship to them, why engage with a person at all? You could query a LLM for whatever subject, argument or counterpoint you wish.

Besides, your hypothetical summaries chock full of facts don’t exist, at least not yet. Most LLM summaries are chock full of filler, thus the name slop, thus why us “ignorant” people hate reading it.

shubhamintech 3 hours ago | parent | prev | next [-]

The worst part isn't that chatbots are bad at their job. It's that teams shipping them genuinely don't know how bad they are. Nobody's reading the conversations, so nobody catches users hitting dead ends, rage-clicking, or dropping off right after a failed session. The data exists, it just sits there unread.

geauxvirtual 2 hours ago | parent | prev | next [-]

The only chatbot I've semi-enjoyed interacting with was SiriusXM's, during the latest round of the yearly try-to-cancel dance they make you go through. Usually this was a 45-minute to an hour phone call with customer support to eventually get a cheaper rate and continue service.

This last time, it sent me to a Chatbot. In five minutes, I got a cheaper rate than I was previously paying. I'm sort of looking forward to the next interaction to see if I can get even cheaper rates or finally cancel the service.

dylan604 an hour ago | parent [-]

Maybe you can get a Trump >100% discount from the bot if you keep at it.

It's unfortunate that you keep accepting their lower rate, though. If even a very small percentage of people are willing to continue paying at slightly lower rates than the vast majority, it is still earning them money. Maybe you really didn't want to cancel? I always thought wanting to cancel was black and white, but you've turned it into shades of gray.

tombert 4 hours ago | parent | prev | next [-]

I guess part of the advantage of being an extremely long-winded writer who makes lots of typos is that people know that what I'm writing is probably written by a human.

Though maybe people will start supplying context like "no em dashes, and occasionally misspell a word or two", and soon you won't even be able to tell that.

marginalia_nu 3 hours ago | parent [-]

Typos and EM-dashes are not just obvious. They are avoidable. Here's the kicker: Even if you remove those, other AI smells don't just exist. They are obvious.

tombert 3 hours ago | parent | next [-]

Maybe, though maybe we're only noticing it when it's obvious? Sort of a survivorship bias thing?

Who knows how much of the comments on any website are written by humans now; yeah there are plenty of tells so it can be obvious, but that might only be for the exceptionally bad posts.

marginalia_nu 3 hours ago | parent [-]

Maybe. This type of obvious slop is extremely pervasive though, to the point there almost can't be that much undetectable AI writing.

Checked LinkedIn and found four posts in a row that had "here's the kicker".

tombert 3 hours ago | parent [-]

Yeah, I've also seen a lot of "Not X, not Y, just Z" style posts on LinkedIn.

LinkedIn has always been a place full of low-effort posts for people trying to self-promote, so I guess it makes sense to have a robot actually do the thinking for something that is and always has been inherently mindless.

bwat49 3 hours ago | parent | prev | next [-]

Yeah, a trend that I've noticed in online comments is people taking LLM-generated text and just removing the punctuation and making it all lowercase. It's like, dude, it's still so obvious xD

jibal 2 hours ago | parent | prev [-]

People who say such things constantly make incorrect claims about AI authorship.

guerython 2 hours ago | parent | prev | next [-]

On my team we always ship an agent draft with a short human anchor first. Two sentences that explain the motivation and the checks we ran, then the bot block with a label like “agent draft” for anyone who wants the raw output. That way readers know what we actually think and don’t have to guess whether the chat log is the human opinion. Do you have a checklist for when that human intro is enough versus when the whole thing needs to stay private?

mazone 2 hours ago | parent | prev | next [-]

How about the trend where people just copy-paste AI responses back to you in Slack?

this_user 3 hours ago | parent | prev | next [-]

I don't know, occasionally there are some funny results. For instance, I have managed to get AWS' support bot to start smack talking their platform and criticising its often needlessly complex and sometimes incoherent design before cheekily offering to help me make my relatively simple setup even more complex and enterprise-ready.

adamtaylor_13 2 hours ago | parent | prev | next [-]

I agree in principle, but, for me, it all comes down to execution.

I used a product that implemented a VERY good AI chatbot as part of their email support and it was better than human support. It was nearly instant in its response time and answered all of my questions perfectly.

In fact, it wasn't until after the interaction that I realized it was an AI bot! Pretty good IMO and I'd prefer that interaction over holding "...because your call IS important to us."

RayVR 2 hours ago | parent | next [-]

Did you read the article? Author is talking about humans relying on chatbots to formulate all their responses for them without strict direction and refinement.

bendangelo an hour ago | parent | prev [-]

Which product?

TimFogarty 3 hours ago | parent | prev | next [-]

I have noticed that my writing ability has atrophied since I was writing essays in school. Now at work most of my writing is Slack messages. Writing longer, more thoughtful pieces about strategy or performance reviews has become a slog. I suspect that a lot of people have had a similar experience, so offloading the pain of writing to an LLM is appealing.

But frankly LLMs suck at writing. It's not only formulaic, it's uninspired!! So I worry that we're entering an era of mediocre writing. I like the "Have you considered writing?" suggestion. I've been trying to make a habit of writing book reviews so I can counter some of the writing atrophy I've developed. Hopefully it will help me become a better thinker too. As Ray says here: "Understanding your own point of view is an enriching exercise."

zahlman 3 hours ago | parent [-]

> I have noticed that my writing ability has atrophied since I was writing essays in school.

Seems to me like you're doing fine so far. (I hope I haven't just been letting my standards go down the drain...)

> It's not only formulaic, it's uninspired!

Heh.

godelski an hour ago | parent | prev | next [-]

  > The only acceptable pro-AI response to the accusation of AI Slop is to join team Anti-Slop.
I'll never understand why this is controversial. Especially in techie and engineering communities. We of all people love to be grumpy. It's in our nature, because the first step to solving problems is recognizing them. Sweeping shit under the rug is for business people whose only interest is money and who have no care for the product.

ifokiedoke 3 hours ago | parent | prev | next [-]

Reading almost all the comments gives me the sad validation that people truly do not read the article before commenting...

This article is not about support chatbots. It's about clearing up your writing/thoughts and communicating clearly even if you used a chatbot to get there.

padjo 3 hours ago | parent | next [-]

It's absolutely miserable isn't it. People see a headline, decide they have an important thing they must tell the world and just blurt it out. Imagine doing this in conversation. You'd overhear a fragment of conversation and just interject with some semi related bullshit that makes you feel smart and then leave.

Capricorn2481 3 hours ago | parent [-]

People on HN spend most of their time congratulating themselves on how they aren't like other social media platforms, but do the exact same crap anyways.

When it comes to technical discussions, there are so many people on here just regurgitating what they read on an earlier thread. Maybe to test if what they heard was true. Maybe because they just want to sound smart. Not a lot of people actually trying things.

davis 2 hours ago | parent | prev [-]

I can't believe I had to scroll this long to see this comment. Like what the heck is wrong with people on this site. I thought I clicked the wrong comment section.

zikani_03 2 hours ago | parent [-]

I also thought the same thing. Had to double take and scroll to see I'm not alone. It's an interesting phenomenon; I suspect the title was so on the nose that it invoked an immediate reaction.

Anyways, I wish more people commented on what's actually in the article. I've observed what OP is complaining about happening in WhatsApp groups too, especially when there is a difference of opinion; people defer to calling Meta's in-chat AI instead of giving a bit more effort and stating _their_ view.

lemoncookiechip 2 hours ago | parent | prev | next [-]

I don't care if it's a human, a chatbot, or a dog if they fix my problem.

I don't want to contact customer support in the first place. If I'm forced to, it's because something is very wrong, and in that case I don't want to be listening to elevator music and "your call is important to us, please hold" for an hour, only to get my call disconnected and be forced to call again.

The issue is that I've yet to have a chatbot actually fix my issues, or most first-contact human operators for that matter.

appreciatorBus 2 hours ago | parent [-]

The article wasn't about customer support chat bots at all.

kazinator 3 hours ago | parent | prev | next [-]

I don't mind talking to a chatbot if it solves problems and doesn't go in circles.

Don't make me talk to a chatbot while there is zero forward progress in solving the problem.

namegulf 3 hours ago | parent | prev | next [-]

To start with, it's okay for a chatbot to ask some quick qualifying questions so that it can connect you with the right person.

Forcing anything beyond that on a customer is RUDE!

ivarv 4 hours ago | parent | prev | next [-]

For a similar take see Cory Doctorow's recent "No one wants to read your AI slop" - https://pluralistic.net/2026/03/02/nonconsensual-slopping/#r...

cerved 3 hours ago | parent | prev | next [-]

I instruct Claude to write like peff, writes much better commit messages now
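For anyone curious what that kind of instruction can look like, here's a hypothetical sketch of a style note one might put in a project's CLAUDE.md file. The wording and specifics below are illustrative, not the commenter's actual prompt; "peff" is the handle of Jeff King, a longtime Git contributor known for unusually clear commit messages:

```markdown
## Commit messages

Write commit messages in the style of Jeff King (peff)'s commits to git.git:

- Subject line: terse, imperative, roughly under 70 characters, prefixed with
  the area touched (e.g. "refs: avoid redundant lookup in rename path").
- Body: explain the *why* in plain prose before the *what*; walk through the
  reasoning behind the change, including alternatives considered and rejected.
- No bullet-point changelogs, no "This commit..." framing, no filler.
```

Pointing a model at a specific, named body of writing tends to work better than abstract adjectives like "clear" or "concise", since the style is something it can actually imitate.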

arewethereyeta 3 hours ago | parent | prev | next [-]

Amen! All our banks introduced this; we cannot talk to a human unless it's fraud.

foxglacier an hour ago | parent | prev | next [-]

Some people deserve it because they already write in a vague, overly diplomatic, bloated way that's hard to extract the core meaning from, or that has no meaning underneath all the words. Schoolteachers, I mean you. I'm happy using AI to email teachers because they might as well be human-powered AIs themselves.

We do need to be more tolerant of AI writing though. Some people need it because they can't express their ideas well themselves. You wouldn't say "no wheeled vehicles allowed inside" because that would exclude handicapped people who need wheelchairs.

einpoklum 2 hours ago | parent | prev | next [-]

> We increasingly use coding agents to create PRs.

No we don't, and neither should you. Don't make me read your chatbot's PR.

data-ottawa 20 minutes ago | parent | prev | next [-]

Topical, but not related to the article: I just had the worst experience with a corporate AI chatbot.

Apparently my mobile provider switched to a voice chatbot only phone system in September. I called them today because of a price increase and some weird long distance charges on my bill.

I call, the chatbot answers, I confirm my account info and enter my PIN, then ask it “why did my price go up”. “Your price went up because we made a change to your account.” Wow, super. I ask it if it can reduce the price, no. I ask it about the long distance charges and it tells me to check my account statement online. I ask to be transferred to a human and it asks why, but it does transfer me over to their callback system. As a first line of support defence, that wasn’t so bad.

The timing of the transfer is off so I only hear half of what the callback system says. It requires me to verify the number I want to be called back on, then it tells me I can type in a time I want the callback. I think to myself, is this 24 hour time, how does AM/PM work, what time slots are available, how do those work?

While I’m thinking about all of this it repeats the instructions. I type in 1 minute into the future because I don’t want to waste my own time waiting, just please call me back ASAP. “That time was unavailable”. I guess that makes sense, it’s very soon and maybe slots are on 5 minute windows. I have to confirm my phone number again and I can enter a new time. I try 10 minutes from now, that’s on a clean 5 minute boundary. Nope. Confirm again and try the nearest 15 minute boundary. Nope — and this time it hangs up and I have to call back and start from zero.

I call back and explain to the bot (again) that I need to talk to a human; it’s a billing issue, thanks. It fails to understand me, which seems to get it stuck in a loop. After two more turns of it asking me to repeat myself, I hang up and call back again. This time the bot does understand me, and I try 3 more callback times for today, all of which fail, and it hangs up.

I’ve just spent 10 minutes talking to a wall and punching in numbers. The phone wall is clearly unassailable — all paths lead to the broken callback subsystem.

I try the text chatbot on the site. I convince it to put me in contact with a human chat operator, whom I then have to convince that it’s worth calling me, because he couldn’t help me over chat and each chat turn took over a minute.

Finally a human calls me! Before we can talk I need to open a text, click the link, then enter the code she tells me, then enter my account PIN. It felt like I might be getting phished; this required such a weird chain of info. She tells me they put a notice of the price increase in my December bill (which was for the right amount, so I didn’t read it), so this is all above board. She says if I want a cheaper plan I should check the app, and she won’t even tell me what the options are. I ask if they’ll price match competitors and she says no.

At this point I told her I was considering leaving if they won’t price match (and also that the new support service was very bad). She says she’s sorry to hear that, to check the app, then pauses and asks me if I’m happy with my internet services, as if anything about our interaction says “please sign me up with more services!” I know they have to ask about the upsell because they always do, but wow.

The entire reason I’ve been a customer so long was that for a decade I would call and within 5 minutes be able to update my account to a new plan. Usually the support staff were nice and could offer me some loyalty discount, and I was happy.

I just (as in just) finished cancelling my entire account, then checked HN, and “Don’t make me talk to your chatbot” is the top article. Serendipity at its finest.

ares623 3 hours ago | parent | prev | next [-]

Day by day i'm starting to lean towards this take https://anthonymoser.github.io/writing/ai/haterdom/2025/08/2...

patrickmay 3 hours ago | parent [-]

I regret that I have but one upvote to give to this comment.

aiwrita 2 hours ago | parent | prev | next [-]

Nice

jmyeet 3 hours ago | parent | prev | next [-]

I'm reminded of the Air Canada customer service chatbot. It completely made up a refund policy (and there are still people on HN who insist LLMs don't hallucinate) and a court ruled the company had to honor it [1].

The only way to deal with this is to make the implementation not worth it by constantly bypassing it to speak to a human and/or making it cost money by getting it to give you things you're not otherwise entitled to.

I really wonder how these things will handle prompt injection and similar things. I have no confidence any of this is secure.

Wait until this comes to healthcare and it'll be chatbots handling appeals to prior authorization denials, wasting even more physician time.

[1]: https://www.wired.com/story/air-canada-chatbot-refund-policy...

onion2k 3 hours ago | parent | prev | next [-]

There's been a lot of "the world doesn't work the way I want it to" on HN recently. I suspect this is a function of an aging readership more than anything particularly groundbreaking about hot takes on up-and-coming tech.

"Anything invented after you're thirty-five is against the natural order of things." Douglas Adams

WD-42 3 hours ago | parent [-]

Do you enjoy reading slop? I fail to see how this is a controversial take.

Ajakks 2 hours ago | parent [-]

You act as if the internet were a high-society book club where all the articles were written by Ivy League grads.

I recall geocities, angelfire, all the chans.

The internet has always been a cesspool with little islands of quality floating in a proverbial sewage of human output. In theory AI slop will improve.

A racist, sexist, ignorant online community of humans 20 years ago, if it is still active, is almost certainly still a racist, sexist, and ignorant community today.

sfink 2 hours ago | parent | next [-]

Being able to name especially egregious forums is the point. AI slop isn't worse than preceding slop, but it is more widespread, partly because it's more socially acceptable than racism, sexism, and ignorance, and partly because it's harder to identify.

Similarly, email spam that is easy to automatically categorize is not a problem.

Making slop less sloppy makes the problem worse, not better. You could claim that that's only up to a threshold, but there's a pretty strong information theoretic argument against that.

WD-42 2 hours ago | parent | prev [-]

So your straw man is that the internet already had bad stuff on it? C'mon, you can do better. Adding more bad to bad is still bad.

causal 2 hours ago | parent | prev [-]

I have found that chatbots embedded in some spaces can be useful, e.g. docs. Stagehand for example embeds a little query form at the bottom of their docs page, and I've found the chatbot it engages can quickly direct me to the documentation I'm looking for: https://docs.stagehand.dev/v3/first-steps/introduction