areoform 7 hours ago

I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for the reasons you'd expect.

Out of all of the tech organizations, frontier labs are the ones you'd expect to be trying out cutting-edge forms of support. Out of all the different things these agents can do, surely most forms of "routine" customer support are the lowest-hanging fruit?

I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

I also think it's essential for the Anthropic platform in the long run. And not just in the obvious ways (customer loyalty, etc.). I don't know if anyone has brought this up at Anthropic, but it's a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

eightysixfour 7 hours ago | parent | next [-]

> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

swiftcoder 6 hours ago | parent | next [-]

> shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

eightysixfour 6 hours ago | parent | next [-]

I was closer to upper-middle management and executives; it could have done the things I did (consulting to those people) and the things they did.

It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

pixl97 6 hours ago | parent | prev | next [-]

As someone who does support I think the end result looks a lot different.

AI works quite well for a lot of support questions and does solve lots of problems in almost every field that needs support. The issue is that this commonly removes the roadblocks that kept your users cautious, so they do something incredibly stupid that then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.

AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.

The company I work at ran an experiment: look at past tickets over a quarter, predict which issues would generate the most tickets in the next quarter, and decide which issues should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and it called out a number of things we had deemed less important that had large impacts later.
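That kind of backtest is easy to sanity-check before trusting it. A minimal sketch (all categories and counts below are invented for illustration): rank issue categories by ticket volume in one quarter, then score a prediction of the next quarter's top issues against what actually happened.

```python
from collections import Counter

def top_issues(tickets, k=3):
    """Rank issue categories by ticket count, descending."""
    return [cat for cat, _ in Counter(tickets).most_common(k)]

def overlap_score(predicted, actual):
    """Fraction of the actual top issues the prediction caught."""
    return len(set(predicted) & set(actual)) / len(actual)

# Hypothetical quarterly ticket logs (one category label per ticket)
q1 = ["billing"] * 40 + ["login"] * 25 + ["export"] * 10 + ["api"] * 5
q2 = ["billing"] * 35 + ["api"] * 30 + ["login"] * 20 + ["export"] * 5

# Naive baseline: assume next quarter's top issues match this quarter's
baseline_prediction = top_issues(q1)
# Hypothetical model output, e.g. from an LLM's read of the ticket text
model_prediction = ["billing", "api", "login"]

actual = top_issues(q2)
print(overlap_score(baseline_prediction, actual))  # catches 2 of 3
print(overlap_score(model_prediction, actual))     # catches 3 of 3
```

The point isn't the scoring metric; it's that a held-out quarter gives you a cheap, objective comparison between the model's calls and the calls your team actually made at the time.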

swiftcoder 5 hours ago | parent [-]

I think that's more the area I'd expect genAI to be useful (support folks using it as a tool to address specific scenarios), rather than just replacing your whole support org with a branded chatbot - which I fear is what quite a few management types are picturing, and licking their chops at the resulting cost savings...

0xferruccio 6 hours ago | parent | prev | next [-]

to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term

and these people are not junior developers working on trivial apps

swiftcoder 6 hours ago | parent [-]

Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine

pinkmuffinere 6 hours ago | parent | prev | next [-]

Perhaps even more so given the following tagline: "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle manager or executive, though.

eightysixfour 6 hours ago | parent [-]

Consultant to, so yes. It could have replaced me and a ton of the work of the people I was supporting.

pinkmuffinere 6 hours ago | parent [-]

Ah I see, that definitely lends some weight to the claim then.

Terr_ 6 hours ago | parent | prev [-]

> bullish [...] but not my specialty

IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.

__________

1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."

2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."

3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"

4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."

danielbln 6 hours ago | parent | prev | next [-]

There are some solid use cases for AI in support: document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer-support stakeholders who are on board, not just pushed down the gullet by top brass.
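The triage piece in particular can be prototyped in a way that never dead-ends the customer. A toy sketch (keywords and queue names are invented for illustration): match an inquiry against known categories, and fall back to a human for anything unrecognized.

```python
# Hypothetical keyword-to-queue routing table
ROUTES = {
    "refund": "billing",
    "invoice": "billing",
    "password": "account",
    "crash": "technical",
}

def triage(inquiry: str) -> str:
    """Route an inquiry to a queue; unmatched inquiries go to a person."""
    text = inquiry.lower()
    for keyword, queue in ROUTES.items():
        if keyword in text:
            return queue
    return "human"  # never trap the customer in the bot

print(triage("I need a refund for last month"))  # billing
print(triage("Something weird happened"))        # human
```

A real system would use a classifier or an LLM instead of keywords, but the design choice is the same: the fallback path to a human is part of the spec, not an afterthought.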

eightysixfour 6 hours ago | parent [-]

Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

mikkupikku 5 hours ago | parent [-]

Demanding a person on the phone use the website on your behalf is a great life hack, I do it all the time. Often they try to turn me away saying "you know you can do this on our website", I just explain that I found it confusing and would like help. If you're polite and pleasant, people will bend over backwards to help you out over the phone.

With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.

eightysixfour 5 hours ago | parent [-]

Sorry, I disagree here. For the specific flow I'm talking about - monthly recurring payments - the UX is about as highly optimized for success as it gets. There are ways to do it via the web, on the phone with a bot, bill pay in your own bank, set it up in-store, in an app, etc.

These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.

mikkupikku 5 hours ago | parent [-]

Recurring monthly payments I set to go automatic, but setting that up in the first place I usually do through a phone call. I know some people just want somebody to talk to, same as going through the normal checkout lines at the grocery store, but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

eightysixfour 4 hours ago | parent [-]

> but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.

There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.

But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."

hn_acc1 5 hours ago | parent | prev [-]

>Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?

lukan 7 hours ago | parent | prev | next [-]

I would say it is a strong sign that they do not yet trust their agents to make the significant business decisions a support agent would have to make. Reopening accounts, closing them, refunds... people would immediately start trying to exploit them. And would likely succeed.

atonse 7 hours ago | parent [-]

My guess is that it's more "right now we are using every talented individual to make sure our datacenters don't burn down from all the demand. We'll get to support once we can come up for air."

But at the same time, they have been hiring folks to help with Non Profits, etc.

WarmWash 7 hours ago | parent | prev | next [-]

Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.

embedding-shape 7 hours ago | parent | next [-]

> Anthropic's strategy seems to be to just focus on coding, and they do it well.

Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview

WarmWash 6 hours ago | parent | next [-]

Anthropic isn't bothering with image models, audio models, video models, or world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't have open models either.

Anthropic has Claude Code, it's a hit product, and SWEs love Claude models. Watching Anthropic rather than listening to them makes their goals clear.

Ethee 7 hours ago | parent | prev [-]

Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only a niche group becomes power users/finds market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story; sometimes the reality of your product's capabilities and what the people giving you money want aren't aligned.

0xbadcafebee 6 hours ago | parent | prev | next [-]

Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

arcanemachiner 6 hours ago | parent | prev [-]

Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?

WarmWash 6 hours ago | parent [-]

You'll get 30 different opinions, and all of them will disagree with each other.

Use the top models and see what works for you.

Lerc 6 hours ago | parent | prev | next [-]

There is a discord, but I have not found it to be the friendliest of places.

At one point I observed a conversation in which, to me, a user seemed to be communicating in good faith, was given instructions they clearly did not understand, and was subsequently banned for not following the rules.

It seems now they have a policy of

    Warning on First Offense → Ban on Second Offense
    The following behaviors will result in a warning. 
    Continued violations will result in a permanent ban:

    Disrespectful or dismissive comments toward other members
    Personal attacks or heated arguments that cross the line
    Minor rule violations (off-topic posting, light self-promotion)
    Behavior that derails productive conversation
    Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.

I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.

magicmicah85 6 hours ago | parent | prev | next [-]

https://support.claude.com/en/articles/9015913-how-to-get-su...

Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.

csours 6 hours ago | parent | prev | next [-]

Human attention will be the luxury product of the next decade.

munk-a 6 hours ago | parent | prev | next [-]

> They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

throwawaysleep 6 hours ago | parent [-]

> to send their most frustrated customers through a chatbot

But do those frustrated customers matter?

munk-a 6 hours ago | parent [-]

I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.

throwawaysleep 6 hours ago | parent [-]

Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.

throwawaysleep 6 hours ago | parent | prev | next [-]

Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.

It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

Are there enough people who need support that it matters?

pixl97 5 hours ago | parent [-]

>I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support.

In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.

'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can often be as complicated as, and eat up as much time as, those of your larger clients, depending on the industry.

furyofantares 6 hours ago | parent | prev [-]

> I recently found out that there's no such thing as Anthropic support.

The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

kmoser 6 hours ago | parent [-]

If you want to split hairs, it seems that Anthropic has support as a noun but not as a verb.

furyofantares 5 hours ago | parent [-]

I mean the comment says they literally don't have support and also complains they don't have a support bot, when they have both.

https://support.claude.com/en/collections/4078531-claude

> As a paid user of Claude or the Console, you have full access to:

> All help documentation

> Fin, our AI support bot

> Further assistance from our Product Support team

> Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.