areoform 2 hours ago

Claude's actually pretty great at this! I used to use Claude a lot to answer interesting questions (which I'll be writing up!). More generally, Claude is palpably different from most other agents. I'd recommend these models – especially Opus – without qualification.

But there's a process risk here based on their current practices. I'm hoping those practices change so that I can recommend Claude to everyone I know, but as of now, there's existential risk exposure here that's greater than Google's.

Anthropic's automated systems can and will ban you for pretty arbitrary things, and you won't get human support – or Claude – even if you're an enterprise paying through the nose. And there's zero redress unless you go viral on social media. Or know someone who knows someone. See: https://x.com/Whizz_ai/status/2051180043355967802 https://x.com/theo/status/2045618854932734260

And I say that as someone who likes how Anthropic has been training Claude and Opus. I just don't think they're prepared to be the trillion dollar company they've become. They are – in a very real way – suffering from success. Which is extremely inconvenient to be on the receiving end of when you're on a deadline.

brunoborges an hour ago | parent | next [-]

Before AI, shipping code to production used to be a two-person task: one person writes the code, another reviews it. Now, with AI writing the code, the developer who was supposed to write it only has to review it. And that's because they're responsible for the code they ship.

Code review has become unbearable because, before AI, developers were reviewing code as they wrote it in the first place. Granted, that was never perfect, which is why a second person reviewing the code was (is?) a best practice. But effectively there was always some level of code review happening as developers wrote code.

I fear it is far more boring to review financial and medical documents written entirely by AI than it is to write (and simultaneously review) them yourself. And far more dangerous to ship mistakes than in most software.

areoform an hour ago | parent | next [-]

I am/was writing up an interesting hypothesis with Claude's help. But I redid the most important parts of the data pipeline manually. As in, I went in and cmd-c + cmd-v'ed the data by hand to create a reference, and I'm randomly spot-checking 33% of the larger records.

The analysis itself I'm doing by hand.
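For what it's worth, the spot-checking step above can be sketched in a few lines of Python. Everything here other than the 33% figure is my own assumption (the size threshold, the fixed seed, the record format – the original comment doesn't describe its pipeline in that detail):

```python
import random

def pick_spot_checks(records, fraction=0.33, size_key=len, seed=0):
    """Pick a random ~fraction of the larger records to verify by hand.

    'Larger' is assumed here to mean at or above the median size; both
    that threshold and the default fraction are illustrative choices,
    not taken from the original comment's actual pipeline.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sizes = sorted(size_key(r) for r in records)
    median = sizes[len(sizes) // 2]
    larger = [r for r in records if size_key(r) >= median]
    k = max(1, round(len(larger) * fraction))
    return rng.sample(larger, k)  # sample without replacement

# Hypothetical records of varying size, standing in for pipeline output
rows = [f"row-{i}" * (i % 5 + 1) for i in range(100)]
sample = pick_spot_checks(rows, fraction=0.33)
print(len(sample))
```

Each sampled record would then be compared against the hand-built reference copy; the fixed seed matters so you can re-derive exactly which records you checked.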

traceroute66 an hour ago | parent | prev [-]

> the developer that was supposed to write the code, only has to review it.

But more often than not that developer ends up reviewing far more lines of code due to the typical verbosity of an LLM.

brunoborges 33 minutes ago | parent [-]

100%... that's why I say code review became unbearable!

intended 2 hours ago | parent | prev | next [-]

> and you won't get human support or Claude – even if you are an enterprise paying out of your nose. And there's 0 redressal unless you go viral on social media.

Sadly, this sounds like par for the course in tech. Too many messages and requests for help depend on knowing someone in the right Slack groups.

areoform 2 hours ago | parent [-]

Which is very confusing to me. If you have groundbreaking AI, you can offer groundbreaking support at scale.

hvb2 an hour ago | parent | next [-]

You wouldn't build a chatbot for that; imagine how easy it would be to make that thing go off the rails and let anyone reactivate their account. Really, you can't trust it with any business function...

At least, that's the message this sends, in my opinion.

traceroute66 an hour ago | parent | prev | next [-]

> If you have groundbreaking AI, you can offer groundbreaking support at scale

You're a funny one, aren't you...

Meet "Fin" Anthropic's "where support questions go to die" so-called-support bot, created by Intercom but powered by Anthropic.

Maybe it's an internal in-joke at the Anthropic offices ... "Fin" in French means "end".

I don't know anyone who has had a positive experience with "Fin" ... or who has ever spoken to a human at Anthropic support, for that matter, even after asking "Fin" to escalate.

intended 30 minutes ago | parent | prev [-]

Nope.

Customer support and safety are cost centers. They don't scale like software does, and no one's KPIs are going to improve dramatically if you provide support beyond a point.

AI and LLMs are the cool tech, and the most important thing is to push the frontier. Money spent elsewhere is money not spent on R&D.

It would be hilarious if it wasn’t the GDPs of nations being spent on this.

dakolli 2 hours ago | parent | prev [-]

They aren't even close to a $1T company; they're valued at under $400B, and that's at something like a 20x–30x revenue multiple. They can probably raise money at a higher valuation, but it's literally valuation based on hype, not revenue.

areoform 2 hours ago | parent | next [-]

https://www.businessinsider.com/anthropic-trillion-dollar-va...

KellyCriterion an hour ago | parent | prev [-]

Check the secondaries market ;-)