waldopat 4 hours ago

I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation type tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. That's why folks are trying to create llms.txt files, to become more discoverable to ChatGPT specifically.
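For reference, the llms.txt proposal (from llmstxt.org) is just a plain markdown file served at the site root, pointing crawling LLMs at the pages worth reading. A hypothetical example, with made-up names and URLs:

```markdown
# ExampleCo

> ExampleCo sells prescription glasses online. This file lists the
> pages most useful to an LLM answering questions about us.

## Docs

- [Product catalog](https://example.com/catalog.md): full product list
- [Ordering FAQ](https://example.com/faq.md): shipping, returns, lens options
```

Whether ChatGPT actually consumes these files today is an open question; the format is a convention sites are adopting speculatively.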

You can see the very different response by OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

Anthropic proactively saying they will not pursue ad-based revenue isn't just "one of the good guys" positioning; I think it signals they may be stabilizing on a business model of both seat- and usage-based subscriptions.

Either way, both companies are hemorrhaging money.

guidoism 3 hours ago | parent | next [-]

> ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.

panarky 2 hours ago | parent [-]

Before Google, web search was a toxic stew of conflicts of interest. It was impossible to tell if search results were paid ads or the best possible results for your query.

Google changed all that, and put a clear wall between organic results and ads. They consciously structured the company like a newspaper, to prevent the information side from being polluted and distorted by the money-making side.

Here's a snip from their IPO letter [0]:

Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.

Anthropic's statement reads the same way, and it's refreshing to see them prioritize long-term values like trust over short-term monetization.

It's hard to put a dollar value on trust, but even when they fall short of their ideals, it's still a big differentiator from competitors like Microsoft, Meta and OpenAI.

I'd bet that a large portion of Google's enterprise value today can be traced to that trust differential with their competitors, and I wouldn't be surprised to see a similar outcome for Anthropic.

Don't be evil, but unironically.

[0] https://abc.xyz/investor/founders-letters/ipo-letter/default...

AceJohnny2 2 hours ago | parent [-]

I agree. Having watched Google shift from its younger idealistic values to its current corrupted state, I can't help but be cynical about Anthropic's long-term trajectory.

But if nothing else, I can appreciate Anthropic's current values, and hope they will last as long as possible...

Gud 2 hours ago | parent | prev | next [-]

Disagree.

I end up using ChatGPT for general coding tasks because of the session/weekly limits on Claude Pro, and it works surprisingly well.

The best is IMO to use them both. They complement each other.

stavros an hour ago | parent [-]

I use OpenCode and I made an "architect" agent that uses Opus to make a plan, then gives that plan to a "developer" agent (with Sonnet) that implements it, and a "reviewer" agent (Codex) reviews it in the end. I've gotten much better results with this than with straight up Opus throughout, and obviously hit the limits much less often as well.
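The architect → developer → reviewer hand-off described above can be sketched generically. This is not OpenCode's actual configuration or API, just an illustration of the orchestration pattern; `call_model` is a hypothetical stub standing in for a real LLM API call, and the model names are the ones from the comment:

```python
def call_model(model: str, prompt: str) -> str:
    # Stub: a real setup would call the provider's API here.
    return f"[{model} output for: {prompt[:40]}...]"

def architect(task: str) -> str:
    """Strongest (most expensive) model only writes the plan."""
    return call_model("opus", f"Write an implementation plan for: {task}")

def developer(plan: str) -> str:
    """Cheaper model does the bulk of the token-heavy implementation."""
    return call_model("sonnet", f"Implement this plan:\n{plan}")

def reviewer(code: str) -> str:
    """A third model from a different vendor reviews the result."""
    return call_model("codex", f"Review this implementation:\n{code}")

def pipeline(task: str) -> str:
    plan = architect(task)
    code = developer(plan)
    return reviewer(code)

print(pipeline("add retry logic to the HTTP client"))
```

The cost intuition: the expensive model sees only the short planning prompt, while the long implementation loop runs on the cheaper model, which is why the usage limits bite less often.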

johnsimer 4 hours ago | parent | prev [-]

Both companies are making bank on inference

waldopat 3 hours ago | parent | next [-]

You may not like these sources, but everyone from the tomato throwers to the green visor crowd agrees they are losing money. How and when they make up the difference is open to speculation.

https://www.wheresyoured.at/why-everybody-is-losing-money-on...

https://www.economist.com/business/2025/12/29/openai-faces-a...

https://finance.yahoo.com/news/openais-own-forecast-predicts...

exitb 3 hours ago | parent | prev | next [-]

Maybe on the API, but I highly doubt that the coding agent subscription plans are profitable at the moment.

tvink 3 hours ago | parent [-]

For sure not

ehsanu1 4 hours ago | parent | prev | next [-]

Could you substantiate that? Does that take into account training and staffing costs?

ihsw 3 hours ago | parent [-]

The parent specifically said inference, which does not include training and staffing costs.

lysace 3 hours ago | parent | prev [-]

That is the big question. Got reliable data on that?

(My gut feeling tells me Claude Code is currently underpriced with regards to inference costs. But that's just a gut feeling...)

tvink 3 hours ago | parent | next [-]

https://www.wheresyoured.at/costs/

Their AWS spend being higher than their revenue might hint at the same.

Nobody has reliable data; I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.

simianwords 3 hours ago | parent | prev [-]

> If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis), it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.

https://epoch.ai/gradient-updates/can-ai-companies-become-pr...
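The margin calculation in that quote is simple arithmetic, (revenue − compute) / revenue. A sketch with hypothetical round numbers (not actual figures for either company):

```python
def gross_margin(revenue: float, compute_cost: float) -> float:
    """Gross margin on an accounting basis: (revenue - cost) / revenue."""
    return (revenue - compute_cost) / revenue

# Hypothetical: $10B revenue against $5B compute spend -> 50% margin,
# vs. the 60-80% typical of software companies.
print(f"{gross_margin(10e9, 5e9):.0%}")
```

Note this "gross" figure excludes training and staffing, which is why a positive margin here is compatible with the companies losing money overall.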

lysace 3 hours ago | parent [-]

The context of that quote is OpenAI as a whole.