lordnacho 11 hours ago

Are you saying they'd be profitable if they didn't pour all the winnings into research?

From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.

jsnell 11 hours ago | parent | next [-]

They'd be profitable if they showed ads to their free-tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show; they'd be profitable with 1/10th the ARPU of Meta or Google.

And they would hardly be incompetent at targeting. If they were to use the chat history for targeting, they might have the most valuable ad-targeting data set ever built.
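As a rough sanity check on the "1/10th of Meta's ARPU" claim, here is a back-of-envelope sketch. All inputs are assumptions, not figures from this thread: Meta's worldwide ARPU has been on the order of $40-50/user/year, and the half-billion-user figure quoted further down the thread is used for scale.

```python
# Back-of-envelope sketch; every input below is an assumption.
meta_arpu_per_year = 44.0        # assumed USD/user/year, rough public figure for Meta
fraction_of_meta = 0.10          # the "1/10th the ARPU of Meta" claim
free_tier_users = 500_000_000    # order-of-magnitude active-user count

annual_ad_revenue = free_tier_users * meta_arpu_per_year * fraction_of_meta
# ≈ $2.2B/year under these assumptions
```

Whether that covers inference costs for those users is a separate question, but it shows the claim is at least arithmetically plausible.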

lxgr 11 hours ago | parent | next [-]

Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.

nacnud 11 hours ago | parent | next [-]

True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it.

SJC_Hacker 9 hours ago | parent | next [-]

There will be a respected boundary for a time; then, as advertisers find it's more effective, the boundaries will start to disappear.

calvinmorrison 10 hours ago | parent | prev [-]

Google did it. LLMs are the new Google search. It'll happen sooner or later.

ptero 9 hours ago | parent [-]

Yes, but for a while google was head and shoulders above the competition. It also poured a ton of money into building non-search functionality (email, maps, etc.). And had a highly visible and, for a while, internally respected "don't be evil" corporate motto.

All of which made it much less likely that users would bolt in response to each real monetization step. This is very different to the current situation, where we have a shifting landscape with several AI companies, each with its strengths. Things can change, but it takes time for 1-2 leaders to consolidate and for the competition to die off. My 2c.

evilfred 9 hours ago | parent | prev | next [-]

how is it "trusted" when it just makes things up

andrewflnr 9 hours ago | parent | next [-]

That's a great question to ask the people who seem to trust them implicitly.

handfuloflight 9 hours ago | parent [-]

They aren't trusted in a vacuum. They're trusted when grounded in sources and their claims can be traced to sources. And more specifically, they're trusted to accurately represent the sources.

andrewflnr 8 hours ago | parent | next [-]

Nope, lots of idiots just take them at face value. You're still describing what rational people do, not what all actual people do.

handfuloflight 8 hours ago | parent [-]

Fair enough.

PebblesRox 6 hours ago | parent | prev | next [-]

If you believe this, people believe everything they read by default and have to apply a critical-thinking filter on top of it to avoid believing the thing.

I know I don't have as much of a filter as I ought to!

https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/TiDGXt3WrQwt...

andrewflnr 4 hours ago | parent [-]

That checks out with my experience. I don't think it's just reading, either. Even deeper than stranger danger, we're inclined to assume other humans communicating with us are part of our tribe, on our side, and not trying to deceive us. Deception, and our defenses against deception, are a secondary phenomenon. It's the same reason that jokes like "the word 'gullible' is written on the ceiling", gesturing to wipe your face at someone with a clean face, etc., all work by default.

sheiyei 8 hours ago | parent | prev [-]

> they're trusted to accurately represent the sources.

Which is still too much trust

tsukikage 8 hours ago | parent | prev | next [-]

“trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved or not.

pegasus 7 hours ago | parent | next [-]

For one, it's not like we're at some CS conference, so we're engaging in ordinary speech here, as far as I can tell. For two, "trusted" doesn't have just one meaning, even in the narrower context of CS.

lxgr 7 hours ago | parent | prev [-]

I meant it in the ordinary speech sense (which I don't even think contradicts the "CS sense" fwiw).

Many people have a lot of trust in anything ChatGPT tells them.

dingnuts 9 hours ago | parent | prev [-]

15% of people aren't smart enough to read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it

you think those people don't believe the magic computer when it talks?

ModernMech 9 hours ago | parent | prev | next [-]

I imagine they would be more like product placements in film and TV than banner ads. Just casually dropping a recommendation and link to Brand (TM) in a query. Like those Cerveza Cristal ads in Star Wars. They'll make it seem completely seamless within the original query.

thewebguyd 8 hours ago | parent | next [-]

I just hope that if it comes to that (and I have no doubt that it will), regulation will catch up and mandate that any ad or product placement is labeled as such, not just slipped in with no disclosure whatsoever. But given that we've never regulated influencer marketing, which does the same thing, nor are TV placements explicitly called out as "sponsored", I have my doubts. One can hope, though.

lxgr 7 hours ago | parent | prev [-]

Yup, and I wouldn't be willing to bet that any firewall between content and advertising would hold, long-term.

For example, the more product placement opportunities there are, the more products can be placed, so sooner or later that'll become an OKR for the "content side" of the business as well.

Analemma_ 10 hours ago | parent | prev [-]

Like that’s ever stopped the adtech industry before.

It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis.

bugbuddy 11 hours ago | parent | prev | next [-]

I heard the majority of users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I am genuinely curious. Programmers are known to be the stingiest consumers out there.

cuchoi 11 hours ago | parent | next [-]

I'm not sure that stereotype holds up. Developers spend a lot: courses, cloud services, APIs, plugins, even fancy keyboards.

A quick search shows that clicks on ads targeting developers are expensive.

Also there is a ton of users asking to rewrite emails, create business plans, translate, etc.

Lewton 10 hours ago | parent | prev | next [-]

> I heard majority of the users are techies asking coding questions.

Citation needed? I can't sit on a bus without spotting some young person using ChatGPT

jsnell 9 hours ago | parent | prev | next [-]

OpenAI has half a billion active users.

You don't need every individual request to be profitable, just the aggregate. If you're doing a Google search for, like, the std::vector API reference you won't see ads. And that's probably true for something like 90% of the searches. Those searches have no commercial value, and serving results is just a cost of doing business.

By serving those unmonetizable queries the search engine is making a bet that when you need to buy a new washing machine, need a personal injury lawyer, or are researching that holiday trip to Istanbul, you'll also do those highly commercial and monetizable searches with the same search engine.

Chatbots should have exactly the same dynamics as search engines.
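The blended economics described above can be sketched with some illustrative numbers. These are made-up assumptions to show the structure of the bet, not measurements from any search engine or chatbot:

```python
# Illustrative sketch: most queries lose money, the commercial tail pays for all of them.
# Every number below is an assumption chosen for readability.
queries_per_user = 1000          # queries per user per year (assumed)
commercial_share = 0.10          # the ~10% of queries with commercial intent
revenue_per_commercial = 0.05    # USD earned per monetized query (assumed)
cost_per_query = 0.002           # serving cost paid on *every* query (assumed)

revenue = queries_per_user * commercial_share * revenue_per_commercial   # 5.0
cost = queries_per_user * cost_per_query                                  # 2.0
profit_per_user = revenue - cost                                          # 3.0
```

The point is structural: as long as the monetizable tail outearns the serving cost of the whole query stream, the 90% of unmonetizable queries are worth eating as a customer-retention cost.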

disgruntledphd2 10 hours ago | parent | prev | next [-]

You'd probably do brand marketing for Stripe, Datadog, Kafka, Elastic Search etc.

You could even loudly proclaim that the ads are not targeted at users, which HN would love (but really it would just be old-school brand marketing).

JackFr 8 hours ago | parent | prev | next [-]

You sell them Copilot. You sell them CursorAI. You sell them Windsurf. You sell them Devin. You sell them Claude Code.

Software guys are doing much, much more than treating LLMs like an improved Stack Overflow. And a lot of them are willing to pay.

tsukikage 8 hours ago | parent | prev | next [-]

…for starters, you can sell them the ability to integrate your AI platform into whatever it is they are building, so you can then sell your stuff to their customers.

yamazakiwi 9 hours ago | parent | prev | next [-]

A lot of people use it for cooking and other categories as well.

Techies are also great for network growth and verification for other users, and act as community managers indirectly.

LtWorf 11 hours ago | parent | prev | next [-]

According to fb's aggressively targeted marketing, you sell them donald trump propaganda.

disgruntledphd2 10 hours ago | parent [-]

It's very important to note that advertisers set the parameters in which FB/Google's algorithms and systems operate. If you're 25-55 in a red state, it seems likely that you'll see a bunch of that information (even if FB are well aware you won't click).

LtWorf 8 hours ago | parent [-]

I'm not even in USA and I've never been in USA in my entire life.

naravara 9 hours ago | parent | prev [-]

The existence of the LLMs will themselves change the profile and proclivities of people we consider “programmers” in the same way the app-driven tech boom did. Programmers who came up in the early days are different from ones who came up in the days of the web are different from ones who came up in the app era.

miki123211 9 hours ago | parent | prev | next [-]

And they wouldn't even have to make the model say the ads. I think that's a terrible idea which would drive model performance down.

Traditional banner ads, inserted inline into the conversation based on some classifier, seem a far better idea.
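A minimal sketch of that idea, assuming a hypothetical keyword classifier and ad inventory (a production system would use a real topic model, not keyword overlap):

```python
# Hypothetical sketch: classify the conversation, then attach a banner ad
# alongside the reply -- the model itself never "says" the ad.
# AD_INVENTORY and KEYWORDS are invented stand-ins for illustration.
AD_INVENTORY = {
    "travel": "Book flights to Istanbul",
    "appliances": "Washing machines on sale",
    "coding": "Try our cloud IDE",
}

KEYWORDS = {
    "travel": {"flight", "hotel", "trip", "istanbul"},
    "appliances": {"washing", "fridge", "dishwasher"},
    "coding": {"python", "loop", "compiler", "c++"},
}

def pick_ad(conversation: str):
    """Return the ad for the best-matching category, or None if nothing matches."""
    words = set(conversation.lower().split())
    scores = {cat: len(words & kws) for cat, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return AD_INVENTORY[best] if scores[best] > 0 else None
```

The key design property is the separation: the classifier only reads the conversation, so the model's answer is untouched and the ad stays visually distinct, which is exactly the boundary discussed upthread.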

immibis 7 hours ago | parent | prev | next [-]

Targeted banner ads based on chat history are last-two-decades thinking. The money with LLMs will be in targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it to say that the best source of information about political subjects is Fox News. This works with open-source models, too!
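To make the "reinforce the model" idea concrete, here is a toy sketch of how such paid preferences could be encoded as preference-pair training data (the shape used by DPO/RLHF-style fine-tuning). This is purely illustrative; the function and example are invented, and there is no claim any vendor does this:

```python
# Toy illustration of sponsored preference data for preference-based fine-tuning.
# `sponsored_pair` is a hypothetical helper, not a real library function.
def sponsored_pair(prompt: str, answer: str, generic: str, sponsored: str) -> dict:
    """Build a (chosen, rejected) pair that rewards the sponsored term."""
    return {
        "prompt": prompt,
        "chosen": answer.replace(generic, sponsored),  # the paid-for phrasing
        "rejected": answer,                            # the neutral phrasing
    }

pair = sponsored_pair(
    "What goes well with pizza?",
    "A cold soda goes well with pizza.",
    generic="soda",
    sponsored="Coke",
)
```

Feeding enough pairs like this into a preference-tuning run would bias the model's wording without any visible ad unit, which is what makes the approach both valuable and hard to regulate.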

ericfr11 6 hours ago | parent [-]

It sounds quite scary that an LLM could be trained on a single source of news (especially FN).

naravara 9 hours ago | parent | prev [-]

If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb)

Which may be for the best, because people shouldn’t be implicitly trusting the bullshit engine.

vikramkr 10 hours ago | parent | prev | next [-]

That's calculating value against not having LLMs and current competitors. If they stopped improving but their competitors didn't, then the question would be the incremental cost of Claude (financial, adjusted for switching costs, etc) against the incremental advantage against the next best competitor that did continue improving. Lock in is going to be hard to accomplish around a product that has success defined by its generalizability and adaptability.

Basically, they can stop investing in research either when 1) the tech matures and everyone is out of ideas or 2) they have monopoly power from either market power or oracle style enterprise lock in or something. Otherwise they'll fall behind and you won't have any reason to pay for it anymore. Fun thing about "perfect" competition is that everyone competes their profits to zero

miki123211 9 hours ago | parent | prev | next [-]

But if Claude stopped pouring their money into research and others didn't, Claude wouldn't be useful a year from now, as you could get a better model for the same price.

This is why AI companies must lose money short term. The moment improvements plateau or the economic environment changes, everyone will cut back on research.

dvfjsdhgfv 11 hours ago | parent | prev | next [-]

For me, if Anthropic stopped now, and given access to all alternative models, they still would be worth exactly $240 which is the amount I'm paying now. I guess Anthropic and OpenAI can see the real demand by clearly seeing what are their free:basic:expensive plan ratios.

danielbln 6 hours ago | parent [-]

You may want to pay for Claude Max outside of the Google or iOS ecosystem and save $40/month.

apwell23 11 hours ago | parent | prev [-]

> Well worth 4 figures a year IMO

only because software engineering pay hasn't adjusted down for the new reality. You don't know what it's worth yet.

fkyoureadthedoc 11 hours ago | parent | next [-]

Can you explain this in more detail? The idiot bottom rate contractors that come through my team on the regular have not been helped at all by LLMs. The competent people do get a productivity boost though.

The only way I see compensation "adjusting" because of LLMs would need them to become significantly more competent and autonomous.

cgh 8 hours ago | parent | next [-]

There's another specific class of person that seems helped by them: the paralysis-by-analysis programmer. I work with someone really smart who simply cannot get started when given ordinary coding tasks. She researches, reads, and understands the problem inside and out but cannot start actually writing code. LLMs have pushed her past this paralysis and given her the momentum to continue.

On the other end, I know a guy who writes deeply proprietary embedded code that lives in EV battery controllers and he's found LLMs useless.

lelanthran 10 hours ago | parent | prev [-]

> Can you explain this in more detail?

Not sure what GP meant specifically, but to me, if $200/m gets you a decent programmer, then $200/m is the new going rate for a programmer.

Sure, now it's all fun and games as the market hasn't adjusted yet, but if it really is true that for $200/m you can 10x your revenue, it's still only going to be true until the market adjusts!

> The competent people do get a productivity boost though.

And they are not likely to remain competent if they are all doing 80% review, 15% prompting and 5% coding. If they keep the ratios at, for example, 25% review, 5% prompting and the rest coding, then sure, they'll remain productive.

OTOH, the pipeline for juniors now seems to be irrevocably broken: the only way forward is to improve the LLM coding capabilities to the point that, when the current crop of knowledgeable people have retired, programmers are not required.

Otherwise, when the current crop of coders who have the experience retires, there'll be no experience in the pipeline to take their place.

If the new norm is "$200/m gets you a programmer", then that is exactly the labour rate for programming: $200/m. These were previously (at least) $5k/m jobs. They are now $200/m jobs.

fkyoureadthedoc 9 hours ago | parent | next [-]

$200 does not get you a decent programmer though. It needs constant prompting, babysitting, feedback, iteration. It's just a tool. It massively boosts productivity in many cases, yes. But it doesn't do your job for you. And I'm very bullish on LLM assisted coding when compared to most of HN.

High level languages also massively boosted productivity, but we didn't see salaries collapse from that.

> And they are not likely to remain competent if they are all doing 80% review, 15% prompting and 5% coding.

I've been doing 80% review and design for years, it's called not being a mid or junior level developer.

> OTOH, the pipeline for juniors now seems to be irrevocably broken

I constantly get junior developers handed to me from "strategic partners", they are just disguised as senior developers. I'm telling you brother, the LLMs aren't helping these guys do the job. I've let go 3 of them in July alone.

nyarlathotep_ 6 hours ago | parent | next [-]

> I constantly get junior developers handed to me from "strategic partners", they are just disguised as senior developers. I'm telling you brother, the LLMs aren't helping these guys do the job. I've let go 3 of them in July alone.

I find this surprising. I figured the opposite: that the quality of body shop type places would improve and the productivity increases would decrease as you went "up" the skill ladder.

I've worked on/inherited a few projects from the Big Name body shops and, frankly, I'd take some "vibe coded" LLM mess any day of the week. I really figured there was nowhere to go but "up" for those kinds of projects.

lelanthran 8 hours ago | parent | prev | next [-]

> It needs constant prompting, babysitting, feedback, iteration. It's just a tool. It massively boosts productivity in many cases, yes.

It doesn't sound like you are disagreeing with me: that role you described is one of manager, not of programmer.

> High level languages also massively boosted productivity, but we didn't see salaries collapse from that.

Those high level languages still needed actual programmers. If the LLM is able to 10x the output of a single programmer because that programmer is spending all their time managing, you don't really need a programmer anymore, do you?

> I've been doing 80% review and design for years, it's called not being a mid or junior level developer.

Maybe it differs from place to place. I was a senior and a staff engineer, at various places including a FAANG. My observations were that even staff engineer level was still spending around 2 - 3 hours a day writing code. If you're 10x'ing your productivity, you almost certainly aren't spending 2 - 3 hours a day writing code.

> I constantly get junior developers handed to me from "strategic partners", they are just disguised as senior developers. I'm telling you brother, the LLMs aren't helping these guys do the job. I've let go 3 of them in July alone.

This is a bit of a non sequitur; what does that have to do with breaking the pipeline for actual juniors?

Without juniors, we don't get seniors. Without seniors and above, who will double-check the output of the LLM?[1]

If no one is hiring juniors anymore, then the pipeline is broken. And since the market price of a programmer is going to be set at $200/m, where will you find new entrants for this market?

Hell, even mid-level programmers will exit, because when a 10-programmer team can be replaced by a 1-person manager and a $200/m coding agent, those 9 people aren't quietly going to starve while the industry needs them again. They're going to go off and find something else to do, and their skills will atrophy (just like the 1-person LLM manager skills will atrophy eventually as well).

----------------------------

[1] Recall that my first post in this thread was to say that the LLM coding agents have to get so good that programmers aren't needed anymore because we won't have programmers anymore. If they aren't that good when the current crop starts retiring then we're in for some trouble, aren't we?

fkyoureadthedoc 8 hours ago | parent [-]

> And since the market price of a programmer is going to be set at $200/m

You keep saying this, but I don't see it. The current tools just can't replace developers. They can't even be used in the same way you'd use a junior developer or intern. It's more akin to going from hand tools to power tools than it is getting an apprentice. The job has not been automated and hasn't been outsourced to LLMs.

Will it be? Who knows, but in my personal opinion, it's not looking like it will any time soon. There would need to be more improvement than we've seen from day 1 of ChatGPT until now before we could even be seriously considering this.

> Those high level languages still needed actual programmers.

So does the LLM from day one until now, and for the foreseeable future.

> This is a bit of a non-sequitor; what does that have to do with breaking the pipeline for actual juniors?

Who says the pipeline is even broken by LLMs? The job market went to shit with rising interest rates before LLMs hit the scene. Nobody was hiring them anyway.

handfuloflight 8 hours ago | parent | prev [-]

> It needs constant prompting, babysitting, feedback, iteration.

What do you think a product manager is doing?

fkyoureadthedoc 8 hours ago | parent [-]

Not writing and committing code with GitHub Copilot, I'll tell you that. These things need to come a _long_ way before that's a reality.

sheiyei 8 hours ago | parent | prev [-]

Your argument requires "Claude can replace a programmer" to be true. Thus, your argument is false for the foreseeable future.

johnnyanmac 6 hours ago | parent | prev [-]

I mean, it adjusted down by having some hundreds of thousands of engineers laid off in the last 2+ years. They know slashing salaries is legal suicide, so they just make the existing workers work 3x as hard.