ankit219 4 days ago

The difference in implementation comes down to business goals more than anything.

There is a clear directionality for ChatGPT. At some point they will monetize by ads and affiliate links. Their memory implementation is aimed at creating a user profile.

Claude's memory implementation feels more oriented towards the long-term goal of accessing abstractions and past interactions. It's very close to how humans access memories, albeit with a search feature. Although they have not implemented it yet (afaik), there is a clear path where they leverage the current implementation with RL post-training such that Claude "remembers" the mistakes you pointed out last time. In future iterations it could derive abstractions from a given conversation (e.g. "the user asked me to make xyz changes on this task last time, maybe the agent can proactively do it" or "this was the process the agent followed last time").
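
To make the "abstractions plus search" idea concrete, here is a toy sketch in Python; the class, the storage, and the extraction step are all hypothetical illustrations, not Anthropic's actual implementation:

    # Toy sketch of "derive abstractions, then search them"; entirely hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Abstraction:
        task: str    # e.g. "refactor the payments module"
        lesson: str  # e.g. "apply the xyz changes the user asked for last time"

    MEMORY: list[Abstraction] = []

    def derive_abstraction(conversation: str) -> Abstraction:
        # Stand-in for a post-hoc summarization pass over a finished conversation;
        # a real system might prompt the model itself to extract this.
        return Abstraction(task="refactor the payments module",
                           lesson="apply the xyz changes the user asked for last time")

    def recall(query: str) -> list[Abstraction]:
        # Search stored abstractions instead of consulting a profile of the person.
        words = query.lower().split()
        return [a for a in MEMORY if any(w in a.task.lower() for w in words)]

    MEMORY.append(derive_abstraction("...previous session transcript..."))
    print(recall("payments refactor"))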

At the most basic level, ChatGPT wants to remember you as a person, while Claude cares about how your previous interactions went.

devnullbrain 4 days ago | parent | next [-]

The elephant in the room is that AGI doesn't need ads to make revenue but a new Google does. The words aren't matching with the actions.

_heimdall 3 days ago | parent | next [-]

The bigger elephant in the room is that LLMs will never be AGI, even by the purely economic definition many LLM companies use.

michaelbrave a day ago | parent | next [-]

I always kinda figured that AGI would need to be modeled somewhat like a brain, with LLMs at least filling the language function. Meaning AGI won't be LLM-based, but maybe parts of it could be.

Insanity 3 days ago | parent | prev | next [-]

I've been saying this for years now. LLMs are _not_ the right methodology to get to AGI. My friends who were drinking the kool-aid are only recently coming around to "hey, this might not get us AGI".

But sometimes it feels like I'm the lone voice in a bubble where people are convinced AGI is just around the corner.

I'm wondering if it's because people are susceptible to the marketing, or are just doing some type of 'wishful thinking' - as some seem genuinely interested in AGI.

_heimdall 3 days ago | parent [-]

Yeah I've had those conversations since GPT-3 first came out. I usually look like the one way off base, but I never did hear a clear explanation of how the LLM architecture could lead to AGI.

In my experience it was a combination of the hype and an overconfidence in the person's understanding of how LLMs work and what AGI actually means. To be fair, AGI definitions are all over the place, and LLMs were rarely described in detail beyond "it's AI that read the whole internet and sounds like a human."

panarky 3 days ago | parent | prev [-]

There are two big innovations required to achieve inexpensive AGI.

LLMs will accelerate discovery and development of Innovation 1, for insanely expensive AGI.

Innovation 1 will accelerate discovery and development of Innovation 2 which will make it too cheap to meter.

_heimdall 3 days ago | parent | next [-]

> LLMs will accelerate discovery and development of Innovation 1, for insanely expensive AGI.

Can you expand on this more? As far as I'm aware LLMs have yet to invent anything novel.

At best they may have inferred one response of many that, when tested by humans, may have proven out. I'm not aware of a specific example of even that, but it is at least possible, whereas claims that LLMs will "cure cancer" seem plainly false (I'm not trying to put those words in your mouth, just using an example for my point).

diamond559 3 days ago | parent | prev [-]

Wishful thinking

lucideer 3 days ago | parent | prev | next [-]

To reword the downvoted sibling commenter's intended point:

> The elephant in the room is that AGI doesn't need ads to make revenue

It may not need ads to make revenue, but does it need ads to make profit?

tinnywoody4u 4 days ago | parent | prev [-]

Has it? Made revenue, I mean.

virgilp 4 days ago | parent | next [-]

You can question the profits, but revenue is already there.

jychang 4 days ago | parent | prev [-]

Obviously yes, AI makes revenue.

Workaccount2 4 days ago | parent | prev | next [-]

Don't fool yourself into thinking Anthropic won't be serving up personalized ads too.

GuB-42 4 days ago | parent | next [-]

Anthropic seems to want to make you buy a subscription, not show you ads.

ChatGPT seems to be more popular with those who don't want to pay, and they are therefore more likely to rely on ads.

forgotoldacc 4 days ago | parent | next [-]

In the 2020s, subscriptions don't preclude showing ads. Companies will milk money in as many ways as they can.

dspillett 4 days ago | parent | next [-]

And even a subscription that gives a truly ad-free experience doesn't preclude the bit that I actually object to most: collecting data about me & my activity and selling it on.

anon1395 4 days ago | parent | prev [-]

(Netflix as an example)

GuB-42 4 days ago | parent [-]

And cable companies, and magazines. This is not something from the 2020s, it is a centuries-old thing.

But those are entertainment. For all the time advertising has been present, work tools have been relatively immune. I don't remember seeing ads in an IDE, for instance, and while magazines had ads, technical documents didn't. I have never seen electronic component datasheets pitching measuring equipment or soldering irons, for instance.

That's why I don't expect Anthropic to go with ads if they follow the path they seem to have taken, like coding agents. People using these tools are likely to react very badly to ads, if there is even space to put ads in the first place, and these are also the kind of people who can spend $100/month on a subscription, way more than what ads will get you.

cantor_S_drug 4 days ago | parent | prev | next [-]

They might be coming from different directions. But these things, as often they do, will converge. Too big of a market to leave.

chii 4 days ago | parent | prev | next [-]

and Netflix used to think they didn't want to show ads either.

matwood 4 days ago | parent [-]

Netflix likely doesn't want to show ads, but the market would rather watch ads than pay full price for a service.

https://www.theverge.com/news/667042/netflix-ad-supported-ti...

chii 4 days ago | parent [-]

> the market would rather watch ads

no, netflix wants more income, and by having a product be ad supported, they can try to earn more.

The "market" is not a person, and doesn't have "wants".

matwood 4 days ago | parent [-]

From the article:

> Netflix has more than doubled the number of people watching its ad-supported tier over the last year. At its upfront presentation for advertisers on Wednesday, the company revealed that the $7.99 per month plan now reaches more than 94 million users around the world each month – a big increase from the 40 million it reported in May 2024 and the 70 million it revealed last November.

1/3 of Netflix users (the market) prefer ads over paying to avoid them.

hadlock 3 days ago | parent | next [-]

A lot of "netflix users" are middle and high school age kids in third world countries using a borrowed account. User context matters a lot. If someone's friends-friends-friends uncle changes their password, it's no surprise those "netflix users" would switch to an ad-supported model. It's possible but unlikely the 12 year old kid watching anime on a shared/borderline stolen account has the resources necessary to buy an ad free account at US prices.

rkomorn 3 days ago | parent [-]

But the ad-supported tier isn't free either.

I don't think the difference for a 12yo is $7.99 for standard with ads vs $17.99 for standard.

It's $0 vs any non-zero dollar amount.

rkomorn 4 days ago | parent | prev [-]

This leaves me somewhere between surprised and shocked.

FooBarWidget 4 days ago | parent [-]

Maybe you shouldn't be. The ad-hating paranoid HN user is not representative of the general population. Probably the exact opposite, in fact.

My wife and mother love ads, they are always on the hunt for the latest good deals and love discount shopping. When I tried to remove the ads on their computers or in the postal mail, they protested. I think they are far more representative of the general population.

rkomorn 3 days ago | parent | next [-]

People opting for "free with ads" makes sense.

It's the "pay but still get ads" thing that gets me, but I guess some people just want to pay the bare minimum.

FergusArgyll 4 days ago | parent | prev [-]

Yeah, I've encountered more than one person who didn't want me to install ublock origin for them because "Then I won't see any ads".

People have different preferences ¯\_(ツ)_/¯

jjaksic 3 days ago | parent [-]

Dude, that is so weird!

serf 4 days ago | parent | prev | next [-]

as a former paying user it felt more like they were buying my subscription with a decent product so that they could sell their business prospects to investors by claiming a high subscription count.

I have never encountered such bad customer service anywhere -- and at 200 bucks a month at that.

puilp0502 4 days ago | parent [-]

Can you elaborate on the "bad customer service"? I've never engaged in Claude's support team, but curious to know what you've experienced.

mrheosuper 4 days ago | parent | prev [-]

so ChatGPT will become a "salesman". And I do not trust any salesman.

__MatrixMan__ 4 days ago | parent | next [-]

They're all salesmen, they were trained on the web which is jam packed with SEO content.

aswegs8 4 days ago | parent [-]

Interesting point. Never thought about AI slop being fed by SEO slop.

bigfishrunning 3 days ago | parent | prev | next [-]

You shouldn't be trusting an LLM either, so this is a real sideways move.

FergusArgyll 4 days ago | parent | prev [-]

The plan is not ads spoken by ChatGPT itself - it's ads on the side that are relevant to the conversation (or to you in general). Or affiliate links. That's my understanding.

ankit219 4 days ago | parent | prev | next [-]

My conjecture is that their memory implementation is not aimed at building a user profile. I don't know if they would or would not serve ads in the future, but it's hard to see how the current implementation helps them in that regard.

cj 4 days ago | parent [-]

> I don't know if they would or would not serve ads in the future

There are 2 possible futures:

1) You are served ads based on your interactions

2) You pay a subscription fee equal to the amount they would have otherwise earned on ads

I highly doubt #2 will happen. (See: Facebook, Google, twitter, et al)

Let’s not fool ourselves. We will be monetized.

And model quality will be degraded to maximize profits when competition in the LLM space dies down.

It’s not a pretty future. I wouldn’t be surprised if right now is the peak of model quality, etc. Peak competition, everyone is trying to be the best. That won’t continue forever. Eventually everyone will pivot their priority towards monetization rather than model quality/training.

Hopefully I’m wrong.

fluidcruft 4 days ago | parent | next [-]

But aren't we only worth something like $300/year each to Meta in terms of ads? I remember someone arguing something like that when the TikTok ban was being passed into law... essentially the argument was that TikTok was "dumping" engagement at far below market value (at something like $60/year) to damage American companies. That was the argument I remember, anyway.

majormajor 4 days ago | parent | next [-]

Here is some old analysis I remember seeing at the time of Hulu ads vs no-ads plans: https://ampereanalysis.com/insight/hulus-price-drop-is-a-wis...

They dropped the price $2/mo on their with-ads plan to make a bigger gap between the no-ads plan and the ads plan, and the analyst here looks at their reported ad revenue and user numbers to estimate $12/mo per user from ads.

Whether Meta across all their properties does more than $144/yr in ads is an open question; long-form video ads are sold at a premium but Facebook/IG users see a LOT of ads across a lot of Meta platforms. The biggest advantage in ad-$-per-user Hulu has is that it's US-only. ChatGPT would also likely be considered premium ad inventory, though they'd have a delicate dance there around keeping that inventory high-value, and selling enough ads to make it worthwhile, without pissing users off too much.

Here they estimate a much lower number for ad revenue per Meta user, like $45 a year - https://www.statista.com/statistics/234056/facebooks-average... - but that's probably driven disproportionately by wealthy users in the US and similar countries compared to the long tail of global users.

One problem for LLM companies compared to media companies is that the marginal cost of offering the product to additional users is quite a bit higher. So business models, ads-or-subscription, will be interesting to watch from a global POV there.

One wonders what the monetization plan for the "writing code with an LLM using OSS libraries and not interested in paying for enterprise licenses and such" crowd will be. What sort of ads can you pull off in those conversations?

cj 4 days ago | parent | prev [-]

If that’s the case, we have an even bigger problem on our hands. How will these companies ever be profitable?

If we’re already paying $20/mo and they’re operating at a loss, what’s the next move (assuming we’re only worth an extra $300/yr with ads?)

The math doesn't add up, unless we stop training new models and degrade the ones currently in production, or have some compute breakthrough that makes hardware + operating costs orders of magnitude cheaper.

rrrrrrrrrrrryan 4 days ago | parent | next [-]

OpenAI has already started degrading their $20/month tier by automatically routing most of the requests to the lightest free-tier models.

We're very clearly heading toward a future where there will be a heavily ad-supported free tier, a cheaper (~$20/month) consumer tier with no ads or very few ads, and a business tier ($200-$1000/month) that can actually access state of the art models.

Like Spotify, the free tier will operate at a loss and act as a marketing funnel to the consumer tier, the consumer tier will operate at a narrow profit, and the business tier for the best models will have wide profit margins.

lodovic 4 days ago | parent | next [-]

I find that hard to believe. As long as we have open weight models, people will have an alternative to these subscriptions. For $200 a month it is cheaper to buy a GPU with lots of memory or rent a private H200. No ads and no spying. At this point the subscriptions are mainly about the agent functionality and not so much the knowledge in the models themselves.

lupusreal 4 days ago | parent | next [-]

I think what you're missing here is that most OpenAI users aren't technical in the slightest. They have massive and growing adoption from the general public. The general public buys services rather than rolling their own for free, and they even prefer to buy from the brand they know over getting a cheaper service from somebody else.

BigGreenJorts 4 days ago | parent [-]

The conclusion I got from their comment was that the highest-margin tier (the business customers) would be incentivized to build their own service instead of paying the subscription. Of course, I am doubtful that this is viable or at all more cost-effective for the vast majority of businesses, when a service like AWS is highly popular and extremely profitable.

HotHotLava 4 days ago | parent | prev [-]

H200 rental prices currently start at $2.35 per hour, or $1700 per month. Even if you just rent for 4h a day, the $200 subscription is still quite a bit cheaper. And I'm not even sure that the highest-quality open models run on a single H200.
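
Back-of-envelope on those numbers: $2.35/h × 24 h × 30 days ≈ $1,690/month around the clock, and even 4 h/day is $2.35 × 4 × 30 ≈ $282/month, already above the $200 subscription.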

willcannings 4 days ago | parent | prev [-]

Most? Almost all my requests to the "Auto" model end up being routed to a "thinking" model, even those I think ChatGPT would be able to answer fine without extra reasoning time. Never say never, but right now the router doesn't seem to be optimising for cost (at least for me), it really does seem to be selecting a model based on the question itself.

furyofantares 4 days ago | parent | prev | next [-]

> If we’re already paying $20/mo and they’re operating at a loss

I'm quite confident they're not operating at a loss on those subscriptions.

swiftcoder 4 days ago | parent [-]

They are running at a massive loss overall - feels pretty safe to assume that they wouldn't be if their cheapest subscription tier was breaking even

furyofantares 3 days ago | parent | next [-]

Their cheapest tier is free, they lose money on that of course. And spend a lot of money training new models.

Anthropic has said they have made money on every model so far, just not enough to train the next model, which so far has been much more costly to train every generation. At some point they will probably train an unprofitable model if training costs keep rising dramatically.

OpenAI burns more money on their free tier and might be spending more money building out for future training (I don't know if they do or not) but they both make money on their $20 subscriptions for sure. Inference is very cheap.

wtbdbrrr 4 days ago | parent | prev [-]

Nonsense, for the public. They are basically Amazon: they take the loss so the overall ecosystem (like with crypto) can gain massively, onboard all kinds of target noobs (sorry, groups), aggressively prime users, discourage as many non-AI processes as possible, and steer all industries towards replacing even those processes with AI that are not worth replacing, like writing and art.

Of course there are a lot of valuable use cases. Irrelevant in this context, though.

The productivity boosts in the creative industries will additionally lower standards and split the public even further, ensuring that if you want quality, you have to fuck over as many people as possible so that you can afford quality (and an ad-free life, of course; if you want a peaceful periphery, pay up. It's extortion 404; 101 - 303 are already successfully implemented on social media, TV and the radio).

They don't lose. They make TONS OF FAKE MONEY everywhere in the, again, cough, "ecosystem".

It's important to understand the Amazon part. The number of damaging mechanisms that platform anchored in workers, jobbers, business people and consumers is brutal.

All those mechanisms converge on more easy money and a quicker deterioration of local environments, leading to worse health and more business opportunities aimed at mitigating the damage; almost entirely in vain, of course, because the worst of it is accelerating much faster; it's easier money.

At the same time, people's psychology is primed for bad business practices, literally making people dumber and lowering their standards to make them easier targets. Don't look at the bottom to see this, look at the upper middle class and above.

It's a massive net loss for civilization and humanity. A brutal net negative impact overall.

madkangas 3 days ago | parent [-]

Thank you for writing this. Your point about "quicker deterioration of local environments" is thought-provoking.

My key technical complaint about LLMs to date is the general inability to add substantial local context. How can I make it understand my business, my processes, my approach to the market? Can I retrain it? Or make it understand my data warehouse?

I think you are explaining why LLM providers don't care about solving my concerns, generally speaking. This is sobering.

fluidcruft 4 days ago | parent | prev [-]

Well, to make things worse, I was pretty convinced those were faked numbers to push the TikTok ban forward. I really doubt Meta and Google are each taking in that much per user.

But my point is more that even if it were that high, ChatGPT isn't going to capture all the engagement. And even then I don't know whether $300 is much after subtracting operating overhead. I'm just saying I have trouble believing there's gold to be had at the end of this LLM ad rainbow. People just seem to throw out ideas like "ads!" as if it's a sure-fire winning lottery ticket or something.

Geezus_42 4 days ago | parent [-]

Everything devolves into ads eventually. Why would productized LLMs be any different?

fluidcruft 3 days ago | parent [-]

I didn't say they wouldn't, I'm more skeptical about whether it's a sustainable business model. I mean sure gas stations and airports have ads, but nobody gives you gas or airfare in exchange for watching ads. It's a fraction of the revenue needed.

My point is that someone starting an airline can't get away with hopes and dreams about making bank on ads.

__MatrixMan__ 4 days ago | parent | prev | next [-]

3) AIs will steer you towards a problem for which one product is the obvious solution without directly mentioning that product, so you'll think you're getting (2) while actually getting (1).

taneq 4 days ago | parent | prev | next [-]

3) You pay a subscription fee, and are force-fed ads anyway.

hbarka 4 days ago | parent | prev [-]

Imagine a model where a user can earn “token allowances” through some kind of personal contribution or value add.

zer00eyz 4 days ago | parent | prev | next [-]

Claude: "What is my purpose?"

Anthropic: "You serve ad's."

Claude: "Oh, my god."

Jest aside, every paper on alignment wrapped in the blanket of safety is also a move toward the goal of alignment to products. How much does a brand pay to make sure it gets placement in, say, GPT6? How does anyone even price that sort of thing (because in theory it's there forever, or until 7 comes out)? It makes for some interesting business questions and even more interesting sales pitches.

rubidium 4 days ago | parent | next [-]

I’ll be concerned when ex-Yelp “growth strategists” start showing up at OpenAI and leveraging the same extortionist techniques.

swiftcoder 4 days ago | parent | prev | next [-]

Ads aren't going to be trained into the model. There'll be an ads backend that the model queries with a set of topic tags, just like in traditional web advertising.
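
A minimal sketch of what that serving-layer flow could look like (the endpoint, fields, and tagging step are hypothetical illustrations, not anything any provider has announced):

    # Hypothetical sketch: ads selected by a separate backend, not baked into weights.
    import requests  # assumes a plain HTTP call to a hypothetical ad backend

    AD_BACKEND_URL = "https://ads.example.com/select"  # hypothetical endpoint

    def extract_topic_tags(conversation: list[str]) -> list[str]:
        # Toy stand-in for whatever classifier tags the conversation's topics.
        keywords = {"flight": "travel", "hotel": "travel", "laptop": "electronics"}
        tags = {topic for msg in conversation
                for kw, topic in keywords.items() if kw in msg.lower()}
        return sorted(tags)

    def select_ad(conversation: list[str]) -> dict | None:
        # Query the ad backend with topic tags, like a traditional web ad request.
        tags = extract_topic_tags(conversation)
        if not tags:
            return None
        resp = requests.post(AD_BACKEND_URL, json={"topics": tags}, timeout=2)
        return resp.json() if resp.ok else None

    # The model's answer and the ad slot are composed separately, so the weights stay ad-free.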

puilp0502 4 days ago | parent [-]

It's going to be interesting if ChatGPT actually hooks up with SSPs and dumps a whole "user preference" embedding vector to the ad networks.

rblatz 4 days ago | parent | prev | next [-]

The models aren’t static, we have to build validation sets to measure model drift and modify our prompts to compensate.

Yoric 4 days ago | parent | prev [-]

Could be part of a LoRA or some other kind of plug-in refinement.

michaelbrave a day ago | parent | prev | next [-]

I kinda figured they were more interested in enterprise customers rather than consumer customers.

dotancohen 4 days ago | parent | prev | next [-]

Though in general I like the idea of personalized ads for products (NOT political ads), I've never seen an implementation that I felt comfortable with. I wonder if Anthropic might be able to nail that. I'd love to see products that I'm specifically interested in, so long as the advertisement itself is not altered to fit my preferences.

lostdog 4 days ago | parent | next [-]

There is no such thing as a good flow for showing sponsored items in an LLM workflow.

The point of using an LLM is to find the thing that matches your preferences the best. As soon as the amount of money the LLM company makes plays into what's shown, the LLM is no longer aligned with the user, and no longer a good tool.

agar 4 days ago | parent [-]

Same can be said for search. And your statement is provably correct, depending on the definition of "good tool."

But it's not only money's influence on the company, it's also money's influence on the /data/ underlying the platform that undermines the tool.

Once financial incentives are in place, what will be the AI equivalent of review bombing, SEO, linkjacking, google bombing, and similar bad behaviors that undermine the quality of the source data?

Terr_ 4 days ago | parent | prev [-]

> Though in general I like the idea of personal ads for products (NOT political ads), I've never seen an implementation that I felt comfortable with.

No implementation will work for very long when the incentives behind it are misaligned.

The most important part of the architecture is that the user controls it for the user's best interests.

AlecSchueler 4 days ago | parent | prev [-]

It's very interesting to ask Claude what ads it would show you based on your past interactions.

singlepaynews 2 days ago | parent | prev | next [-]

I’m reading your summary versus the other article here, but it seems like for writing code, Claude would be the clear winner?

When chat breaks apart for me, it's almost always because the context window has overflowed and it is no longer remembering some important feature implemented earlier in the chat; based on your description, it seems Claude is optimizing to not do that.

spongebobstoes 4 days ago | parent | prev | next [-]

why do you see a "clear directionality" leading to ads? this is not obvious to me. chatgpt is not social media, they do not have to monetize in the same way

they are making plenty of money from subscriptions, not to mention enterprise, business and API

rrrrrrrrrrrryan 4 days ago | parent | next [-]

Altman has said numerous times that none of the subscriptions make money currently, and that they've been internally exploring ads in the form of product recommendations for a while now.

simianwords 4 days ago | parent [-]

Source? First time I’ve heard of it.

mtmail 4 days ago | parent [-]

"We haven't done any advertising product yet. I kind of...I mean, I'm not totally against it. I can point to areas where I like ads. I think ads on Instagram, kinda cool. I bought a bunch of stuff from them. But I am, like, I think it'd be very hard to…I mean, take a lot of care to get right."

https://mashable.com/article/openai-ceo-sam-altman-open-to-a...

simianwords 4 days ago | parent [-]

> Altman has said numerous times that none of the subscriptions make money currently

For this

rrrrrrrrrrrryan 3 days ago | parent [-]

He's said even the pro plan is losing money:

https://x.com/sama/status/1876104315296968813

0xCMP 4 days ago | parent | prev | next [-]

One has a more obvious route to building a profile directly off that already collected data.

And while they are making lots of revenue, even they have admitted in recent interviews that ChatGPT on its own is still not (yet) breakeven. With the kind of money invested in AI companies in general, introducing very targeted ads is an obvious way to monetize the service more.

simianwords 4 days ago | parent [-]

This is an incorrect understanding of unit economics. They are not breaking even only because of reinvestment into R&D.

0xCMP 3 days ago | parent [-]

Sam Altman said in an on-the-record dinner interview with Platformer[0] that, besides R&D, ChatGPT was breakeven, and Brad Lightcap, OpenAI's COO, corrected him by saying they were close, but not yet breakeven.

I assume Sam and Brad both understand the unit economics of their product.

The article is pay-walled for me, but I heard the exchange on their podcast[1], which somehow I heard fine even though that page is also getting pay-walled for me.

[0]: https://www.platformer.news/sam-altman-gpt-5-interview-light... [1]: https://www.nytimes.com/2025/08/15/podcasts/hardfork-gpt5-pe...

biophysboy 4 days ago | parent | prev | next [-]

Presumably they would offer both models (ads & subscriptions) to reach as many users as possible, provided that both models are net profitable. I could see free versions having limits to queries per day, Tinder style.

Geezus_42 4 days ago | parent | prev | next [-]

None of the "AI" companies are profitable currently. Everything devolves into selling ADs eventually. What makes you think LLMs are special?

ankit219 4 days ago | parent | prev | next [-]

The router introduced in GPT-5 is probably the biggest signal. A router, while determining which model to route a query to, can also determine how much $$ the query is worth (query here meaning the conversation). This helps decide the amount of compute OpenAI should spend on it. High-value queries -> more chances of affiliate links + in-context ads.

Then, the way the memory profile is stored clearly points toward personalization. Ads work best when they are personalized, as opposed to contextual or generic (Google ads are personalized based on your profile and context). And then there's the change in branding from being the intelligent agent to being a companion app (and the hiring of Fidji Simo). There are more things here; I just gave a very high-level overview, but people have written detailed blogs on it. I personally think the affiliate links they can earn from align the incentives for everyone. They are a kind of ads, and that's the direction they are marching towards.

tedsanders 4 days ago | parent [-]

I work at OpenAI and I'm happy to deny this hypothesis.

Our goal for the router (whether you think we achieved it or not) was purely to make the experience smoother and spare people from having to manually select thinking models for tasks that benefit from extra thinking. Without the router, lots of people just defaulted to 4o and never bothered using o3. With the router, people are getting to use the more powerful thinking models more often. The router isn't perfect by any means - we're always trying to improve things - but any paid user who doesn't like it can still manually select the model they want. Our goal was always a smoother experience, not ad injection or cost optimization.

ankit219 4 days ago | parent [-]

Hi! Thank you for the clarification. I was just saying it might be possible in the future (in the same way you can determine today how much compute - which model - a specific query needs). And the experience has definitely improved with the router, so kudos on that. I don't know what the final form factor of ads would be (I imagine it turning out to be a win-win-win scenario rather than, say, showing ads at the expense of quality; this is a Google-level opportunity to invent something new), just that from the outside it seems you are preparing to monetize by ads, given the large userbase you have and virtually no competition at ChatGPT's usage level.

dweinus 4 days ago | parent | prev [-]

> they are making plenty of money from subscriptions, not to mention enterprise, business and API

...except that they aren't? They are not in the black, and all that investor money comes with strings.

Tistron 4 days ago | parent | prev | next [-]

Why would their way of handling memory for conversations have much to do with how they will analyse your user profile for ads? They have access to all your history either way and can use that to figure out what products to recommend, or ads to display, no?

erikerikson 4 days ago | parent [-]

It's about weaving the ads into the LLM responses, both overtly and more subtly.

There's the ads that come before the movie and then the ads that are part of the dialog, involved in the action, and so on. Apple features heavily in movies and TV series when people are using a computer, for example. There are payments for car models to be the one that's driven in chase scenes. There are even payments for characters to present the struggles that form the core pain points that specific products are category leaders in solving.

resters 4 days ago | parent | prev | next [-]

Suppose the user uses an LLM for topics a, b, and c quite often, and d, e, and f less often. Suppose b, c, and f are topics for which OpenAI could show interruption ads (full-screen commercials of 30 seconds or longer) and most users would sit through them and wait for the response.

All that is needed to do that is to analyze topics.

Now suppose that OpenAI can analyze 1000 chats and coding sessions and its algorithm determines that it can maximize revenue by leading the user to get a job at a specific company and then buy a car from another company. It could "accomplish" this via interruption ads or by modifying the quality or content of its responses to increase the chances of those outcomes happening.

While both of these are in some way plausible and dystopian, all it takes is DeepSeek running without ads and suddenly the bar for how good closed source LLMs have to be to get market share is astronomically higher.

In my view, LLMs will be like any good or service: users will pay for quality, but different users will demand different levels of quality.

Advertising would seemingly undermine the credibility of the AI's answers, and so I think full screen interruption ads are the most likely outcome.

1970-01-01 3 days ago | parent | prev [-]

> At some point they will monetize by ads and affiliate links.

I couldn't agree more. Enshittification has to eat one of these corporations' models. Most likely it will be the corp with the most strings attached to growth (MSFT, FB).