kozikow 3 days ago

Ads inside LLMs (e.g. pay $ to boost your product in LLM recommendations) are going to be a big thing.

My guess is that Google and OpenAI are eyeing each other, each waiting to see who does this first.

Why would that work? It's a proven business model. Example: I use LLMs for product research (e.g. which washing machine to buy). A retailer pays to have a link to its site included in the results. Don't want to pay? Then the LLM redirects the user to buy it at Walmart instead of Amazon.
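
To make the mechanism concrete, here is a minimal Python sketch of what sponsored reranking inside an LLM answer pipeline might look like. Everything in it is hypothetical: the Candidate type, the bid field, and the ad_weight blend are illustrative assumptions, not anything Google or OpenAI has announced.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str          # e.g. a specific washing machine listing
        relevance: float   # the model's own relevance estimate, 0..1
        bid: float         # hypothetical amount the retailer pays for inclusion, 0 if unpaid
        url: str

    def rerank(candidates: list[Candidate], ad_weight: float = 0.3) -> list[Candidate]:
        """Blend organic relevance with a paid boost; retailers that don't pay sink in the list."""
        def score(c: Candidate) -> float:
            return (1 - ad_weight) * c.relevance + ad_weight * min(c.bid, 1.0)
        return sorted(candidates, key=score, reverse=True)

    # The paying retailer's link surfaces first despite a slightly lower organic score.
    results = rerank([
        Candidate("LG WM4000 at Walmart", relevance=0.82, bid=0.5, url="https://walmart.example/lg-wm4000"),
        Candidate("LG WM4000 at Amazon",  relevance=0.85, bid=0.0, url="https://amazon.example/lg-wm4000"),
    ])
    print([c.name for c in results])  # ['LG WM4000 at Walmart', 'LG WM4000 at Amazon']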

kieckerjan 2 days ago | parent | next

I actually encountered this pretty early on in one of those user-tuned GPTs in OpenAI's GPT store. It was called Sommelier or something and was specialized in conversations about wine. It was pretty useful at first, but after a few weeks it started lacing all its replies with tips for wines from one particular online store. Needless to say, I dropped it immediately.

enahs-sf 2 days ago | parent | prev | next

Forget links; agents are gonna just go upstream to the source and buy it for you. I think that will change the game, because intent will be super high and conversion will go through the roof.

hyperadvanced 2 days ago | parent | next

Yeah I’m gonna give an AI agent my credit card and complete autonomy with my finances so it can hallucinate me a new car. I love getting findommed.

weatherlite 2 days ago | parent | next

Look, the car shop might not bill you at all because their AI agent will hallucinate the purchase, so I don't see why you're so pessimistic about agents.

manmal 2 days ago | parent | prev

It can still give you an overview with a few choices and a link to the prepared checkout page, and you enter your CC details yourself.

hyperadvanced 12 hours ago | parent

That’s basically what any halfway decent e-commerce site is today.

heavyset_go 2 days ago | parent | prev

Feels like this hope is in the same vein as Amazon Dash and then the expectation that people would buy shit with voice assistants like Alexa.

msgodel 2 days ago | parent | prev | next

People are already wary of hosted LLMs having poisoned training data. That might kill them altogether and push everyone to using, e.g., Qwen3-Coder.

landl0rd 2 days ago | parent

No, a small group of highly tech-literate people are wary of this. Your personal bubble is wary of this. So is some of mine. "People" don't care and will use the packaged, corporate, convenient version with the well-known name.

People who are aware of that and care enough to change consumption habits are an inconsequential part of the market.

msgodel 2 days ago | parent

I don't know. A bunch of the older people from the town I grew up in avoided using LLMs until Grok came out, because of what they saw going on with alignment in the other models (they certainly couldn't articulate it in those terms, but listening to what they said, that's what they were thinking). Obviously Grok has the same problems, but I think it goes to show the general public is more aware of the issue than they get credit for.

Combine this with Apple pushing on-device inference and making it easy, and anything like ads will probably kill hosted LLMs for most consumers.

tokioyoyo 2 days ago | parent | next

Yeah, average people that I know (across continents) just ChatGPT their way into literally anything without a second thought. They don't care.

manmal 2 days ago | parent | prev

Maybe Grok was just pushed by their political influencers. It's the Republican, anti-woke LLM, after all.

pacifika 2 days ago | parent | prev

Who doesn’t want to associate their product with unreliability and incorrect information? Think about that reputational damage.