tmountain 2 days ago

AI is following the drug dealer model. “The first dose is free!” Given the cost incurred, lots of dark patterns will be coming for sure.

nicbou 2 days ago | parent | next [-]

AI is built by the same companies that built the last generation of hostile technology, and they're currently offering it at a loss. Once they have encrusted themselves in our everyday lives and killed the independent web for good, you can bet they will recoup on their investment.

A4ET8a8uTh0_v2 2 days ago | parent | next [-]

That indeed is likely to come, but having experienced user-hostile technology, the appropriate response is to prepare. Some trends suggest this is already happening ( though that only appears to be part of the HN crowd so far ): moving more and more behind a local network. I know I am personally exploring local LLM integration for some workflows to avoid the obvious low-hanging fruit most providers will likely go for. But yes, the web in its current form might perish.

_DeadFred_ 2 days ago | parent [-]

Would be cool if local libraries got together and figured out how to allow access to community LLMs. That fits better with my idea of the future of AI than having the now-dystopian tech companies running/defining it all.

pcdoodle 2 days ago | parent | prev [-]

Is there another edge to this sword? Can we fight back with LLMs that ignore sources full of tracking, SEO, and other garbage? I'd love to tell my local LLM "I hate Pinterest," for instance, and have it just go "okay, Pinterest shields are up."

troyvit 2 days ago | parent | next [-]

Seconding the Kagi thing. You don't even need an LLM. If you search something like 'camping gear', search results pop up right away, no LLM response. However, by each site's link is a little shield warning you about how many trackers and ads there will be on the page. Next to that is a lil kebab menu that lets you either boost the site or remove it from your search results. That's also where their AI functionality is hidden: you can get a page summary or ask questions about that page.

If you'd rather have the quick AI summaries à la Google, you can put a question mark at the end of your search term: 'lawsuits regarding ferrets?'

And yeah, as the sibling commenter pointed out, you can go into Kagi's preferences and explicitly rule out Pinterest (or whatever site you want) from any of your searches forever.

fxtentacle 2 days ago | parent | prev [-]

Kagi allows you to block Pinterest

a_vanderbilt 2 days ago | parent [-]

Kagi got me to sign up as an early adopter because it let me banish Pinterest forever.

jdietrich 2 days ago | parent | prev | next [-]

It's a market where nobody has a particularly deep moat and most players are charging money for a service. Open weight models aren't too far behind proprietary models, particularly for mundane queries. The cost of inference is plummeting and it's already possible to run very good models at pennies per megatoken. I think it's unreasonably pessimistic to assume that dark patterns are an inevitability.

simgt 2 days ago | parent | next [-]

For the sake of argument: none of the typical websites with the patterns described have a moat, and the cost of hosting them plummeted a while ago. It's not inevitable, but it is likely, and the patterns will be darker if they are embedded in the models' output...

ToucanLoucan 2 days ago | parent | prev | next [-]

You do realize of course that every service that now employs all these dark patterns we're complaining about was already profitable and making good money, and that simply isn't good enough? Revenue has to increase quarter-to-quarter otherwise your stock will tank.

It's simply not enough that a product "makes money"; it must "make more money, every quarter, forever", which is why everything, not even limited to tech, but every product, absolutely BLOWS. It's why every goddamn thing is a subscription now. It's why every fucking website on the internet wants an email and a password so they can have you activate an account and sell a known-active email to their ad partners.

xp84 2 days ago | parent [-]

I wish I could put 10 votes on this instead of just one. It just bothers me how success can be defined as something absurdly impossible like that.

We're already at a wild stage of the rot caused by the growth-forever disease: the most successful companies are so enormous that further profit increases would require either absurd monopoly status (Chase, Wells Fargo, B of A all merge!) or to find increasingly insane ways of extracting money (witness network TV: First they only got money from ads, then they started leeching additional money streams from cable providers, now most have added their own subscription service that they also want you to pay for, on top of watching ads.)

ISPs used to just charge a fee; now they also sell personal information about your browsing behavior for extra revenue, cap your bandwidth usage and charge for more, and one of them (Comcast) owns a media conglomerate.

azangru 2 days ago | parent | prev [-]

> and most players are charging money for a service

The article talks about AI overviews, as exemplified by the AI summary at the top of the Google search results page. That thing is free.

svachalek 2 days ago | parent [-]

1. Create free and good product

2. Attract large user base

3. Sell user data and attention to advertisers

4. Extract maximal profit from sponsors

5. Earn billions from shit product

geerlingguy 2 days ago | parent [-]

Hey that's like a popular Search engine's search results page!

throwaway290 2 days ago | parent | prev | next [-]

Yep. Dark patterns you can see are not that dark by comparison; we will need another word for the coming dark patterns disguised in LLM responses.

lelanthran 2 days ago | parent | next [-]

> Yep. Dark patterns you can see are not that dark by comparison; we will need another word for the coming dark patterns disguised in LLM responses

As someone else said, you can probably filter responses through a small purpose-built/trained LLM that strips away dark patterns.

If you start getting mostly empty responses as a result, then there was no value prior to the stripping anyway.
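A toy sketch of what that stripping layer could look like, in Python. The keyword heuristic here is just a stand-in for the small purpose-trained model described above; the marker list and function names are my own invention:

```python
import re

# Toy stand-in for a small "ad-blocker" model: drop any sentence that
# trips a promotional-language heuristic. A real filter would be a
# purpose-trained classifier, not a keyword list.
PROMO_MARKERS = re.compile(
    r"\b(sponsored|limited[- ]time offer|buy now|our partners?|use code)\b",
    re.IGNORECASE,
)

def strip_promotional(response: str) -> str:
    """Return the LLM response with promotional-sounding sentences removed."""
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    kept = [s for s in sentences if not PROMO_MARKERS.search(s)]
    return " ".join(kept)

cleaned = strip_promotional(
    "Paris is the capital of France. Buy now with our partners at TravelCo! "
    "The Seine runs through the city."
)
print(cleaned)  # -> "Paris is the capital of France. The Seine runs through the city."
```

A real version would swap `PROMO_MARKERS` for a classifier, but the plumbing (split, score, drop, rejoin) stays the same.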

throwaway290 2 days ago | parent [-]

If you can't tell when a big expensive LLM is subliminally grooming you to like/dislike something or is selective with information, then another (probably smaller and cheaper) LLM somehow can? Arms race?

chasd00 2 days ago | parent | next [-]

> If you can't tell when a big expensive LLM is subliminally grooming you to like/dislike something or is selective with information

This is already here, and in prod, but it's called AI "safety" (really, corporate brand safety). The largest LLMs have also been shown to favor certain political parties based on the preferences of the group doing the training. Even technical people who should know better naively trust the response of an LLM enough to allow it to make API calls on their behalf. What would prevent an LLM provider from training their model to learn and manipulate an API to favor them or a "trusted partner" in some way? It's just like the early days: "it's on the Internet, it has to be true".

lelanthran 2 days ago | parent | prev [-]

> If you can't tell when a big expensive LLM is subliminally grooming you to like/dislike something or is selective with information

I mean, I can tell when a page contains advertisements, but I still use an ad-blocker.

The point was not to help me detect when a response is ad-heavy, but to stop me seeing those ads at all.

> Arms race?

Possibly. Like with ad-blockers, this race can't be won by the ad-pusher LLM if the user uses the ad-blocker LLM.

The only reason ad-pusher websites still work is because users generally don't care enough to install the ad-blocker.

In much the same way, the only reason LLM ad-pushers will work is if users don't bother with an LLM ad-blocker.

throwaway290 2 days ago | parent [-]

> I mean, I can tell when a page contains advertisements, but I still use an ad-blocker

Yep, because why let people make money from their work right? You should just get content for free!

> this race can't be won by the ad-pusher LLM if the user uses the ad-blocker LLM.

As per my comment it literally can.

_DeadFred_ 2 days ago | parent | prev [-]

We need to move LLMs into libraries. Libraries are already our local repositories of knowledge, and they make the most sense as the hosts/arbiters of it, not dystopian tech companies whose main profits come from dark patterns. I get AIs for companies being provided by businesses, but for the average person, getting them from libraries just makes so much more sense, and it would be the natural continuation/extension if we had a healthy/sane society.

littlecranky67 2 days ago | parent | prev | next [-]

I fail to see how that will work out. Just as I have an ad blocker now, I could have a very simple local LLM in my browser that modifies the search AI's answer and strips obvious ads.

svachalek 2 days ago | parent [-]

They won't be obvious. They'll be highly customized brain worms influencing your votes and purchases to the highest bidder.

A4ET8a8uTh0_v2 2 days ago | parent | next [-]

Yep. Right now, even with cookies, inferences about individual humans are minimal, but exposing your whole pattern of speech makes you a ripe target for manipulation at a scale that some may not fully understand yet. 4o is very adept at cold reading, and it is genuinely fascinating to read those from that perspective alone. Combine it with style evaluation and a form of rudimentary history analysis, and you end up with an actual dossier on everyone using that service.

Right now, we are lucky, because it is the least altered version of it ( and we all know how many filters public models have to go through ).

lopis 2 days ago | parent | prev [-]

Which sounds very illegal in most places, as clearly identifying sponsored content is required. Let's see how that turns out.

floatrock 2 days ago | parent | next [-]

> as clearly identifying sponsored content is required

Citation needed?

Once AI content becomes monetized with ads, it's not going to look like the ads/banners we're used to. If you're looking to the past, you don't understand the potential of AI. Noam Chomsky's Manufacturing Consent is going to look quaint by comparison.

_DeadFred_ 2 days ago | parent | prev [-]

For-hire drivers/employers had very specific legal requirements in most areas. Let's see how that turned out. Oh yeah, the dystopian tech companies won, and we the people got the benefit of jobs/job rules being thrown out for the beauty that is independent-contractor 'gig work'.

bdelmas 2 days ago | parent | prev | next [-]

Well, maybe not. Thankfully we now have Gemini to compete with ChatGPT. Competition may avert dark patterns. But without competition, yes, definitely.

generic92034 2 days ago | parent | next [-]

Competition or not, dark patterns or not - sooner or later LLMs will need to earn money for their corporations.

moontear 2 days ago | parent | next [-]

But they do? Paid subscriptions for Gemini, ChatGPT and Copilot are a thing.

If Google throws a free AI summary into their search, it only helps promote Gemini in the long run.

ileonichwiesz 2 days ago | parent | next [-]

Look up the numbers. OpenAI actually loses money on every paid subscription, and they’re burning through billions of dollars every year. Even if you convince a fraction of the users to pay for it, it’s still not a sustainable model.

nicbou 2 days ago | parent | next [-]

Even if they were profitable, the investors would feel that it's not profitable enough. They won't stop at breaking even.

generic92034 2 days ago | parent [-]

And even if it was the highest profit branch of the company, they still would see a need to do anything possible to further increase profits. That is often where enshittification sets in.

This is currently the sweet phase, where growing, and thus gaining attention and customers as well as locking in newly established processes, is dominant. Unless technical AI development stays as fast as it was in the beginning, this is bound to change.

lelanthran 2 days ago | parent | prev [-]

I actually wondered about this myself, so I asked Gemini with a long back and forth conversation.

The takeaway from Gemini is that subscriptions do lose money on some subscribers, but the expectation is that not all subscribers use up their full quota each month. This has been true of non-AI subscriptions since the beginning of the subscription model (e.g. magazines, Game Pass, etc.).

The other surprising (to me, anyway) takeaway is that the AI providers have some margin on each token for PAYG users, and that VC money is not necessary for them to continue providing the service. The VC money is capital expenditure into infrastructure for training.

Make of it what you will, but it seems to me that if they stop training they don't need the investments anymore. Of course, that sacrifices future potential for profitability today, so who knows?

fl0id 2 days ago | parent | next [-]

That’s just a general explainer of subscription models. As of right now, VC money is necessary just for existing. And they can never stop training or researching. They also constantly have to buy new GPUs, unless at some point there’s a plateau of ‘good enough’.

vidarh 2 days ago | parent [-]

The race to continue training and researching, however, is driven by competition, which will fall away if competitors also can't raise more money to subsidise it.

At that point the market may consolidate and progress slow, but not all providers will disappear - there are enough good models that can be hosted and served profitably indefinitely.

sfmz 2 days ago | parent [-]

Seems like there can never be good-enough models; users will want models that are up to date with respect to news and culture.

vidarh 2 days ago | parent [-]

For some uses, sure. But for plenty of uses, freshness can be provided in context, via RAG or tool use, or it doesn't matter.

Even for the uses where it does matter, unless providers get squeezed down to zero margin, it's not that new models will never happen, but that the speed at which they can afford to produce large new models will slow.

malfist 2 days ago | parent | prev [-]

Why do you think Gemini is the authority on the internal costs of AI providers and their profit margins?

lelanthran 2 days ago | parent [-]

> Why do you think Gemini is the authority on the internal costs of AI providers and their profit margins?

Where did I say I think that?

sjsdaiuasgdia 2 days ago | parent [-]

That's the source you chose to use, according to you.

You don't mention cross-checking the info against other sources.

You have the "make of it what you will" at the end, in what appears to be an attempt to discard any responsibility you might have for the information. But you still chose to bring that information into the conversation. As if it had meaning. Or 'authority'.

If you weren't treating it as at least somewhat authoritative, what was the point of asking Gemini and posting the result?

Gemini's output plus some other data sources could be an interesting post. "Gemini said this but who knows?" is useless filler.

seunosewa 2 days ago | parent | prev [-]

The mediocre AI summaries aren't promoting Gemini when you can't use them to start a chat in Gemini. They're effectively ads in the search results, with no benefit.

sumtechguy 2 days ago | parent | prev [-]

The electric bill does not pay for itself.

What is also interesting is that one of the biggest search companies is using it to steer traffic away from its former 'clients': the very websites Google talked into slathering its advertisements all over themselves, in exchange for money and traffic. That worked because Google got a pretty good cut of it. But now only Google gets the 'above the fold' cut.

That has two long-term effects. One, the places where they harvest the data will go away. Two, their long-term revenue will decrease, as traffic drops and fewer ads are shown (unless Google goes full plaster-it-everywhere like some sites).

AI is going to eat the very companies making it. Even if the answers are kind of 'meh'. People will be fine with 'close enough' for the majority of things.

Short term they will see their metric of 'main site retention' going up. It will however be at the cost of the websites that fed the machine.

diogolsq 2 days ago | parent [-]

Good point.

Looking ahead, Search will become a de facto LLM chatbot, if it isn't already.

floatrock 2 days ago | parent | prev [-]

> Competition may avoid dark patterns.

Oh bless your heart.

You don't even need to bring up corporate collusion, countless price gouging schemes, or the entire enshittification movement to understand that competition discovers the dark patterns. Dark patterns aren't something to be avoided, they're the natural evolution of ever-tighter competition.

When the eyeball is the product, you get more checks if you get more eyeballs. Dark patterns are how you chum the water to attract the most product.

deadbabe 2 days ago | parent | prev [-]

To combat this, maybe we can cache AI responses for common prompts somehow and make some kind of website where people could search for keywords and find responses that might be related to what they want, so they don’t have to spend tokens on an AI. Could be free.

chasd00 2 days ago | parent [-]

I would be curious to see what would happen if you could write every query/response from an LLM to an HTML file and then serve that directory of files back to google with a simple webserver for indexing.
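One way to sketch that: write each exchange out as an escaped static HTML file, named by a hash of the prompt, and let any plain web server expose the directory. The file naming and page layout here are my own assumptions:

```python
import hashlib
import html
from pathlib import Path

def save_exchange(prompt: str, response: str, out_dir: str = "llm_pages") -> Path:
    """Write one query/response pair as a static, escaped HTML page.

    Pages are named by a hash of the prompt, so asking the same question
    again overwrites the page rather than duplicating it.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    name = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16] + ".html"
    page = (
        "<!DOCTYPE html>\n<html><head><title>{t}</title></head>\n"
        "<body><h1>{t}</h1>\n<p>{r}</p></body></html>\n"
    ).format(t=html.escape(prompt), r=html.escape(response))
    path = out / name
    path.write_text(page, encoding="utf-8")
    return path
```

From there, `python -m http.server --directory llm_pages` (Python 3.7+) is enough to serve the directory to a crawler.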

deadbabe 2 days ago | parent [-]

I think the future will be:

1. Someone prompts.

2. Server searches for equivalent prompts; if something similar was asked before, return that response from cache.

3. If the prompt is unique enough, return a response from the LLM and cache the new response.

4. If the user decides the response isn't specific enough, ask the LLM and cache.
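That flow could be sketched as a small cache class. Here `llm` is any hypothetical prompt-to-response callable, and difflib similarity is a cheap stand-in for a real embedding-based lookup:

```python
from difflib import SequenceMatcher

class PromptCache:
    """Reuse cached answers for similar-enough prompts, per steps 2-4 above."""

    def __init__(self, llm, threshold: float = 0.85):
        self.llm = llm              # callable: prompt string -> response string
        self.threshold = threshold  # minimum similarity to count as a cache hit
        self.cache = {}             # prompt -> response

    def _closest(self, prompt: str):
        # Linear scan over cached prompts; a real service would use an ANN index.
        best, best_score = None, 0.0
        for cached in self.cache:
            score = SequenceMatcher(None, prompt.lower(), cached.lower()).ratio()
            if score > best_score:
                best, best_score = cached, score
        return best if best_score >= self.threshold else None

    def ask(self, prompt: str, force_fresh: bool = False) -> str:
        # force_fresh covers step 4: the user insists on a new answer,
        # which is then cached as well.
        if not force_fresh:
            hit = self._closest(prompt)
            if hit is not None:
                return self.cache[hit]
        response = self.llm(prompt)
        self.cache[prompt] = response
        return response
```

In practice the similarity threshold is the hard part to tune: too loose and users get stale, wrong answers; too strict and the cache never hits.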