daoboy 5 hours ago

It sounds like there wasn't really a counter narrative for the models to learn from. This feature of how llms accumulate information is already being gamed by seeding the internet with preferred narratives.

I'm not sure how many Medium articles, blog posts and reddit threads I need to put out before grok starts telling everyone my widget is the best one ever made, but it's a lot cheaper than advertising.

latexr 3 hours ago | parent | next [-]

> I'm not sure how many Medium articles, blog posts and reddit threads I need to put out

Probably not that many.

https://www.anthropic.com/research/small-samples-poison

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...

pjc50 4 hours ago | parent | prev | next [-]

People really like using the word "narrative". I guess we're creatures of story.

But this really highlights how much we've benefited from living in a high-trust society, where people don't just "go on the internet and tell lies" - and from the existing anti-spam and anti-SEO measures intended to cut out the 80% of the internet where people do just make things up to sell products.

LLMs are extremely post-structuralist. They really force the user to decide whether to pick the beautiful eternal fountain of plausible looking text with no ground truth, or a much harder road of distrust, verification, and old-school social proof.

eqvinox 4 hours ago | parent | prev | next [-]

I'm not sure "being gamed" is the lens I would view this particular instance through. People (some, at least) have gotten it into their heads that they can ask LLMs objective questions and get objectively correct answers. The LLM companies are doing very little to disabuse them of that belief.

Meanwhile, LLMs are essentially internet regurgitation machines, because of course they are, that's what they do. Which makes them useless for getting "hard truth" answers especially in contested or specialized fields.

I'm honestly afraid of the impact of this. The internet has enough herd bullshit on it as it is. (e.g. antivaxxers, flat earthers, electrosensitivity, vitamin/supplement junk, etc.) We don't need that amplified.

simmerup 3 hours ago | parent [-]

One impact is the Iran war.

The AI told the government what it wanted to hear, contrary to its entire security apparatus, and then it went to war assuming it could win.

joenot443 2 hours ago | parent | prev | next [-]

I have a friend who recently hit $3000 MRR with a webapp most of us could prototype in a weekend.

Nearly all his traffic comes from ChatGPT

acdha 2 hours ago | parent [-]

I’m expecting a lot of stories like that, similar to the 2000s blog boom, only to see it wither even more quickly as the AI companies switch to value-extraction mode. You’re really exposed if one company you don’t even have a contract with controls your customer supply.

sublinear 4 hours ago | parent | prev | next [-]

This is the future of advertising, and that was always the true purpose of having LLMs become the first choice for user search.

I seriously do not understand why people keep falling for this. These tools are not made free or cheap out of the kindness of anyone's heart.

teaearlgraycold 4 hours ago | parent | prev | next [-]

I’ve seen an estimate before and it’s in the low 10s.

21asdffdsa12 4 hours ago | parent | prev [-]

Can a model not just ignore, by default, anything that has no counter-argument? Like - if there were no flat earthers to be widely debunked, drop the idea of a spherical earth? It only exists because it was fought over?

rcxdude 3 hours ago | parent | next [-]

Even if you could do this rigorously (not at all obvious given how LLMs work), it's not a reliable metric: you can fabricate debate just as easily. And in this case the main issue was essentially skimming the surface of the reports without looking any deeper at the obvious red flags that it was an april-fools-level fake (which even a person can obviously fall for, but LLMs are being given a far greater level of trust for some reason).

saidnooneever 4 hours ago | parent | prev | next [-]

You would just game it the same way then. And how would it know who won an internet argument? How can it prove who is telling the truth and who's... hallucinating?

linzhangrun 4 hours ago | parent | prev | next [-]

It's not very realistic; it would significantly degrade the user experience. Many things have not been fully discussed on the internet - there isn't the luxury of that much corpus data.

21asdffdsa12 4 hours ago | parent [-]

But then a mono-opinion - aka certainty - is actually peak uncertainty? Could that occurrence count be baked in as a sort of detrimental weight?
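The "detrimental weight" idea could be sketched as a toy heuristic. To be clear, this is purely illustrative: `Claim`, `contest_weight`, and the scoring formula are invented here, and nothing like this runs inside real LLM training pipelines - it just makes the intuition concrete that heavy repetition with zero dissent can be treated as a seeding signal rather than as evidence.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    supporting: int   # number of sources asserting the claim
    dissenting: int   # number of sources disputing it

def contest_weight(claim: Claim) -> float:
    """Toy 'detrimental weight': a claim repeated many times with zero
    visible dissent is treated as suspicious (possible narrative seeding),
    while a claim that survived visible debate keeps more weight."""
    total = claim.supporting + claim.dissenting
    if total == 0:
        return 0.0
    if claim.dissenting == 0:
        # Mono-opinion penalty: more uncontested repeats -> less trust.
        return 1.0 / (1.0 + claim.supporting)
    # Contested claims keep a baseline of trust plus a dissent bonus.
    return min(1.0, 0.5 + claim.dissenting / total)

seeded = Claim("my widget is the best ever made", supporting=500, dissenting=0)
debated = Claim("the earth is spherical", supporting=500, dissenting=50)
print(contest_weight(seeded) < contest_weight(debated))  # True
```

Of course, as the sibling comments point out, this just moves the target: an attacker can fabricate the dissent too, which is exactly why an unsupervised truth signal like this doesn't hold up.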

baobun 3 hours ago | parent | next [-]

You're grasping for a reliable unsupervised truth machine. That's a fundamentally intractable problem unless you narrow it down to a WolframAlpha clone - and even that isn't something LLMs can deliver.

simmerup 4 hours ago | parent | prev [-]

We need to give the LLMs robot bodies so they can practise medicine and see the illnesses that do and don’t exist first hand

pjc50 4 hours ago | parent | prev | next [-]

> drop the idea of a spherical earth

I think I see a problem here.

sublinear 4 hours ago | parent | prev [-]

https://en.wikipedia.org/wiki/Anti-realism