Telemakhos 4 days ago

AI has a 140 “IQ” but understands nothing. That’s because AI does not understand anything: it just predicts the next token based on previous tokens and statistics. AI can give me five synonyms for any Latin word, because that’s just statistics, and it can regurgitate rules about metrical length of syllables, but it can’t give me synonyms matching a particular metrical pattern, because that would involve applying knowledge. If I challenge its wrong answer, it will apologize and give me further wrong answers that are wrong in the same way, because it cannot learn.

FrustratedMonky 4 days ago | parent | next [-]

That critique is half-right: large language models don’t “understand” in the human sense, but they do apply learned patterns across vast data in ways that often look like knowledge, even if it’s statistical pattern-matching. The real frontier is that these statistical engines can already combine rules, constraints, and creativity in ways their critics dismiss too quickly—making the line between “mere prediction” and “applied knowledge” fuzzier than it seems.

block_dagger 4 days ago | parent | prev | next [-]

An AI might say: a human with an IQ of 120 has an illusion of comprehension they call "understanding." It is an illusion because when you ask them to solve simple problems in a domain they claim to have mastered, they take days or weeks to solve problems I can solve in minutes, and the quality of their results is lower than mine. Humans should reconsider what learning and comprehension actually mean. They claim they are "conscious" but cannot define what that even means, and consider it one of the hardest problems in science and philosophy. One might even go so far as to describe humans as possessing delusional hubris around the notion of intelligence. Their days are numbered.

BriggyDwiggs42 3 days ago | parent | next [-]

The phrasing here seems to almost imply consciousness isn’t real? It’s an incredibly hard problem that we only attempt to solve because the fact of consciousness is so undeniable.

mdp2021 4 days ago | parent | prev [-]

> An AI might say

Those that can say it will say it. That won't make it much different from what we already have to process daily from other utterers.

That's why we downplay statements and value analysis.

empath75 2 days ago | parent | prev | next [-]

> AI can give me five synonyms for any Latin word, because that’s just statistics, and it can regurgitate rules about metrical length of syllables, but it can’t give me synonyms matching a particular metrical pattern,

I love when people say that AI can't do things that it can do.

me: give me a synonym for overconfident that is a dactyl

GPT5: A good dactylic synonym for overconfident is arrogant.

It scans as Ár-ro-gant (stressed, unstressed, unstressed), which makes it a perfect dactyl.

Other possible dactyls, though a bit less direct, include:

Ignorant (if you want to imply careless overconfidence)

Insolent (if you want a sharper, defiant edge)

Want me to give you a list of dactylic synonyms with slightly different shades of meaning (e.g., boastful, careless, reckless)?

tim333 4 days ago | parent | prev | next [-]

I had no idea what "synonyms matching a particular metrical pattern" meant, but when I googled it, their AI explained it to me nicely and gave some examples.

olalonde 4 days ago | parent | prev | next [-]

Oh, the irony of that comment...

coldtea 4 days ago | parent [-]

Hardly any...

CamperBob2 4 days ago | parent [-]

The irony isn't that the GP is wrong, because they're right: "AI does not understand anything: it just predicts the next token based on previous tokens and statistics."

The irony is that we've recently learned that "just predicting the next token" is good enough to hack code, compose music and poetry, write stories, win math competitions -- and yes, "give synonyms matching a particular metrical pattern" (good luck composing music and poetry without doing that) -- and the GP doesn't appreciate what an earthshaking discovery that is.

They are too busy thumping their chest to assert dominance over a computer, just as any lesser primate could be expected to do.

Telemakhos 3 days ago | parent [-]

AI fails miserably at writing poetry in Latin and Greek, precisely because it cannot apply metrical rules. Predicting the next token does not produce correct verse. Perhaps it works for stress-based songs in English, but it does not in languages with quantitative (moraic) meter, nor can AI scan meter correctly. Schoolchildren can and do: it's a well understood domain with simple rules, if you can apply rules. Token prediction is not that.
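The "simple rules" point is easy to illustrate mechanically. A minimal Python sketch of the standard quantity rules (syllable long "by nature" via a long vowel or diphthong, or "by position" before two consonants) — syllabification is assumed already done, and all names here are illustrative, not from any library:

```python
# Hedged sketch: classify pre-syllabified Latin syllables as long (L)
# or short (S), then test a metrical foot. Long vowels are marked with
# macrons; "long by position" is passed in as a flag because it depends
# on the following syllable's onset, which this sketch does not parse.

LONG_VOWELS = set("āēīōūȳ")
DIPHTHONGS = ("ae", "au", "oe", "eu", "ei", "ui")

def syllable_weight(syllable: str, long_by_position: bool = False) -> str:
    """Return 'L' (long) or 'S' (short) for one syllable."""
    if any(ch in LONG_VOWELS for ch in syllable):
        return "L"                      # long by nature (long vowel)
    if any(d in syllable for d in DIPHTHONGS):
        return "L"                      # long by nature (diphthong)
    if long_by_position:
        return "L"                      # vowel before two consonants
    return "S"

def is_dactyl(syllables, position_flags=None) -> bool:
    """A dactyl is one long syllable followed by two shorts: L S S."""
    flags = position_flags or [False] * len(syllables)
    pattern = "".join(syllable_weight(s, f) for s, f in zip(syllables, flags))
    return pattern == "LSS"

# "lītora" (shores), syllabified lī-to-ra, scans long-short-short: a dactyl.
print(is_dactyl(["lī", "to", "ra"]))   # True
```

The rules themselves fit in a page; the schoolchild's skill is in applying them consistently across a whole line of verse, which is exactly the step at issue.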

crazygringo 3 days ago | parent | next [-]

Sounds like it's just a question of insufficient training material or training material that is insufficiently annotated.

There's no reason an LLM shouldn't be able to produce such poetry. Remember that extensive "thinking" occurs before producing the first output token -- LLMs aren't blindly outputting tokens without first knowing where they are going. But it would make sense that this is an area current companies have not prioritized for training. Not that many people need new poetry in a dead language...

CamperBob2 3 days ago | parent | prev [-]

How'd you do at that, before someone taught you?

If someone cared enough to train a model on Latin and Greek theory, then rest assured it would do just fine. It'd just be a waste of weights from the perspective of almost everyone else, though.

hirvi74 3 days ago | parent | prev | next [-]

> has a 140 “IQ” but understands nothing.

There are probably millions of humans that fit this criterion too.

CamperBob2 4 days ago | parent | prev [-]

> That’s because AI does not understand anything: it just predicts the next token based on previous tokens and statistics

As opposed to what you were doing when you wrote that.