travisjungroth a day ago

I’ve realized while reading these comments that my estimation of LLMs being intelligent has significantly increased. Rather than argue any specific test, I believe no one can come up with a text-based intelligence test that 90% of literate adults can pass but the top LLMs fail.

This would mean there’s no definition of intelligence you could tie to a test where humans would be intelligent but LLMs wouldn’t.

A maybe more palatable idea is that having “intelligence” as a binary is insufficient. I think it’s more of an extremely skewed distribution. With how far humans are above the rest, you didn’t have to nail the cutoff point to get us on one side and everything else on the other. Maybe chimpanzees and dolphins slip in. But now, the LLMs are much closer to humans. That line is harder to draw. It’s actually not possible to draw it so that people are on one side and LLMs on the other.

fc417fc802 a day ago | parent | next [-]

Why presuppose that it's possible to test intelligence via text? Most humans have been illiterate for most of human history.

I don't mean to claim that it isn't possible, just that I'm not clear why we should assume that it is or that there would be an obvious way of going about it.

travisjungroth a day ago | parent [-]

Seems pretty reasonable to presuppose this when you filter to people who are literate. That’s darn near a definition of literate, that you can engage with the text intelligently.

fc417fc802 21 hours ago | parent [-]

I thought the definition of literate was "can interpret text in place of the spoken word". At which point it's worth noting that text is a much lower bandwidth channel than in-person communication. Also worth noting that, e.g., a mute person could still be considered intelligent.

Is it necessarily the case that you could discern general intelligence via a test with fixed structure, known to all parties in advance, carried out via a synthesized monotone voice? I'm not saying "you definitely can't do that" just that I don't see why we should a priori assume it to be possible.

Now that likely seems largely irrelevant and out in the weeds and normally I would feel that way. But if you're going to suppose that we can't cleanly differentiate LLMs from humans then it becomes important to ask if that's a consequence of the LLMs actually exhibiting what we would consider general intelligence versus an inherent limitation of the modality in which the interactions are taking place.

Personally I think it's far more likely that we just don't have very good tests yet, that our working definition of "general intelligence" (as well as just "intelligence") isn't all that great yet, and that in the end many humans who we consider to exhibit a reasonable level of such will nonetheless fail to pass tests that are based solely on an isolated exchange of natural language.

tsimionescu 18 hours ago | parent [-]

I generally agree with your framing, I'll just comment on a minor detail about what "literate" means. Typically, people are classed in three categories of literacy, not two. Illiterate means you essentially can't read at all. Literate means you can read and understand text to some level. But then there are people who are functionally illiterate - people who can read the letters and sound out text, but can't comprehend what they're reading well enough to function normally in society - say, being able to read and understand an email they receive at work or a news article. This difference between literate and functionally illiterate may have been what the poster above was referring to.

Note that functional illiteracy is not some niche phenomenon, it's a huge problem in many school systems. In my own country (Romania), while the rate of illiteracy is something like <1% of the populace, the rate of functional illiteracy is estimated to be as high as 45% of those finishing school.

nl a day ago | parent | prev [-]

Or maybe accept that LLMs are intelligent and it's human bias that is the oddity here.

travisjungroth a day ago | parent [-]

My whole comment was accepting LLMs as intelligent. It’s the first sentence.