creata 4 days ago

> As I understand them, LLMs right now don’t understand concepts.

In my uninformed opinion, it feels like there's probably some meaningful learned representation of at least common or basic concepts. Having one just seems like the easiest way for LLMs to perform as well as they do.

jmcgough 4 days ago | parent | next [-]

Humans assume that being able to produce meaningful language is indicative of intelligence, because until LLMs the only way to do this was through human intelligence.

notahacker 4 days ago | parent | next [-]

Yep. Although the average human also considered proficiency in mathematics to be indicative of intelligence until we invented the pocket calculator, so maybe we're just not smart enough to define what intelligence is.

creata 3 days ago | parent [-]

Sorry if I'm being pedantic, but I think you mean arithmetic, not mathematics in general.

Izkata 3 days ago | parent | prev [-]

Not really; we saw this decades ago: https://en.wikipedia.org/w/index.php?title=ELIZA_effect

creata 3 days ago | parent [-]

I don't think I'm falling for the ELIZA effect.* I just feel like if you have a small enough model that can accurately handle a wide enough range of tasks, and is resistant to a wide enough range of perturbations to the input, it's simpler to assume it's doing some sort of meaningful simplification inside there. I didn't call it intelligence.

* But I guess that's what someone who's falling for the ELIZA effect would say.

yunwal 4 days ago | parent | prev [-]

Your uninformed opinion would be correct:

https://www.anthropic.com/news/golden-gate-claude