j1mr10rd4n 3 days ago

Geoffrey Hinton's recent lecture at the Royal Institution[1] is a fascinating watch. His assertion that human use of language is exactly analogous to a neural network with back-propagation really made me think about what LLMs might be able to do, and indeed about what happens in me when I "think". A common objection to LLM "intelligence" is that "they don't know anything". But in turn... what do biological intelligences "know"?

For example, I "know" how to do things like write constructs that make complex collections of programmable switches behave in certain ways, but what do I really "understand"?

I've been "taught" things about quantum mechanics, electrons, semiconductors, transistors, integrated circuits, instruction sets, symbolic logic, state machines, assembly, compilers, high-level-languages, code modules, editors and formatting. I've "learned" more along the way by trial and error. But have I in effect ended up with anything other than an internalised store of concepts and interconnections? (c.f. features and weights).

Richard Sutton takes a different view in an interview with Dwarkesh Patel[2], asserting that "learning" must include goals and reward functions, but his argument seemed less concrete and possibly just a semantic re-labelling.
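
As I understood it, the distinction he draws is between learning by predicting a given target and learning from a bare reward signal for the action you actually took. A toy caricature of the latter, a gradient-bandit-style update with made-up reward numbers:

    import numpy as np

    # No "right answer" to imitate, only a reward for the action taken.
    rng = np.random.default_rng(0)
    prefs = np.zeros(3)                      # action preferences
    true_reward = np.array([0.1, 0.8, 0.3])  # hypothetical environment

    for _ in range(2000):
        p = np.exp(prefs) / np.exp(prefs).sum()   # softmax policy
        a = rng.choice(3, p=p)
        r = rng.normal(true_reward[a], 0.1)       # noisy reward, not a label
        prefs[a] += 0.1 * r * (1 - p[a])          # nudge toward rewarded action
        prefs[np.arange(3) != a] -= 0.1 * r * p[np.arange(3) != a]

    print(prefs.argmax())  # almost certainly settles on action 1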

[1] https://www.youtube.com/watch?v=IkdziSLYzHw

[2] https://www.youtube.com/watch?v=21EYKqUsPfg

zeroonetwothree 3 days ago | parent

The vast majority of human learning is in constructing a useful model of the external world. This allows you to predict, with remarkable accuracy, the results of your own actions. By that measure, every single human knows a huge amount.
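
A toy version of that idea: gather experience of (state, action, next state), fit a model to it, and you can then anticipate the result of an action before taking it. The dynamics and numbers below are invented purely for illustration:

    import numpy as np

    # Hidden dynamics the learner doesn't know: s' = A s + B a + noise
    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.5], [1.0]])

    S, Acts, S_next = [], [], []
    s = np.zeros(2)
    for _ in range(500):                      # gather experience by acting
        a = rng.normal(size=1)
        s_next = A @ s + B @ a + rng.normal(0, 0.01, size=2)
        S.append(s)
        Acts.append(a)
        S_next.append(s_next)
        s = s_next

    # Fit a "world model" to the experience, then use it to predict outcomes.
    X = np.hstack([np.array(S), np.array(Acts)])
    W, *_ = np.linalg.lstsq(X, np.array(S_next), rcond=None)
    print(np.round(W.T, 2))   # recovers something close to [A | B]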