sangnoir 9 days ago
> Why should we expect a general-purpose instruction-tuned LLM to get this right in the first place?

The argument goes: language encodes knowledge, so from the vast reams of training data the model will have encoded the fundamentals of electromagnetism. This rests on the belief that LLMs, being adept at manipulating language, are therefore inchoate general intelligences, and that attaining AGI is just a matter of scaling parameters and/or training data on top of the existing LLM foundations.
TheOtherHobbes 9 days ago
Which is like saying that if you read enough textbooks you'll become an engineer/physicist/ballerina/whatever.
garyfirestorm 9 days ago
This could be up for debate - https://www.scientificamerican.com/article/you-dont-need-wor...
Vampiero 8 days ago
Yeah, but language sucks at encoding the locality relations of a 2D picture such as a circuit diagram. Language is fundamentally a 1D medium. And I'm baffled that HN is not picking up on that and ACTUALLY BELIEVES that you can achieve AGI with a simple language model scaled to billions of parameters. It's as futile as trying to explain vision to a blind man using "only" a few billion words. There's simply no string of words that can create a meaningful representation in the mind of the blind man.
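A minimal sketch of the locality point, assuming a row-major serialization of a W-wide grid (the grid size, cell indices, and helper name are made up for illustration): cells that touch vertically in 2D land W tokens apart once the grid is flattened into a 1D sequence.

    # Illustration only: flattening a 2D grid into a 1D token stream,
    # as any text serialization must, stretches vertical neighbours apart.
    W, H = 8, 8  # hypothetical grid dimensions

    def flatten_index(row, col, width=W):
        # Row-major position of a 2D cell in the 1D sequence
        return row * width + col

    a, b = (3, 5), (4, 5)  # vertically adjacent cells in 2D
    print(abs(flatten_index(*a) - flatten_index(*b)))  # 8 == W, not 1

So whatever 2D structure a model recovers has to be inferred from positional patterns; it isn't present in the 1D representation itself.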