nerdponx 9 days ago
> I couldn’t believe they whiffed it

Why should we expect a general-purpose instruction-tuned LLM to get this right in the first place? I am not at all surprised it didn't work, and I would be more than a little surprised if it had.
sangnoir 9 days ago | parent
> Why should we expect a general-purpose instruction-tuned LLM to get this right in the first place?

The argument goes: language encodes knowledge, so from the vast reams of training data, the model will have encoded the fundamentals of electromagnetism. This rests on the belief that LLMs, being adept at manipulating language, are therefore inchoate general intelligences, and that attaining AGI is just a matter of scaling parameters and/or training data on the existing LLM foundations.