| ▲ | simianwords 2 hours ago |
>If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

This shows that you have very little idea of how LLMs work. An LLM trained only on John Steinbeck will not work at all; it simply does not have the generalised reasoning ability. It necessarily needs inputs from every source possible, including programming and maths. You have completely ignored that LLMs have _generalised_ reasoning ability, which they derive from disparate sources.
| ▲ | bigfishrunning an hour ago | parent |
LLMs have the ability to convince you that they've "reasoned". Sometimes an application will loop the output of an LLM back into its input to provide a "chain of reasoning", but this is not the same thing as reasoning. LLMs are pattern matchers. If you trained an LLM only to map some input to the output of John Steinbeck, then by golly that's what it'll be able to do. If you give it some input that isn't suitably like any of the input you gave it during training, you'll get unpredictable nonsense as output.
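For illustration only, a minimal sketch of that loop-the-output-back-to-the-input pattern; query_llm here is a hypothetical placeholder for whatever completion API the application calls, not a real library function:

  def query_llm(prompt: str) -> str:
      """Placeholder for a call to some LLM completion endpoint."""
      raise NotImplementedError("wire this up to an actual model")

  def chain_of_reasoning(question: str, steps: int = 3) -> str:
      transcript = question
      for _ in range(steps):
          # Feed the accumulated transcript back in and ask the model to
          # continue "thinking" before giving a final answer.
          step = query_llm(transcript + "\n\nThink step by step, then continue:")
          transcript = transcript + "\n" + step
      return transcript

The loop just concatenates model output onto the prompt and resubmits it; nothing in that mechanism guarantees the intermediate text is sound reasoning rather than plausible-looking pattern completion.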
| ||||||||||||||||||||||||||