bigfishrunning an hour ago
LLMs have the ability to convince you that they've "reasoned". Sometimes an application will loop the output of an LLM back into its input to produce a "chain of reasoning", but that is not the same thing as reasoning. LLMs are pattern matchers. If you trained an LLM only to map some input to the output of John Steinbeck, then by golly that's what it'll be able to do. If you give it input that isn't suitably similar to anything it saw during training, you'll get unpredictable nonsense as output.
simianwords an hour ago | parent
This is outdated stuff from three years ago.

> If you trained an llm only to map some input to the output of John Steinbeck

This is not a useful hypothetical, because such an LLM simply wouldn't work: training only on one narrow mapping doesn't give the model any generalised reasoning ability. Why do you think you've never seen a purely domain-specific model?

If you wanted to falsify the claim "LLMs can't reason", how would you do it? Can you come up with examples showing that they can't? What if we invented a brand-new board game, fed the model nothing but the rules, and saw whether it could beat a human at it?

Here is GPT-5.4 solving never-before-seen mathematics problems: https://epochai.substack.com/p/gpt-54-set-a-new-record-on-fr...

You could again say it's just pattern matching, but then I'd argue that's the same thing we are doing.
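A minimal sketch of the "novel game" test the comment proposes, assuming the OpenAI Python SDK; the model name, the rules text, and the move notation are placeholders, not a real benchmark.

```python
# Sketch: give the model ONLY the rules of a made-up game, then ask it for a
# legal move each turn. Whether it plays well against a human is the test.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RULES = """\
You are playing a brand-new two-player board game invented for this test.
Rules: ... (the full rules of the novel game go here; nothing else is given).
On each turn, reply with exactly one legal move in the notation defined above.
"""

def model_move(history: list[str]) -> str:
    """Ask the model for its next move, given only the rules and the moves so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whichever model you want to test
        messages=[
            {"role": "system", "content": RULES},
            {"role": "user", "content": "Moves so far: " + (", ".join(history) or "none")},
        ],
    )
    return response.choices[0].message.content.strip()
```

The human opponent (or a referee script) would validate each returned move against the rules and record wins and losses; consistently legal, competitive play on a game absent from the training data is the kind of evidence the comment is asking for.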