simianwords an hour ago
This is outdated stuff from three years ago.

> If you trained an llm only to map some input to the output of John Steinbeck

This is not a useful hypothetical, because such an LLM simply would not work: trained that way, it would never acquire generalised reasoning ability. Why do you think you have never seen a domain-specific model?

If you wanted to falsify the claim "LLMs can't reason", how would you do it? Can you come up with examples showing that one can't reason? What if we invented a brand-new board game, fed the model nothing but the rules, and saw whether it could beat a human?

Here is gpt-5.4 solving never-before-seen mathematics problems: https://epochai.substack.com/p/gpt-54-set-a-new-record-on-fr...

You could again say it's just pattern matching, but then I would argue that's the same thing we are doing.
bigfishrunning an hour ago | parent
Domain-specific LLMs absolutely exist; don't assume I've never seen one. You seem very misinformed about what is "literally not possible".