simianwords an hour ago

This is outdated stuff from 3 years ago.

> If you trained an llm only to map some input to the output of John Steinbeck

This is literally not possible: an LLM trained only on that mapping never develops general reasoning ability, so it's not a useful hypothetical because such a model simply would not work. Why do you think you have never seen a competitive domain-specific model?

If you wanted to falsify the claim "LLMs can't reason", how would you do it? Can you come up with an example that shows they can't? What if we invented a new board game, fed the model nothing but the rules, and saw whether it could beat a human?
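A hedged sketch of that experiment: the game, its legal-move logic, and the play loop below are all invented for illustration, and `query_model` is a hypothetical stand-in for whatever LLM API you would actually call (stubbed with a random player here so the harness runs end to end). The point is that the model sees only `RULES` and the current state, never game-specific training data.

```python
import random

RULES = (
    "Two piles of stones. On your turn, remove 1-3 stones from either pile. "
    "Whoever takes the last stone wins."
)

def new_game_state():
    # Random starting piles so the model can't memorise positions.
    return [random.randint(4, 10), random.randint(4, 10)]

def legal_moves(state):
    # A move is (pile index, stones removed).
    return [(i, n) for i in (0, 1) for n in (1, 2, 3) if n <= state[i]]

def apply_move(state, move):
    i, n = move
    nxt = state[:]
    nxt[i] -= n
    return nxt

def is_over(state):
    return state == [0, 0]

def query_model(rules_text, state):
    # Hypothetical LLM call: prompt with rules_text + state, parse a move.
    # Stubbed with a random legal move so this sketch is runnable.
    return random.choice(legal_moves(state))

def play(rules_text):
    # Player 0 is the model, player 1 a random baseline.
    state, player = new_game_state(), 0
    while not is_over(state):
        if player == 0:
            move = query_model(rules_text, state)
        else:
            move = random.choice(legal_moves(state))
        assert move in legal_moves(state), "illegal move forfeits the game"
        state = apply_move(state, move)
        player ^= 1
    return 1 - player  # whoever moved last took the last stone and wins

print(play(RULES))
```

Swapping the stub for a real API call and the random baseline for a human opponent gives the test as proposed: any consistent win rate above chance on a game the model has provably never seen is evidence against "it only pattern-matches its training data".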

Here is GPT-5.4 solving never-before-seen mathematics problems: https://epochai.substack.com/p/gpt-54-set-a-new-record-on-fr...

You could again say it's just pattern matching, but then I would argue that's the same thing we are doing.

bigfishrunning an hour ago | parent [-]

Domain-specific LLMs absolutely exist; don't assume I've never seen one. You seem very misinformed about what is "literally not possible".

https://www.ibm.com/think/topics/domain-specific-llm

simianwords an hour ago | parent [-]

There are close to zero domain-specific models that beat frontier SOTA models even in their own domain (other than a few edge cases like token extraction).

Why do you think that's the case? Let's start from there.

The real answer is that you get benefits from training on data from many sources, and those benefits add up exponentially for intelligence.

> LLMs are pattern matchers

But let's try to falsify this: can you come up with a prompt that clearly shows that LLMs can't reason?