RobRivera 3 hours ago

Actually, this is the crux, and the nuance that makes discussing LLM specifics a pain in the general space.

If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

Instead, what you will receive is text that follows the statistically derived most likely response (in accordance with the perplexity tuning) to such a question.
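A minimal sketch of what "statistically derived most likely response" means mechanically: a toy next-token sampler over hypothetical model scores, with a temperature knob controlling how sharply it favours the top candidate. The `logits` dict and function name are illustrative stand-ins, not any real model's API.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample the next token from raw model scores.

    `logits` maps candidate tokens to hypothetical model scores.
    Low temperature concentrates probability on the top-scoring
    token; high temperature flattens the distribution.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: v / total for tok, v in exps.items()}
    # Draw one token according to the resulting probabilities.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding at the tail
```

The point of the sketch: the model never "solves" anything; it only draws the continuation its training distribution makes most likely.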

simianwords 25 minutes ago | parent | next [-]

> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

This shows that you have very little idea of how LLMs work.

An LLM that is trained only on John Steinbeck will not work at all. It simply does not have generalised reasoning ability. It necessarily needs inputs from every source possible, including programming and maths.

You have completely ignored that LLMs have a _generalised_ reasoning ability that they derive from disparate sources.

bigfishrunning 13 minutes ago | parent [-]

LLMs have the ability to convince you that they've "reasoned". Sometimes an application will loop the output of an LLM back into its input to produce a "chain of reasoning".

This is not the same thing as reasoning.
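The output-to-input loop described above can be sketched in a few lines. Here `llm_complete` is a hypothetical prompt-to-continuation callable standing in for any real model API; the prompt wording and `"Answer:"` stop marker are assumptions for illustration.

```python
def chain_of_thought(llm_complete, question, max_steps=4):
    """Feed the model's own output back in as input until it commits
    to an answer or runs out of steps.

    `llm_complete` is a hypothetical callable (prompt -> continuation);
    nothing here depends on a specific provider.
    """
    transcript = f"Question: {question}\nLet's think step by step.\n"
    for _ in range(max_steps):
        continuation = llm_complete(transcript)
        transcript += continuation  # the model's output becomes its next input
        if "Answer:" in continuation:  # stop once the model commits to an answer
            break
    return transcript
```

Each iteration is still just next-text prediction conditioned on a longer prompt; the loop changes what the model is conditioned on, not what kind of computation it performs.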

LLMs are pattern matchers. If you trained an LLM only to map some input to the output of John Steinbeck, then by golly that's what it'll be able to do. If you give it some input that isn't suitably like any of the input you gave it during training, then you'll get unpredictable nonsense as output.
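A toy illustration of the training-distribution dependence, using a word-bigram table (vastly simpler than an LLM, but the failure mode is the same shape): a prompt word never seen in training gives the model nothing to continue from. The miniature "corpus" below is an invented stand-in, not actual Steinbeck.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions in a training corpus."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, seed_word, length=8):
    """Continue from seed_word by sampling observed followers.

    If the current word never appeared in training, there are no
    followers to sample: generation simply stops.
    """
    out = [seed_word]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:  # off-distribution: the model is stuck
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Trained on a toy sentence, a prompt like "integrate" produces nothing beyond the prompt itself, while an in-distribution word yields fluent-looking continuations of the training text.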

simianwords 4 minutes ago | parent [-]

This is outdated stuff from 3 years ago.

> If you trained an LLM only to map some input to the output of John Steinbeck

This is literally not possible, because the LLM does not acquire generalised reasoning ability. It is not a useful hypothetical, because such an LLM will simply not work. Why do you think you have never seen a domain-specific model?

If you wanted to falsify the claim "LLMs can't reason", how would you do that? Can you come up with some examples that show they can't reason? What if we came up with a new board game with some rules and saw whether an LLM could beat a human at it, feeding it just the rules of the game and nothing else?

Here is gpt-5.4 solving never-before-seen mathematics problems: https://epochai.substack.com/p/gpt-54-set-a-new-record-on-fr...

You could again say it's just pattern matching, but then I would argue that it's the same thing we are doing.

bigfishrunning a few seconds ago | parent [-]

Domain-specific LLMs absolutely exist; don't assume I've never seen one. You seem very misinformed about what is "literally not possible".

https://www.ibm.com/think/topics/domain-specific-llm

netdevphoenix 3 hours ago | parent | prev [-]

> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you amd expect it to be right.

Isn't this obvious? There is not enough latent knowledge of math there for current LLMs to approximate anything resembling an integral.

RobRivera 3 hours ago | parent | next [-]

It's obvious to me.

It's obvious to you.

It isn't obvious to the person I am responding to, and it isn't obvious to the majority of individuals I speak with on the matter (which is why, for me personally, AI goes in the same bucket as religion and politics: topics polite conversation simply avoids).

simianwords 33 minutes ago | parent | next [-]

It’s obvious to me. What point are you trying to make? It’s not religion; it’s easily falsifiable.

LLMs can reason about integrals just as well as in a literary context. You suggested that a model not trained on literature can’t reason about it. But why does that matter?

kenjackson an hour ago | parent | prev [-]

Wait -- I'm fairly certain this is obvious to the person you were responding to. It may not be obvious to a lay person (who may not even know LLMs are trained at all). But I think this is obvious to almost all people with even a small understanding of LLMs.

Talanes 3 hours ago | parent | prev [-]

Now what if we ask the LLM to write about social media? Do you think the output would be similar to what you'd get if we had a time machine to bring the actual man back and have him form his own thoughts firsthand?

bigfishrunning 12 minutes ago | parent [-]

It may be stylistically similar, but it's impossible to predict what the content would be.