netdevphoenix 3 hours ago

> If you built an LLM exclusively on the writings and letters of John Steinbeck, you could NOT tell the LLM to solve an integral for you and expect it to be right.

Isn't this obvious? There is not enough latent knowledge of math there to enable current LLMs to approximate anything resembling an integral.

RobRivera 3 hours ago | parent | next [-]

It's obvious to me.

It's obvious to you.

It isn't obvious to the person I am responding to, and it isn't obvious to the majority of individuals I speak with on the matter (which is why, for me, AI goes in the same bucket as religion and politics: topics to simply avoid in polite conversation).

simianwords 31 minutes ago | parent | next [-]

It’s obvious to me. What point are you trying to make? It’s not religion; it’s easily falsifiable.

LLMs can reason about integrals just as they can reason in a literary context. You suggested that if an LLM isn’t trained on literature, it can’t reason about literature. But why does that matter?

kenjackson an hour ago | parent | prev [-]

Wait -- I'm fairly certain this is obvious to the person you were responding to. It may not be obvious to a layperson (who may not even know LLMs are trained at all), but I think it's obvious to almost anyone with even a small understanding of LLMs.

Talanes 3 hours ago | parent | prev [-]

Now what if we ask the LLM to write about social media? Do you think the output would be similar to what you'd get if we had a time machine to bring the actual man back and have him form his own thoughts firsthand?

bigfishrunning 10 minutes ago | parent [-]

It may be stylistically similar, but it's impossible to predict what the content would be.