lukeify 6 hours ago

Most humans also write plausible code.

tartoran 6 hours ago | parent | next [-]

LLMs piggyback on human knowledge encoded in all the texts they were trained on without understanding what they're doing.

Humans would execute that code and validate it. From "plausible" it becomes "hey, it does this, and this is what I want." LLMs skip that part; they really have no understanding other than the statistical patterns they infer from their training, and they really don't need any for what they are.
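That execute-and-validate loop can be sketched roughly as follows (a hypothetical illustration, not anyone's actual workflow; the function name and expected-output check are my own):

```python
# Hypothetical sketch of the "execute and validate" step:
# run a candidate snippet in a fresh interpreter and check its output.
import subprocess
import sys

def looks_correct(code: str, expected_output: str) -> bool:
    """Run `code` and compare its stdout against what we expect."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=10,
    )
    return result.returncode == 0 and result.stdout.strip() == expected_output

# "Plausible" code only graduates to "it does what I want" after this check.
print(looks_correct("print(sum(range(5)))", "10"))  # True: 0+1+2+3+4 == 10
```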

red75prime 2 hours ago | parent | next [-]

Could we stop using vague terms like “understanding” when talking about LLMs and machine learning? You don't know what understanding is. You only know how it feels to understand something.

It's better to describe what you can do that LLMs currently can't.

stevenhuang 36 minutes ago | parent [-]

At least it's an easy way for those who don't know what they're talking about to out themselves.

If they'd bother to see how modern neuroscience tries to explain human cognition they'd see it explained in terms that parallel modern ML. https://en.wikipedia.org/wiki/Predictive_coding

We only have theories for what intelligence even means; I wouldn't be surprised if there are more similarities than differences between human minds and LLMs, fundamentally (prediction and error minimization).

owlninja 6 hours ago | parent | prev | next [-]

They probably at least look at the docs?

stevenhuang 5 hours ago | parent | prev [-]

LLMs can execute code and validate it too, so the assertions you've made in your argument are incorrect.

What a shame your human reasoning and "true understanding" led you astray here.

gitaarik an hour ago | parent | prev | next [-]

All code is plausible by design
