isodev 6 hours ago

> Neural networks excel at judgment

I don’t think they do. I think they excel at outputting echoes of their training data that best fit (rhyme with, contextually) the prompt they were given. If you try using Claude with an obscure language or use case, you will notice this effect even more: it will keep pulling toward things it knows that aren’t at all what was asked, or “the best judgement” for what’s needed.

jauntywundrkind 4 hours ago | parent | prev | next [-]

Hear, hear. Code uniquely has an incredible volume of data. And incredibly good ways to assess & test its weights, to immediately find out if it's headed the right way on the gradient.
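
For instance, a minimal sketch of what "immediately assess & test" can look like in practice: run a test suite against model-generated code and use the result as a pass/fail signal. Assumes pytest is installed; the file names are hypothetical.

    import pathlib
    import subprocess
    import tempfile

    def pass_signal(solution_code: str, test_code: str) -> bool:
        """Return True iff the generated solution passes its tests."""
        with tempfile.TemporaryDirectory() as tmp:
            # Drop the candidate and its (hypothetical) tests side by side.
            pathlib.Path(tmp, "solution.py").write_text(solution_code)
            pathlib.Path(tmp, "test_solution.py").write_text(test_code)
            result = subprocess.run(["pytest", "-q", "--tb=no"],
                                    cwd=tmp, capture_output=True)
            # Exit code 0 means every test passed.
            return result.returncode == 0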

geraneum 3 hours ago | parent [-]

> And incredibly good ways to assess & test its weights

What weights are you referring to? How does [Claude?] Code do that?

rybosworld 5 hours ago | parent | prev | next [-]

Neural nets have been better at classifying handwriting (MNIST) than the best humans for a long time. This is what the author means by judgement.

They are super-human in their ability to classify.
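
For a sense of what that looks like, a minimal sketch of the supervised setup, assuming PyTorch and torchvision are available (architecture and hyperparameters are illustrative, not tuned):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # Standard MNIST: 28x28 grayscale digits, labels 0-9.
    train = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
    loader = DataLoader(train, batch_size=64, shuffle=True)

    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(28 * 28, 128), nn.ReLU(),
                          nn.Linear(128, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for images, labels in loader:  # a single epoch already classifies well
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()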

verdverm 5 hours ago | parent | next [-]

Classifiers and LLMs get very different training and objectives; it's a mistake to draw inferences from MNIST about coding agents, or LLMs more generally.

Even within coding, their capability varies widely between contexts, and even between runs with the same context. They are not better at judgement in coding for all cases, definitely not.
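
To make the difference concrete, a sketch of the two objectives side by side, in PyTorch with illustrative shapes: a classifier is scored against one fixed label per input, while an LLM is scored on predicting the next token at every position.

    import torch
    from torch import nn

    loss_fn = nn.CrossEntropyLoss()

    # Classifier objective: one label per image, 10 possible classes.
    logits = torch.randn(32, 10)
    labels = torch.randint(0, 10, (32,))
    clf_loss = loss_fn(logits, labels)

    # LLM objective: at every position, predict the token that comes next.
    vocab, seq = 50_000, 128
    token_logits = torch.randn(32, seq, vocab)
    tokens = torch.randint(0, vocab, (32, seq))
    # Shift by one so position t is scored against the token at t+1.
    llm_loss = loss_fn(token_logits[:, :-1].reshape(-1, vocab),
                       tokens[:, 1:].reshape(-1))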

kranner 4 hours ago | parent [-]

A lot of the context is not even explicit, unlike the case for toy problems like MNIST.

PostOnce 4 hours ago | parent | prev [-]

Tell that to all the OCR fuckups I see in all the ebooks I read.

raincole 4 hours ago | parent [-]

Your ebooks are made with handwriting recognition...? What do you read, the digital version of the Dead Sea Scrolls?

PostOnce 4 hours ago | parent | next [-]

Some of them are; most are standard typesetting, which you would think would be all the easier to OCR due to the uniformity.

But because you're curious, there are some fairly famous handwritten books that maintain their handwriting in publication, my favorite being: https://boingboing.net/2020/08/31/getting-started-in-electro...

Old manuscripts are another one; there are a LOT of those. Is that handwriting? Maybe you'd argue it's "hand-printing" because it's so meticulous.

esafak an hour ago | parent | prev [-]

They could be OCRs of scanned printed books.
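
The pipeline is roughly this; a minimal sketch assuming pytesseract (a wrapper around the Tesseract engine) and a stand-in file name:

    from PIL import Image
    import pytesseract  # requires the Tesseract binary to be installed

    # OCR a scanned page; classic confusions like 'rn' -> 'm' or
    # 'l' -> '1' creep in here and end up baked into the ebook.
    page = Image.open("page_scan.png").convert("L")  # grayscale helps
    text = pytesseract.image_to_string(page)
    print(text)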

boogrpants 5 hours ago | parent | prev [-]

> I think they excel at outputting echoes of their training data that best fit (rhyme with, contextually) the prompt they were given.

Just like people who get degrees in economics or engineering and engage in such role-play for decades. They're often pretty bad at anything they are not trained on.

Coincidentally, if you put a single American English speaker on a team of native German speakers, you will notice information transference falls apart.

Very normal physical-reality things occurring in two substrates, two mediums. As if there is a shared limitation, called the rest of the universe, attempting to erode our efforts via entropy.

An LLM is a distribution over human-generated data sets. Since humans have the same incompleteness problems in society, this affords enough statistical wiggle room for LLMs to make shit up; humans do it! Look in their data!
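
To put that "wiggle room" in concrete terms, a sketch with made-up numbers: sampling from a softmax means lower-probability continuations get picked some of the time, whether or not they're true.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(logits, temperature=1.0):
        """Sample one token index from softmax(logits / temperature)."""
        z = logits / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        return rng.choice(len(p), p=p)

    # Made-up logits: index 0 is the "right" answer, but the
    # plausible-sounding alternatives keep nonzero probability.
    logits = np.array([2.0, 1.5, 1.2, 0.3])
    picks = [sample(logits) for _ in range(1000)]
    print(np.bincount(picks) / 1000)  # index 0 wins often, never always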

We're massively underestimating reality's indifference to human existence.

There is no doing any better until we effectively break physics; by that I really mean come upon a game-changing discovery that informs us we had physics all wrong to begin with.

harry8 3 hours ago | parent | next [-]

The fact there are a lot of people around who don't think (including me at times!) doesn't mean LLMs doing that are thinking.

Much like LLMs writing text like mindless middle managers: that doesn't mean they're intelligent, more that mindless middle managers aren't.

isodev an hour ago | parent | prev [-]

> Just like people

I understand that having model-related vocabulary borrow words we use to describe human brains and cognition gets confusing. We are not the same: we don't "learn" the same way, and we certainly don't use the knowledge we possess in the same way.

The major difference between an LLM and a human is that as a human, I can look at your examples (which sound solid at first glance) and choose to truly “reason” about them in a way that allows me to judge if they’re correct or even applicable.

boogrpants 8 minutes ago | parent | next [-]

Obviously. You are not exactly the same as your nearest neighbor but have similar observable traits to outside observers.

But since you end up differentiating yourself from an LLM with vague, conceptual qualifiers (what it means to "reason") rather than empirical differences, I am left uncertain what you mean at all.

An LLM can reject false assertions and generate false positives just like a human.

Within a culture, too, individual people become pretty copy-paste distillations of their generation's customs. As a social creature you aren't that different. Really, all that sets you apart from other people or a computer is a unique meat suit.

Unfortunately for your meat suit, most people don't care that it exists and will carry on with their lives never noticing it.

Meanwhile, LLMs have massive valuations right now. Pretty sure the public has spoken on whether the differences you fail to illustrate actually matter.

perfmode 38 minutes ago | parent | prev [-]

how’s your reasoning different from LLM reasoning?