Borealid 14 hours ago

If all the training data contains semantically-meaningful sentences, it should be possible to build a network optimized for generating semantically-meaningful sentences primarily (or only).

But we don't appear to have entirely done that yet. It's just curious to me that the linguistic structure is there while the "intelligence", as you call it, is not.

dvt 13 hours ago | parent | next [-]

> If all the training data contains semantically-meaningful sentences, it should be possible to build a network optimized for generating semantically-meaningful sentences primarily (or only).

Not necessarily. You can check this yourself by building a very simple Markov chain. Feed it Moby Dick or whatever and generate text from the resulting weights, and this gap will be way more obvious: the generated sentences will be "grammatically" correct, but semantically often very wrong. Clearly LLMs are way more sophisticated than a home-made Markov chain, but I think it's helpful to see the probabilities kind of "leak through."
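A minimal sketch of such a home-made Markov chain, for the curious (the tiny hard-coded corpus here just stands in for feeding it Moby Dick; the function names are ours, not from any library):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain, picking each next word with its observed frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: word never seen with a successor
        out.append(rng.choice(followers))
    return " ".join(out)

# Stand-in corpus; swap in the full text of Moby Dick for the real effect.
corpus = ("the whale swam the sea and the whale dove deep "
          "and the sea was calm and the whale was white")
chain = build_chain(corpus)
print(generate(chain, "the", length=8))
```

Each local word pair is one the corpus actually contains, so the output tends to look grammatical, while the walk as a whole has no global meaning to preserve. That locality is exactly the gap being described.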

WarmWash 13 hours ago | parent [-]

But there is a very good chance that is what intelligence is.

Nobody knows what they are saying either; the brain is just (some form of) neural net that produces output which we claim as our own. In fact, most people go their entire lives without noticing this. The words I am typing right now are just as mysterious to me as the words that pop up on screen when an LLM is outputting.

I feel confident enough to disregard duelists (people who believe in brain magic), which leaves only a neural net architecture as the explanation for intelligence, and the only two tools that neural net can have are deterministic and random processes. The same ingredients that all software/hardware has to work with.

dvt 13 hours ago | parent | next [-]

> I feel confident enough to disregard duelists

I'm a dualist, but I promise not to duel you :) We might just have some elementary disagreements, then. I'm pretty confident in my position, but I do know most philosophers generally aren't dualists (though there's been a resurgence since Chalmers).

> the brain is just (some form) of a neural net that produces output

We have no idea how our brain functions, so I think claiming it's "like X" or "like Y" is reaching.

WarmWash 13 hours ago | parent [-]

Again, unless you are a dualist, we can put comfortable bounds on what the brain is. We know it's made from neurons linked together. We know it uses mediators and signals. We know it converts inputs to outputs. We know it can only be using deterministic and random processes.

We don't know the architecture or algorithms, but we know it abides by physics and through that know it also abides by computational theory.

Jblx2 12 hours ago | parent [-]

https://www.dictionary.com/browse/duelist

WarmWash 12 hours ago | parent [-]

Thanks

Jensson 10 hours ago | parent | prev [-]

Brains invented language to express their inner thoughts; it is made to fit those thoughts. That is very different from what an LLM does with it: the LLM doesn't start with inner thoughts and learn to express them, it just learns to repeat what brains have already expressed.

staticassertion 13 hours ago | parent | prev | next [-]

Sentences only have semantic meaning because you have experiences that they map to. The LLM isn't training on the experiences, just the characters. At least, that seems about right to me.

codebje 13 hours ago | parent | prev | next [-]

Why would that be curious? The network is trained on the linguistic structure, not the "intelligence."

It's a difficult thing to produce a body of text that conveys a particular meaning, even for simple concepts, especially if you're seeking brevity. The editing process is not in the training set, so we're hoping to replicate it simply by looking at the final output.

How effectively do you suppose model training differentiates between low quality verbiage and high quality prose? I think that itself would be a fascinatingly hard problem that, if we could train a machine to do, would deliver plenty of value simply as a classifier.

thrownthatway 12 hours ago | parent | prev [-]

I’m not up to speed on what exactly all the training data is.

If it contains the entire corpus of recorded human knowledge…

And most of everything is shit