sirwhinesalot 2 days ago

> Maybe what you're trying to say here is that we understand LLMs enough in such a way that you aren't spooked. Since you made up all that bullshit about me being spooked, I'm guessing that's what you mean.

Correct. I bundled you with the alarmists who speak in similar ways, as pattern-matching brains tend to do. Not a hallucination in the LLM sense, just standard probability.

> If we understand 1 percent of LLMs but only 0.1% of the human brain, that's a dramatic 10x increase in our understanding of LLMs OVER the brain. But it still doesn't change my main point: overall we. don't. understand. how. LLMs. work. This is exactly the way I would characterize our overall understanding, holistically.

And it's not how I characterize it at all. What algorithm is your brain running right now? Any idea? We have no clue. We know the algorithm the LLM is executing: it's a token prediction engine running in a loop. We wrote it. We know enough about how it works to know how to make it better (e.g., Mixture of Experts, "Reasoning").
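To make "token prediction engine running in a loop" concrete, here is a toy sketch of that loop (the model below is a made-up stub, not any real LLM library; a real one would run a transformer to produce the next-token probabilities):

    # Toy sketch of the autoregressive loop an LLM runs (stub model, not a real API).
    import random

    VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

    def toy_model(tokens):
        # A real LLM computes next-token probabilities with a transformer;
        # this stub just returns a uniform distribution over a tiny vocabulary.
        return [1.0 / len(VOCAB)] * len(VOCAB)

    def generate(prompt_tokens, max_len=10):
        tokens = list(prompt_tokens)
        while len(tokens) < max_len:
            probs = toy_model(tokens)                           # predict a distribution over the next token
            next_tok = random.choices(VOCAB, weights=probs)[0]  # sample one token from it
            if next_tok == "<eos>":
                break
            tokens.append(next_tok)                             # append it and loop again
        return tokens

    print(generate(["the", "cat"]))

The part we don't fully understand is buried inside the weights the stub skips over; the loop around them is not a mystery.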

This is not a "0.1x" or "10x" or whatever other quantitative difference; it's a qualitative difference in understanding. Not being able to predict the input-output relationship of a sufficiently large black-box algorithm does not give one carte blanche to jump to conclusions regarding what it may or may not be.

How large does a black-box model need to be before you entertain that it might be "conscious" (whatever that may actually be)? Is a sufficiently large spam filter conscious? Is it even worth entertaining such an idea? Or is it only worth entertaining for LLMs because they write text that is sufficiently similar to human-written text? Does this property grant them enough "weight" that questions regarding "consciousness" are even worth bringing up? What about a StarCraft-playing bot based on reinforcement learning? Is it worth bringing up for one? We "do not understand" how they work either.

ninetyninenine 2 days ago

>Correct. I bundled you with the alarmists who speak in similar ways, as pattern-matching brains tend to do. Not a hallucination in the LLM sense, just standard probability.

First off, what you did here is common for humans.

Second, it's the same thing that happens with LLMs. You don't know fact from fiction, and neither does the LLM, so it predicts something probable given limited understanding. It is not different. You made shit up. You hallucinated off of a probable outcome. LLMs do the same.

Third, as a human, it's on you when you don't verify the facts. If you make shit up by accident, that's your fault and your reputation on the line. It's justified here to call you out for making crap up out of thin air.

Either make better guesses or don't guess at all. For example, this guess was spot on by your own admission: "Maybe what you're trying to say here is that we understand LLMs enough in such a way that you aren't spooked."

>And it's not how I characterize it at all. What algorithm is your brain running right now? Any idea? We have no clue. We know the algorithm the LLM is executing: it's a token prediction engine running in a loop. We wrote it. We know enough about how it works to know how to make it better (e.g., Mixture of Experts, "Reasoning").

This has nothing to do with quantification; that's just an artifact of the example I'm using, and it's only there to illustrate relative differences in the amount we know.

Your characterization is that we know MUCH more about the LLM than we do about the brain. So I'm illustrating that, even though your characterization is true, the amount we know about the LLM is still minuscule. Hence the 10x improvement from 0.1% to 1%. In the end we still don't know shit; it's still at most 1% of what we need to know. Quantification isn't the point, it wasn't your point, and it's not mine. It's there to illustrate the proportion of knowledge, WHICH was indeed your POINT.

>How large does a black-box model need to be before you entertain that it might be "conscious" (whatever that may actually be). Is a sufficiently large spam filter conscious?

I don't know. You don't know either. We both don't know. Because like I said we don't even know what the word means.

>Is it even worth entertaining such an idea?

Probably not for a spam filter. But technically speaking, we. don't. actually. know.

However, qualitatively speaking, it is worth entertaining the idea for an LLM, given how similar it is to humans. We both understand WHY plenty of people are entertaining the idea. Right? You and I totally get it. What I'm saying is that GIVEN that we don't know either way, we can't dismiss what other people are entertaining.

Also, your method of rationalizing all of this is flawed. Like, you use comparisons to justify your thoughts. You don't want to think a spam filter is sentient, so you think that if the spam filter is comparable to an LLM, then we must think an LLM isn't sentient. But that doesn't logically flow, right? How is a spam filter similar to an LLM? There are differences, right? Just because they share similarities doesn't make your argument suddenly logically flow. There are similarities between spam filters and humans too! Both use neural nets? Therefore, since spam filters aren't sentient, humans aren't either? Do you see how this line of reasoning can be fundamentally misapplied everywhere?

I mean, the comparison logic is flawed, ON top of the fact that we don't even know what we're talking about... I mean... what is consciousness? And we don't actually understand the spam filter enough to know if it's sentient. If ONE aspect of your logic made sense, we could possibly say I'm just being pedantic and that certain assumptions are a given... but your logic is broken everywhere. Nothing works. So I'm not being pedantic.

>Or is it just worth entertaining for LLMs because they write text that is sufficiently similar to human written text?

Yes. Many people would agree. It's worth entertaining. This "worth" is a human measure. Not just qualitative, but also opinionated, and it is my opinion and many people's opinion that "yes", it is worth it. Hence why there's so much debate around it. Even if you don't feel it's worth "entertaining", at least you have the intelligence to understand why so many people think it's worth discussing.

>What about a StarCraft-playing bot based on reinforcement learning? Is it worth bringing up for one? We "do not understand" how they work either.

Most people are of the opinion that "no", it is not worth bringing up. It is better to ask this question about the LLM. Of course you bring up these examples because you think the comparison chains everything together. You think that if it's not worth it for the spam filter, it's not worth considering sentience for anything that is, in your opinion, "comparable" to it. And like I argued earlier, you're wrong; this type of logic doesn't work.