▲ | sirwhinesalot 2 days ago |
There's a huge jump from "we cannot predict the output of an LLM given its input" to "we don't understand LLMs", or that they might be conscious, or that this is in any way equivalent to our lack of understanding of the human brain. We also don't understand (in that sense) any other ML model of sufficient size; learning features that we humans cannot come up with ourselves is the model's job. We can understand (in that sense) sufficiently small models because we have enough computational power to translate them into a classical AI model and query it. So it is a problem of scale, not of some fundamental property unique to LLMs. The bizarre take is being spooked by this. It's been true of simpler models for a very long time. Not a problem.
▲ | ninetyninenine 2 days ago | parent |
> There's a huge jump from "we cannot predict the output of an LLM given its input" to "we don't understand LLMs", or that they might be conscious or that this is in any way equivalent to our lack of understanding of the human brain.

No, it's not. There are strong similarities between artificial neural networks and the human brain. We not only understand atoms, we understand individual biological neurons. So the problem of understanding the human brain is in actuality ALSO a scaling problem. Granted, the human brain is much more complex in terms of network connections and how it rewires dynamically, but my point still stands.

Additionally, we can't even characterize what consciousness means. You're likely thinking of consciousness as some extremely complex or very powerful concept, but the word is loaded, and we know so little that we can't actually say. Consciousness could be a very trivial thing; we have no idea.

I agree that the brain is much more complex, much harder to understand, and that we understand far less of it. But that does not detract from the claim above: we fundamentally don't understand the LLM to such a degree that we can't even make a statement about whether or not an LLM is conscious. To reiterate, PART of this comes from the fact that we ALSO don't understand what consciousness itself is.

> The bizarre take is being spooked by this. It's been true of simpler models for a very long time. Not a problem.

This is a hallucination on your part. I'm not spooked at all; I don't know where you're getting that from. The tone of my initial post was annoyance, not being "spooked". I'm annoyed by all the claims from people like you saying "we completely understand LLMs". Doesn't this show how similar you are to an LLM? You hallucinated that I was spooked when I indicated no such thing.

Here's a more realistic take: you're spooked. If what I said were categorically true, then you'd be spooked by the implications, so part of what you do is choose the most convenient reality within the realm of possibility so that you aren't spooked. I understand that classifying consciousness as a trivial thing that could arise as an emergent side effect in an LLM is a spooky idea. But think rationally: given how much we don't know about LLMs, human brains, and consciousness, we in ACTUALITY don't know whether this is what's going on. We can't make a statement either way. That is the most logical position, and it has NOTHING to do with being "spooked", which is an attribute that shouldn't be part of any argument.