ninetyninenine 2 days ago
> There's a huge jump from "we cannot predict the output of an LLM given its input" to "we don't understand LLMs", or that they might be conscious or that this is in any way equivalent to our lack of understanding of the human brain.

No, it's not. There are huge similarities between artificial neural networks and the human brain. We not only understand atoms, we understand individual biological neurons. So the problem of understanding the human brain is, in actuality, ALSO a scaling problem. Granted, I realize the human brain is much more complex in terms of network connections and how it rewires dynamically, but my point still stands.

Additionally, we can't even characterize the meaning of consciousness. You're likely thinking consciousness is some extremely complex or very powerful concept. But the word is loaded, and we know so little about it that we can't even say. Consciousness could be a very trivial thing; we actually have no idea.

I agree that the brain is much more complex, much harder to understand, and that we understand far less of it. But this does not detract from the claim above: we fundamentally don't understand LLMs, to such a degree that we can't even make a statement about whether or not an LLM is conscious. To reiterate, PART of this comes from the fact that we ALSO don't understand what consciousness itself is.

> The bizarre take is being spooked by this. It's been true of simpler models for a very long time. Not a problem.

This is a hallucination on your part. I'm not spooked at all; I don't know where you're getting that from. The tone of my initial post was annoyance, not fear. I'm annoyed by all the claims from people like you saying "we completely understand LLMs." Doesn't this show how similar you are to an LLM? You hallucinated that I was spooked when I indicated no such thing.

I think here's a more realistic take: you're spooked. If what I said were categorically true, then you'd be spooked by the implications, so part of what you do is choose the most convenient reality within the realm of possibility such that you aren't spooked. I understand that classifying consciousness as a trivial thing that could arise as an emergent side effect of an LLM is a spooky idea. But think rationally: given how much we don't know about LLMs, human brains, and consciousness, we in ACTUALITY don't know whether this is what's going on. We can't make a statement either way. That is the most logical position, and it has NOTHING to do with being "spooked," which is an attribute that shouldn't be part of any argument.
sirwhinesalot 2 days ago | parent
Hacker News really isn't a good place for a serious discussion, so I'll just clarify my position.

I think you're spooked for the same reason I think all the "AI alarmists" whose alarmism is based on our lack of understanding of LLMs are spooked: the idea that because we "lack understanding," it follows that AI is "out of our control," or on the verge of becoming "conscious" or "intelligent," whatever that means.

Except this isn't true to me. Yes, we can't predict how inputs will map to outputs, but that's nothing unexpected. It has been true of nearly every ML model in practical use (not just those based on neural nets) for a very long time. I don't perceive this as a "lack of understanding," in the same way I don't consider it a "lack of understanding" that we can't predict the output of a support vector machine classifying email as spam, or predict how the coefficients of a radial basis function end up accurately approximating the behavior of a complex physical system. To me these are all a "lack of interpretability," which is a different thing (a quick sketch of what I mean is at the end of this comment).

This is, to me, qualitatively different from our lack of understanding of the human brain. We know the algorithm an LLM is executing, because we set it up. We know how it learns, because we invented the learning algorithm. We understand pretty well what's happening between the neurons, because it's a scaled-up version of smaller models whose behavior we have visualized and understand pretty well. We know how it "reasons" (in the sense of "thinking" models) because we trained it to "reason" in that manner. Our understanding of the human brain is not even close to this; we can't even understand the most basic of brains.

Even postulating that LLMs are conscious, whatever that actually is in reality, is nonsensical. They're not even alive! What would "consciousness" even entail for a pure function? There's no reason to bring it up other than to hype these things as more than what they are (be it positively or negatively).

> I think the fact of the matter is, if you're putting your foot down and saying LLMs aren't intelligent... you're wildly illogical and misinformed about the status quo of Artificial intelligence

They're just as intelligent as a chess engine is intelligent. They're algorithms.

> Also the characterization in the article is mistaken. It says we understand LLMs in a limited way. Yeah sure. It's as limited as our understanding of the human brain.

We understand enough about how they work to know that just forcing them to output more tokens leads to better results, and we have a good intuition as to why (see Karpathy's video on the subject). It's why, when asked a math question, they spit out a whole paragraph rather than the answer directly, and why "reasoning" is surprisingly effective (we can see from open models that reasoning often just spits out a giant pile of nonsense). More tokens = more compute = more accuracy, a bit like the number of noise-removal steps in a diffusion model.
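To illustrate the interpretability point: here's a minimal sketch, assuming scikit-learn and a made-up toy "spam" dataset. Every step of the algorithm below is fully specified and understood, yet the only practical way to know what the fitted classifier will do with a new email is to run it. That's lack of interpretability, not lack of understanding.

    # Toy sketch (assumes scikit-learn; data and labels are made up).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    emails = [
        "win a free prize now", "limited offer, claim your reward",
        "meeting moved to 3pm", "please review the attached report",
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

    # We "understand" every step here: TF-IDF features fed into an
    # RBF-kernel SVM, trained by a well-specified optimization procedure.
    clf = make_pipeline(TfidfVectorizer(), SVC(kernel="rbf"))
    clf.fit(emails, labels)

    # But the learned decision boundary lives in a high-dimensional
    # feature space; to know what it does with a new input, we evaluate it.
    print(clf.predict(["claim your free meeting report"]))

The same distinction applies to an LLM: knowing the training procedure and the architecture doesn't mean we can read off, by inspection, how a given prompt maps to a given output.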