xg15 2 days ago
I think this is what makes me uneasy about the whole LLM/"consciousness" debate. I may be wrong, but as far as I know, we still don't really understand how a bunch of feedforward networks and attention modules produce the kind of crazy semantic understanding and planning-in-human-language behavior we observe in LLMs. Neither do we know how the billions of neurons in a human brain do it. The debate over how similar or dissimilar LLMs are to brains wasn't settled by any kind of scientific finding; it feels like we just sort of decided at some point that they'd have to be fundamentally different, because everything else would be highly problematic.