fao_ 2 days ago:
> It doesn't seem clear that there is necessarily any connection between consciousness and intelligence. If anything, LLMs are evidence of the opposite.

This implies that LLMs are intelligent, and yet even the most advanced models are unable to solve very simple riddles that take humans only a few seconds, and are completely unable to reason about basic concepts that three-year-olds can. Many of them regurgitate whole passages of text that humans have already produced. I suspect that LLMs have more in common with Markov models than many would like to assume.
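(For reference, the Markov model being invoked here is just a fixed-window next-token sampler. A minimal word-level bigram sketch in Python follows; the function names and the one-word context window are illustrative, not any particular system:)

    import random
    from collections import defaultdict

    # Train a word-level bigram model: map each word to the list of
    # words that followed it in the training text.
    def train(text):
        words = text.split()
        model = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    # Generate by repeatedly sampling a successor of the last word.
    # On a small corpus this reproduces long runs of the source
    # verbatim -- the "regurgitation" behaviour described above.
    def generate(model, start, length=20):
        out = [start]
        for _ in range(length):
            successors = model.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

(A bigram model conditions on exactly one preceding word; widening the window to trigrams and beyond makes verbatim reproduction of the training text even more likely, which is the regurgitation behaviour the comment describes.)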
interstice 2 days ago:
There is an awful lot of research into just how much is regurgitated versus the limits of their creativity, and as far as I'm aware that was not the conclusion the research came to. That isn't to say the reasoning that does happen isn't fragile or prone to breaking in odd ways, but I've had similar experiences dealing with other humans more often than I'd like, too.
root_axis 2 days ago:
Even accepting all that at face value, I don't see what any of it has to do with consciousness.
Uehreka 2 days ago:
I suspect that you haven't really used them much, or at least not in a while. You're spouting a lot of 2023-era talking points.