root_axis 3 days ago
It doesn't seem clear that there is necessarily any connection between consciousness and intelligence. If anything, LLMs are evidence of the opposite. It also isn't clear what the functional purpose of consciousness would be in a machine learning model of any kind. Either way, it's clear that its absence hasn't been an impediment to the advancement of machine learning systems.
fao_ 2 days ago | parent [-]
> It doesn't seem clear that there is necessarily any connection between consciousness and intelligence. If anything, LLMs are evidence of the opposite.

This implies that LLMs are intelligent, and yet even the most advanced models are unable to solve very simple riddles that take humans only a few seconds, and are completely unable to reason about basic concepts that 3-year-olds can handle. Many of them regurgitate whole passages of text that humans have already produced. I suspect that LLMs have more in common with Markov models than many would like to assume.