outworlder 2 days ago:
They may not be "thinking" in the way you and I think; they may instead just be finding the correct output in an incredibly large search space.

> Knee jerk dismissing the evidence in front of your eyes

Anthropomorphizing isn't any better. It dismisses the negative evidence: the cases where they output completely _stupid_ things and make mind-boggling mistakes that no human with a functioning brain would make. It's clear there's some analog of "thinking" going on, but pieces are missing.

I like to say that LLMs are as if we took the part of our brain responsible for language and told it to solve complex problems on its own, without all the other brain parts, no neocortex, etc. Maybe it can do that, but it's just as likely to produce a bunch of nonsense, and it won't be able to tell the difference without the other brain areas to cross-check.