ofjcihen 6 days ago
It’s incredible to me that so many seem to have fallen for the “humans are just LLMs, bruh” argument, but I think I’m beginning to understand the root of the issue. People who have only studied technology “deeply” have only that frame of reference for viewing the world, so they make the mistake of assuming everything must work that way, including humans. If they had a wider frame of reference that included, for example, early childhood development, they might have enough knowledge to think outside of this box and see just how ridiculous that argument is.
gond 6 days ago | parent
That is an issue that has been prevalent in the Western world for the last 200 years, beginning possibly with the Industrial Revolution, probably earlier. That problem is reductionism, applied consistently down to the last level: discover the smallest element of every field of science, build an understanding of all the parts from the smallest part upwards, and from the understanding of the parts, develop an understanding of the whole. Unfortunately, this approach does not yield understanding; it yields know-how.
dmacfour 6 days ago | parent
I have a background in ML and work in software development, but studied experimental psych in a past life. It's actually kind of painful watching people slap phrases related to cognition onto things that aren't even functionally equivalent to their namesakes, then parade them around like some kind of revelation. It's also a little surprising that there is no interest (at least publicly) in using cognitive architectures in the development of AI systems.