BobbyTables2 2 days ago
Indeed. I’m shocked that we train “AI” pretty much the way one would build a fancy auto-complete. Not necessarily a bad approach, but it feels like something is missing for it to be “intelligent”. It should really be called “artificial knowledge” instead.
jofla_net 2 days ago
This and the parent are both circling what I see as the main obstacle: we as a species don't know, in its entirety, how a human mind thinks (and it varies among people), so trying to "model" and reproduce it is reduced to a game of black-boxing. We black-box the mind in terms of the situations it's been observed in and how it performed; the millions of correlative inputs/outputs are the training data. Yet since we don't know the fullness of the interior and can only see its outputs, it becomes something of a Plato's cave situation. We believe it 'thinks' a certain way, but we cannot empirically say it performed a task that way, so unlike most other engineering problems, we are grasping at straws while trying to reconstruct it. This doesn't mean a human mind's inner workings can't ever be 100% reproduced, but not until we understand it further.
| ||||||||
kragen 2 days ago
"What do you mean, they talk?" "They talk by flapping their meat at each other!" |