rudhdb773b 2 hours ago
Sure, the implementation details are different. I suppose I should have asked: by what definition of "consciousness and agency" do today's LLMs (with proper tooling) fall short? And if today's models don't meet your standard, what makes you think future LLMs won't get there?
hedgehog an hour ago | parent
Given the large visible differences in behavior and construction, akin to the difference between a horse and a pickup truck, I would ask the reverse question: in what ways do LLMs meet the definition of having consciousness and agency? Veering into conjecture and opinion, I tend to think a 1:1 computer simulation of human cognition is possible, and transformers, being computationally universal, are therefore theoretically capable of running that workload. But that's a bit like watching a bird in flight and imagining going to the moon: only tangentially related to engineering reality.