hedgehog 3 hours ago
Given the large visible differences in behavior and construction, akin to the difference between a horse and a pickup truck, I would ask the reverse question: in what ways do LLMs meet the definition of having consciousness and agency? Veering into conjecture and opinion, I tend to think a 1:1 computer simulation of human cognition is possible, and since transformers are computationally universal, they are theoretically capable of running that workload. That said, this is a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.
ACCount37 2 hours ago | parent
What about modern LLMs isn't "agentic" enough? It doesn't matter whether they're conscious for that. They're clearly capable of goal-oriented behavior.