vidarh 2 hours ago

Sperry's experiments make it quite clear that the comparison is not nonsensical: humans can't reliably tell why we do things either. Recognising that doesn't imbue AI with anything more. Rather, it points out that when we imply the gap is so huge, we often overestimate our own abilities.
jayd16 11 minutes ago | parent

It is nonsensical because you're simply bringing in comparisons without anything linking the two. You might as well be talking about how oranges and bicycles think, as that would be just as relevant to this discussion as how humans think. In fact, talking about "thinking" at all is already the wrong direction when trying to triage an incident like this. "Do not anthropomorphize the lawnmower" applies to AI as much as to Larry Ellison.
fluoridation an hour ago | parent

Humans at least have a mental state to work from that only they are privy to, not just their words and actions. The LLM literally cannot have deeper insight into the root cause than the user, because it can only work from information the user already has access to.
abcde666777 an hour ago | parent

Slight pushback: I think there's still a lot more consistency and coherence in a human's recollection of their motives than in an LLM's. Sometimes I think we're too eager to compare ourselves to them.