davidhs | 5 days ago
If the internal representation of Tesla Autopilot is similar to what the UI displays, i.e. the location of the car w.r.t. everything else, and we had a human whose internal representation was similar, with everything jumping around in consciousness, we'd be insane to allow him to drive.

Self-driving is probably "AI-hard": you'd need extensive "world knowledge", the ability to reason about your environment, and tolerance for faulty sensors (the human eye is super crappy, with all kinds of things that obscure it, such as veins and floaters).

Also, if the Waymo UI accurately represents what the system thinks is going on "out there", it is surprisingly crappy. If your conscious experience were like that while driving, you'd think you had been drugged.
ben_w | 5 days ago | parent
I agree that if Tesla's representation of what their system is seeing is accurate, it's a bad system.

The human brain's vision system makes pretty much the exact opposite mistake, which is a fun trick that is often exploited by stage magicians: https://www.youtube.com/watch?v=v3iPrBrGSJM&pp

And it is also emphasised by driving safety awareness videos: https://www.youtube.com/watch?v=LRFMuGBP15U

I wonder what we'd seem like to each other, if we could look at each other's perception as directly as we can look at an AI's perception? Most of us don't realise how much we misperceive, because it doesn't feel different in the moment to perceive incorrectly; it can't feel different in the moment, because if it did, we'd notice we were misperceiving.