tippytippytango 5 hours ago
Above I was talking more generally about full autonomy. I agree the combined human + FSD system can be at least as safe as a human driver alone, perhaps safer, if you have a good driver. As a frequent user of FSD, I find its unreliability can be a feature: it constantly reminds me it can't be fully trusted, so I shadow drive and pay full attention. It's like having a second pair of eyes on the road. I worry that when it reaches a reliability of one incident per 10,000 miles, it will be hard to remind myself that I still need to pay attention. At that point it becomes a de facto unsupervised system, and its effective reliability falls from that of human + autonomy to that of the autonomous system alone, an enormous gap. Of course, I could be wrong. Which is why we need trusted third-party validation of these ideas.
terminalshort 5 hours ago | parent
Yeah, I agree with that. There's a potentially dangerous attention gap that plays into a fundamental weakness of the human brain: its inability to sustain attention for long periods with no interaction. Unfortunately, I don't see any way to validate this without letting the tech loose. You can't get good data on it without actual driving in real road conditions.