terminalshort 5 hours ago

If there were actually a rate of one life-threatening accident per 10,000 miles with FSD, it would be so obvious it would be impossible to hide. So I have to conclude the cars are actually much safer than that.
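
As a rough back-of-envelope in Python (the fleet mileage is a made-up placeholder, not a sourced figure):

    # Why a 1-per-10,000-mile serious-accident rate would be hard to hide.
    # fleet_miles_per_year is a hypothetical assumption for illustration only.
    fleet_miles_per_year = 1_000_000_000        # assume ~1B supervised FSD miles/year
    miles_per_serious_accident = 10_000         # the rate from the comment above

    implied_accidents_per_year = fleet_miles_per_year / miles_per_serious_accident
    print(implied_accidents_per_year)           # 100000.0 serious accidents per year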

buran77 4 hours ago

FSD never drives alone. It's always supervised by another driver who is legally responsible for correcting it. More importantly, we have no independently verified data about the self-driving incidents. Quite the opposite: Tesla has repeatedly obscured data or impeded investigations.

I've made this comparison before, but student drivers under instructor supervision (with secondary controls) also rarely crash. Are they the best drivers?

I am not a pilot, but I have flown a plane many times while supervised by one. I never took off, never landed, but I also never crashed. Am I better than a real pilot, or even in any way a competent one?

tippytippytango 5 hours ago

Above I was talking more generally about full autonomy. I agree the combined human + FSD system can be at least as safe as a human driver alone, perhaps safer, if you have a good driver. As a frequent user of FSD, I find its unreliability can be a feature: it constantly reminds me it can't be fully trusted, so I shadow drive and pay full attention. So it's like having a second pair of eyes on the road.

I worry that when it gets to 10,000 miles per incident reliability, it's going to be hard to remind myself that I need to pay attention. At that point it becomes a de facto unsupervised system, and its reliability falls to that of the autonomous system alone rather than that of human + autonomy, which is an enormous gap.
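
A toy model of that gap, with made-up numbers for the failure rate and for how often the supervising driver catches a failure:

    # Toy model of the supervision gap. Assumes the car needs an intervention
    # once per `miles_per_failure` and the human catches `catch_rate` of those.
    def miles_per_incident(miles_per_failure, catch_rate):
        # An incident is a failure the supervising human fails to catch.
        return miles_per_failure / (1 - catch_rate)

    print(round(miles_per_incident(10_000, 0.99)))  # attentive driver: ~1,000,000 miles/incident
    print(round(miles_per_incident(10_000, 0.0)))   # driver tuned out: back to 10,000 miles/incident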

Of course, I could be wrong. Which is why we need some trusted third party validation of these ideas.

terminalshort 5 hours ago

Yeah, I agree with that. There's a potentially dangerous attention gap that plays right into a fundamental weakness of the human brain: its inability to pay attention for long periods with no interaction. Unfortunately, I don't see any way to validate this without letting the tech loose. You can't get good data without actual driving in real road conditions.
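
For a sense of how much real-world driving validation takes, here's the standard rule-of-three bound (it assumes incidents are independent; the target rates are just examples):

    import math

    # Incident-free miles needed to claim, at the given confidence, that the
    # true rate is below one incident per `claimed_miles_per_incident` miles.
    # Assumes incidents arrive independently (roughly Poisson); illustrative only.
    def miles_needed(claimed_miles_per_incident, confidence=0.95):
        return -math.log(1 - confidence) * claimed_miles_per_incident

    print(round(miles_needed(10_000)))       # ~30,000 incident-free miles
    print(round(miles_needed(1_000_000)))    # ~3,000,000 incident-free miles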

Veserv 4 hours ago

At a certain point you do need to test in real road conditions. However, there is absolutely no need to jump straight from testing in lab conditions to “testing” with unmonitored, untrained end users.

You use professional, trained operators with knowledge of the system's design and operation, following a designed safety plan that minimizes prototype risk. At no point should your test plan increase the danger to members of the public. Only when you fix problems faster than that test procedure can find them do you expand scope.

If you follow the standard automotive pattern, you then expand scope to your untrained but informed employees using monitored systems. Then to untrained but informed employees using production systems. Then to informed early-release customers. Then, once you stop being able to find problems regularly at any of those levels, you do a careful, monitored release to the general public, verifying that the safety properties are maintained. Only then do you have a fully released “safe” product.
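
A rough sketch of that gating in Python (the stage names paraphrase the comment; the exit criterion and mileage threshold are invented for illustration):

    # Sketch of the staged release described above. Scope only expands when the
    # current stage has stopped surfacing new problems. Threshold is illustrative.
    STAGES = [
        "trained professional operators under a designed safety plan",
        "untrained but informed employees, monitored systems",
        "untrained but informed employees, production systems",
        "informed early-release customers",
        "careful, monitored general-public release",
        "fully released product",
    ]

    def may_expand_scope(new_problems_found, clean_miles, required_clean_miles=1_000_000):
        return new_problems_found == 0 and clean_miles >= required_clean_miles

    stage = 0
    if may_expand_scope(new_problems_found=0, clean_miles=2_000_000):
        stage += 1
    print(STAGES[stage])   # -> "untrained but informed employees, monitored systems"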