autoexec 8 hours ago
> There is also a different kind of increased safety. There is no driver. No weird conversations about slaughtering goats, no sexual advances. No worrying that your driver is going to assault you or attempt to kidnap you.

There are also new risks that weren't possible before. A software error can send you into oncoming traffic. Hackers can gain control of your vehicle, either directly and remotely or via cleverly designed signage placed on the roadside. A disgruntled Waymo contractor in the Philippines can remote-drive you into a crowd of people. A flashing stoplight can leave you stranded at an intersection. The car may not see or react appropriately to any number of uncommon hazards that human drivers would recognize and avoid. Only a relatively small number of these cars have been on the road, in limited conditions, and only for a small number of years. There will be failures and risks we haven't even imagined yet.
tgsovlerkhgsel 6 hours ago
Frequency matters. One of these sets of risks is mostly theoretical (aside from the large-scale stoplight outage); the other happens often enough that anyone who takes rideshares repeatedly will have a story.

If we limit ourselves to risks that have actually manifested, not hypothetical ones, I'd rather risk getting stuck at an intersection during a city-wide power outage than deal with the weird conversations I've had on rideshares (not even counting the countless drivers who demonstrated that it is possible to drive a car without crashing for the duration of one rideshare ride while never taking their eyes off the phone for more than a few seconds at a time).
TheDong 4 hours ago
> A disgruntled waymo contractor in the Philippines can remote drive you into a crowd of people.

They cannot. Waymo's remote operators offer "nudges" to the robot driver, but they do not have full remote control. They can effectively mark a dot in the middle of a crowd on their tablet and say "your best course of action is to drive here", and the Waymo may well try to follow that suggestion, but they cannot override the brakes or the coded-in "do not hit humans" mandate, and the Waymo would stop before hitting anyone.

> Only a relatively small number of these cars have been on the road, in limited conditions, and only for a small number of years.

The average Uber driver has driven fewer miles than Waymo's software, and hasn't seen all the conditions either. Most Uber drivers have something like 5-20 years of cumulative driving experience in the city they work in; Waymo has racked up far more miles than any single human ever gets, and unlike humans, every Waymo benefits from improvements to the software.

> There will be failures and risks we haven't even imagined yet.

This is pointless fearmongering. Ketchup could cause cancer, but we have no meaningful evidence in that direction, so saying "ketchup has unknown risks we haven't imagined yet" is silly.

We know that Waymo is statistically safer than human drivers, and personally, no Waymo has made me feel unsafe yet, whereas Uber drivers often did. So Waymo already looks like a pretty clear improvement. I'll wait for actual evidence of these "unimaginable risks and failures" before I evaluate them; at this point, it would have to be a pretty bad failure to change the math.