WarmWash 3 hours ago

Robotaxi supervision is just an emergency brake switch.

Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and your foot on the pedals, ready to jump in.

estearum 3 hours ago | parent | next [-]

Nah the relevant factor, which has been obvious to anyone who cared to think about this stuff honestly for years, is that Tesla's safety claims on FSD are meaningless.

Accident rates under traditional cruise control are also well below average.

Why?

Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!

Tesla has always been able to publish the data needed to really understand performance, normalized by vehicle age and driving conditions. But they have not, for reasons that were always obvious and are now undeniable.
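To make the normalization point concrete, here is a minimal sketch with entirely made-up numbers: a system driven mostly in easy conditions can post a lower raw crash rate than human drivers even when it is no safer within any single condition.

```python
# Hypothetical illustration (all numbers invented) of why raw crash rates
# mislead when usage is concentrated in easy conditions.
from collections import namedtuple

Bucket = namedtuple("Bucket", "miles crashes")

# Miles driven and crashes, split by driving condition.
fsd = {
    "clear highway":     Bucket(9_000_000, 9),   # FSD mostly engaged here
    "urban/bad weather": Bucket(1_000_000, 8),
}
human = {
    "clear highway":     Bucket(4_000_000, 4),
    "urban/bad weather": Bucket(6_000_000, 48),  # humans drive everywhere
}

def rate(buckets):
    """Crashes per million miles, aggregated over all conditions."""
    miles = sum(b.miles for b in buckets.values())
    crashes = sum(b.crashes for b in buckets.values())
    return crashes / miles * 1e6

print(f"raw: FSD {rate(fsd):.1f} vs human {rate(human):.1f} per M miles")
# Raw rates: FSD 1.7 vs human 5.2 -- FSD "looks" 3x safer.

# Normalized: compare within each condition instead of overall.
for cond in fsd:
    f = fsd[cond].crashes / fsd[cond].miles * 1e6
    h = human[cond].crashes / human[cond].miles * 1e6
    print(f"{cond}: FSD {f:.1f} vs human {h:.1f} per M miles")
# Within each condition the rates are identical (1.0 and 8.0); the raw
# advantage comes purely from where the system gets used.
```

This is the standard Simpson's-paradox structure; publishing only the aggregate number hides it.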

abtinf 2 hours ago | parent | next [-]

Yup, after getting a Tesla with a free FSD trial period, it was obviously a death trap if used in any kind of slightly complex situation (like the highway on-ramp that was under construction for a year).

At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40mph.

The way I describe it is: yeah, it’s self-driving and doesn’t quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learner's permit.

If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.
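The metric being asked for is simple to compute; a sketch with invented numbers:

```python
# Hypothetical sketch of an interventions-per-mile metric: how often a
# human had to take over, which raw crash counts don't capture.
# All numbers are made up.

def interventions_per_1000_miles(interventions: int, miles: float) -> float:
    return interventions / miles * 1000

def miles_between_interventions(interventions: int, miles: float) -> float:
    return miles / interventions

# e.g. 4 takeovers over 600 miles of supervised driving
print(interventions_per_1000_miles(4, 600))  # ~6.7 per 1000 miles
print(miles_between_interventions(4, 600))   # 150 miles per takeover
```

For it to mean anything, the figure would also need to be broken out by road type and conditions, for the same normalization reasons discussed above.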

Also, the language when turning on FSD in the vehicle is just insulting: the whole bit about how it would be fine if it were an iPhone app, but shucks, the lawyers are just so silly and conservative that we have to call it a beta.

drob518 an hour ago | parent [-]

> the same amount of attention as supervising a teenager in the first week of their learner's permit.

Yikes! I’d be a nervous wreck after just a couple of days.

abtinf an hour ago | parent [-]

You learn when it’s good and bad. It definitely has a “personality”. It is awesome in certain situations, like bumper to bumper traffic.

I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.

ToucanLoucan 2 hours ago | parent | prev [-]

Also, if it actually worked, Tesla's marketing would literally never shut up about it, because they would have a working fully self-driving car. That would be the first, second, and third bullet point in all their marketing, and they would be right to do that. It's an incredible feature differentiator from all their competition.

The only problem is, it doesn't work.

bluGill 2 hours ago | parent | next [-]

More importantly, we would have independent researchers looking at the data and commenting. I know this data exists, but I've never seen anyone who has the data and ability to understand it who doesn't also have a conflict of interest.

abtinf an hour ago | parent | prev [-]

If it actually worked, Tesla would include an indemnity clause for all accidents while it’s active.

tzs 3 hours ago | parent | prev | next [-]

> Robotaxi supervision is just an emergency brake switch

That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.

Later, when they started expanding the service area to include highways, they moved the monitor to the driver's seat on those trips so that they could completely take over if something unsafe happened.

ssl-3 2 hours ago | parent [-]

Interesting.

I wonder whether these newly reported crashes happened with the employee in the passenger seat (e-brake only) or in the driver's seat.

tialaramex 32 minutes ago | parent [-]

Humans are extremely bad at vigilance when nothing interesting is happening. Lookout is a life-critical role on the railways that you might be assigned as a track worker: your whole job is to watch for trains and alert your co-workers when one is coming, so they can retreat to a safe position while it passes.

That seems easy. These are typically close friends; you work with them every day, rotating roles, and you'd certainly not want them injured or killed. But it turns out it's basically impossible to stay vigilant for more than an hour or two, tops. Having insisted that you aren't tired, since you're just standing there watching while your mates work hard on the track, you nevertheless lose focus and, oops, a train passes without your conscious awareness, and your colleague dies or suffers a life-changing injury.

This is awkward for any technologies where we've made it boring but not safe and so the humans must still supervise but we've made their job harder. Waymo understood that this is not a place worth getting to.

everdrive an hour ago | parent | prev | next [-]

> And if you are doing it right, you have your hands on wheel and foot on the pedals ready to jump in.

Seems like there's zero benefit to this, then. Being required to pay attention, but having nothing (i.e., driving) to keep me engaged, seems like the worst of both worlds. Your attention would constantly drift.

strangattractor an hour ago | parent | prev | next [-]

Similarly, Tesla using teleoperators for its Optimus robots is safety theater for robots that are not autonomous either. They are constantly trying to cover their inability to make anything autonomous. Cheap lidar or radar would likely have prevented those "hitting stationary objects" accidents. Just because the Führer says it does not make it so.

cma 2 hours ago | parent | prev | next [-]

They had supervisors in the passenger seat for a while but moved them back to the driver's seat, then moved some out to chase cars. In the ones where they're in the driver's seat, they were able to take over the wheel, weren't they?

Veserv 2 hours ago | parent | prev | next [-]

So the trillion dollar company deployed 1 ton robots in unconstrained public spaces with inadequate safety data and chose to use objectively dangerous and unsafe testing protocols that objectively heightened risk to the public to meet marketing goals? That is worse and would generally be considered utterly depraved self-enrichment.

Loughla 2 hours ago | parent [-]

We also dump chemicals into the water, air, and soil that aren't great for us.

Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.

yndoendo an hour ago | parent | next [-]

The EU has one good regulation: if safety can be engineered in, it must be.

OSHA also has regulations to mitigate risk: lockout/tagout.

Both mitigate externalized risks. Good regulation mitigates known risk factors; unknown ones take time to learn about.

The Apollo program learned this with Apollo 1, when the hatch was bolted on and a fire in the pure-oxygen environment killed everyone inside. After that, safety first became the base of decision making.

Veserv an hour ago | parent | prev [-]

Yes, those are bad as well. Are you seriously taking as your moral foundation that we need to poison the water supply to ensure executives get their bonuses? Is that somehow not utterly depraved self-enrichment?

UltraSane 3 hours ago | parent | prev [-]

That just makes the Robotaxi even more irresponsible.

foxyv 3 hours ago | parent [-]

I think they were so used to defending Autopilot that they got confused.