BoorishBears 5 days ago

I've ridden just under 1,000 miles in autonomous (no scare quotes) Waymos, so it's strange to see someone letting Tesla's abject failure inform their opinions on how much progress AVs have made.

Tesla, the company that got fired as a customer by Mobileye for abusing its L2 tech, is your yardstick?

Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.

thephotonsphere 4 days ago | parent | next [-]

Tesla uses only cameras, which sounds crazy (reflections, direct sunlight, fog, smoke, etc.).

LiDAR and radar assistance feel crucial.

https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...

latexr 4 days ago | parent | next [-]

Indeed. Mark Rober did some field tests on that exact difference. LiDAR passed all of them, while Tesla’s camera-only approach failed half.

https://www.youtube.com/watch?v=IQJL3htsDyQ

randallsquared 4 days ago | parent [-]

I'm not sure the guy who did the Tesla crash test hoax and (partially?) faked his famous glitterbomb pranks is the best source. I would separately verify anything he says at this point.

latexr 4 days ago | parent [-]

> Tesla crash test hoax

First I’m hearing of that. In doing a search, I see a lot of speculation but no proof. Knowing the shenanigans perpetrated by Musk and his hardcore fans, I’ll take theories with a grain of salt.

> and (partially?) faked his famous glitterbomb pranks

That one I remember, and the story is that the fake reactions were done by a friend of a friend who borrowed the device. I can’t know for sure, but I do believe someone might do that. Ultimately, Rober took accountability, recognised that it hurt his credibility, and edited that part out of the video.

https://www.engadget.com/2018-12-21-viral-glitter-bomb-video...

I have no reason to protect Rober, but also have no reason to discredit him until proof to the contrary. I don’t follow YouTube drama but even so I’ve seen enough people unjustly dragged through the mud to not immediately fall for baseless accusations.

One I bumped into recently was someone describing the “fall” of another YouTuber, and in one case showed a clip from an interview and said “and even the interviewer said X about this person”, with footage. Then I watched the full video and at one point the interviewer says (paraphrased) “and please no one take this out of context, if you think I’m saying X, you’re missing the point”.

So, sure, let’s be critical about the information we’re fed, but that cuts both ways.

ACCount37 4 days ago | parent | prev | next [-]

Humans use only cameras. And humans don't even have true 360 coverage on those cameras.

The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.

tfourb 4 days ago | parent | next [-]

That's actually categorically false. We also use sophisticated hearing, a well developed sense of inertia and movement, air pressure, impact, etc. And we can swivel our heads to increase our coverage of vision to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D and we sport a quite impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to the average human driver. No idea how LIDAR changes this picture, but it sure is better than vision only.

I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.

ACCount37 4 days ago | parent | next [-]

Yes, human vision is so bad it has to rely on a swivel joint and a set of mirrors just to approximate 360 coverage.

Modern cars can have 360 vision at all times, as a default. With multiple overlapping camera FoVs. Which is exactly what humans use to get near field 3D vision. And far field 3D vision?

The depth-discrimination ability of binocular vision falls off with distance squared. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras apart much further, so their far range binocular perception can fare better.
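That quadratic falloff is easy to sketch. For a stereo pair with baseline B, focal length f (in pixels), and disparity-matching error Δd, the depth error grows as dZ ≈ Z²·Δd/(f·B). The focal length and matching-error numbers below are assumptions for illustration, not measurements of any real camera rig:

```python
def depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.25):
    """Approximate stereo depth uncertainty: dZ ≈ Z² · Δd / (f · B)."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Assumed numbers: 1000 px focal length, 0.25 px disparity-matching error.
eyes = depth_error(50.0, baseline_m=0.065, focal_px=1000)  # ~6.5 cm eye spacing: ≈ 9.6 m error
car = depth_error(50.0, baseline_m=1.5, focal_px=1000)     # fender-to-fender cameras: ≈ 0.42 m error
```

With those assumptions, the 1.5 m baseline cuts the depth error at 50 m by more than 20x, which is the point about spacing cameras apart.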

How do humans get that "3D" at far distances then? The answer is, like it usually is when it comes to perception, postprocessing. Human brain estimates depth based on the features it sees. Not unlike an AI that was trained to predict depth maps from a single 2D image.

If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.

pixl97 4 days ago | parent | prev [-]

I mean, technically what we need is fast general intelligence.

A lot of the problems with driving aren't driving problems. They are other-people-are-stupid problems and nature-is-random problems. A good driver has a lot of ability to predict what other drivers are going to do. For example, people commonly drift slightly in the direction they are going to turn, even before putting on a signal. A person swerving in a lane is likely going to continue with dumb actions and do something worse soon. Clouds in the distance may signal rain, and with it bad road conditions and slower traffic ahead.

Very little of this has to do with the quality of our sensors. Current sensors themselves are probably far beyond what we actually need. It's compute speed (efficiency really) and preemption that give humans an edge, at least when we're paying attention.

svara 4 days ago | parent | prev | next [-]

A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.

Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.

In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
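For scale: a photographic "stop" is a doubling of luminance, so N stops cover a contrast ratio of 2^N, and comparing stop counts understates how lopsided the ratios are. The 45- and 16-stop figures below are the ones from this comment, not independent measurements:

```python
import math

# One stop = one doubling of luminance, so N stops span a 2**N contrast ratio.
def stops_to_ratio(stops):
    return 2.0 ** stops

def ratio_to_stops(ratio):
    return math.log2(ratio)

camera = stops_to_ratio(16)   # 65536:1 contrast ratio
gap = 45 - 16                 # the claimed gap is 29 stops,
factor = stops_to_ratio(gap)  # i.e. a contrast-ratio factor of 2**29 ≈ 5.4e8
```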

the8472 4 days ago | parent [-]

> In a way it's not a fair comparison,

Indeed. And the comparison is unnecessarily unfair.

You're comparing the dynamic range of a single exposure on a camera vs. the adaptive dynamic range in multiple environments for human eyes. Cameras do have comparable features: adjustable exposure times and apertures. Additionally cameras can also sense IR, which might be useful for driving in the dark.

svara 4 days ago | parent | next [-]

Exposure adjustment is constrained by frame rate, that doesn't buy you very much dynamic range.

A system that replicates the human eye's rapid aperture adjustment and integration of images taken at quickly changing aperture/ filter settings is very much not what Tesla is putting in their cars.

But again, the argument is fine in principle. It's just that you can't buy a camera that performs like the human visual system today.

the8472 4 days ago | parent [-]

Human eyes are unlikely to be the only point in parameter space that's sufficient for driving. Cameras can do IR, 360° coverage, higher frame rates, wider stereo separation... but of course nothing says Teslas sit at a good point in that space.

svara 4 days ago | parent [-]

Yes, agreed, but that's a different point - I was reacting to this specifically:

> Humans use only cameras.

Which in this or similar forms is sometimes used to argue that L4/5 Teslas are just a software update away.

the8472 3 days ago | parent [-]

Ah yeah, that's making even more assumptions. Not only does it assume the cameras are powerful enough, but also that there already is enough compute. There's a sensing-power/compute/latency tradeoff. That is, you can get away with poorer sensors if you have more compute that can filter/reconstruct useful information from crappy inputs.

vrighter 2 days ago | parent | prev [-]

"adjustable exposure times and apertures"

That means that to view some things better, you have to accept being completely blind to others. That is not a substitute for dynamic range.

the8472 2 days ago | parent [-]

Yes, and? Human eyes also have a limited instantaneous dynamic range, much smaller than their total dynamic range. Part of the mechanism is the same (pupil vs. camera iris). They can't see starlight during the day, and tunnels need adaptation lighting to ease drivers in/out.

TheOtherHobbes 4 days ago | parent | prev | next [-]

Humans are notoriously bad at driving, especially in poor weather. There are more than 6 million accidents annually in the US, which is >16k a day.

Most are minor, but even so - beating that shouldn't be a high bar.

There is no good reason not to use LIDAR with other sensing technologies, because cameras-only just makes the job harder.

ACCount37 4 days ago | parent [-]

Self-driving cars beat humans on safety already. This holds for Waymos and Teslas both.

They get into fewer accidents, mile for mile and road type for road type, and the ones they do get into trend toward less severe. Why?

Because self-driving cars don't drink and drive.

This is the critical safety edge a machine holds over a human. A top tier human driver in the top shape outperforms this generation of car AIs. But a car AI outperforms the bottom of the barrel human driver - the driver who might be tired, distracted and under influence.

tfourb 4 days ago | parent | next [-]

I trust Tesla's data on this kind of stuff only as far as a Starship can travel on its return trip to Mars. Anything coming from Elon would have to be audited by an independent entity for me to give it an ounce of credence.

Generally, you are comparing apples and oranges if you are comparing the safety records of, e.g., Waymos to that of the general driving population.

Waymos drive under incredibly favorable circumstances. They also will simply stop or fall back on human intervention if they don't know what to do – failing in their fundamental purpose of driving from point A to point B. To actually get comparable data, you'd have to let Waymos or Teslas do the same types of drives that human drivers do, under the same circumstances and without the option of simply stopping when they are unsure, which they simply are not capable of at the moment.

That doesn't mean that this type of technology is useless. Modern self-driving and adjacent tech can make human drivers much safer. I imagine, it would be quite easy to build some AI tech that has a decent success rate in recognizing inebriated drivers and stopping the cars until they have talked to a human to get cleared for driving. I personally love intelligent lane and distance assistance technology (if done well, which Tesla doesn't in my view). Cameras and other assistive technology are incredibly useful when parking even small cars and I'd enjoy letting a computer do every parking maneuver autonomously until the end of my days. The list could go on.

Waymos have cumulatively driven about 100 million miles without a safety driver as of July 2025 (https://fifthlevelconsulting.com/waymos-100-million-autonomo...) over a span of about 5 years. This is such a tiny fraction of miles driven by US (not to speak of worldwide) drivers during that time, that it can't usefully be expressed. And they've driven these miles under some of the most favorable conditions available to current self-driving technology (completely mapped areas, reliable and stable good weather, mostly slow, inner city driving, etc.). And Waymo themselves have repeatedly said that overcoming the limitations of their tech will be incredibly hard and not guaranteed.

yladiz 4 days ago | parent | prev | next [-]

Do you have independent studies to back up your assertion that they are safer per distance than a human driver?

cbrozefsky 4 days ago | parent | prev | next [-]

The data indicates they hold an edge over drunk and incapacitated humans, not humans in general.

davemp 4 days ago | parent | prev [-]

> A top tier human driver in the top shape outperforms this generation of car AIs.

Most non-impaired humans outperform the current gen. The study I saw had FSD at 10x fatalities per mile vs non-impaired drivers.

latexr 4 days ago | parent | prev | next [-]

> Humans use only cameras.

Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural which humans perceive:

https://youtu.be/IQJL3htsDyQ?t=14m34s

ACCount37 4 days ago | parent [-]

This video proves nothing other than "a YouTuber found a funny viral video idea".

Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.

This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.

You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected amount of walls set in the middle of the road with tunnels painted onto them is very close to zero.

latexr 4 days ago | parent | next [-]

Maybe watch the rest of the video. The Tesla, unlike the LiDAR car, also failed the fog and rain tests. The mural was just the last and funniest one.

Let’s also not forget murals like that do exist in real life. And those aren’t foam.

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg...

Additionally, as the other commenter pointed out, trucks often have murals painted on them, either as art or adverts.

https://en.wikipedia.org/wiki/Truck_art_in_South_Asia

https://en.wikipedia.org/wiki/Dekotora

Search for “truck ads” and you’ll find myriad companies offering the service.

paulryanrogers 4 days ago | parent | prev [-]

I've seen semi trucks with scenic views painted on them, both rear and side panels.

lagadu 4 days ago | parent | prev | next [-]

Once computers and AIs can approach even a small fraction of our capacity, then sure, only cameras is fine. It's a shame that our suite of camera-data-processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.

Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.

vrighter 4 days ago | parent | prev | next [-]

which cameras have stereoscopic vision and the dynamic range of an eye?

Even if what you're saying is true, which it's not, cameras are so inferior to eyes it's not even funny

perryizgr8 4 days ago | parent [-]

> which cameras have stereoscopic vision

Any 2 cameras separated by a few inches.

> dynamic range of an eye

Many cameras nowadays match or exceed the eye in dynamic range. Specially if you consider that cameras can vary their exposure from frame to frame, similar to the eye, but much faster.

ACCount37 4 days ago | parent [-]

What's more, the power of depth perception in binocular vision is a function of the distance between the two cameras. The larger that distance is, the further out depth can be estimated.

The human skull only has two eye sockets, and they can only be spaced so far apart. But cars can carry a lot of cameras and maintain a large fixed distance between them.

bayindirh 4 days ago | parent | prev | next [-]

Even though it's false, let's imagine that's true.

Our cameras (also called eyes) have way better dynamic range, focus speed, resolution and movement-detection capabilities, backed by reduced-bandwidth peripheral vision that is also capable of detecting movement.

No camera, including professional/medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.

Dynamic range, focus speed, resolution, FoV and motion detection all still lag behind.

...and that's when we imagine that we only use our eyes.

BuckRogers 4 days ago | parent | prev [-]

Except a car isn’t a human.

That’s the mistake Elon Musk made and the same one you’re making here.

Not to mention that humans driving with cameras only is absolutely pathetic. The amount of accidents that occur that are completely avoidable doesn’t exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple cameras.

ACCount37 4 days ago | parent [-]

This isn't a "mistake". This is the key problem of getting self-driving to work.

Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.

Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.

vrighter 4 days ago | parent | next [-]

If additional sensors improve the AI, then your last statement is categorically untrue. The reason it worked better is that those additional sensors gave it information that was not available in the video stream.

ACCount37 4 days ago | parent [-]

"If."

So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.

4 days ago | parent | next [-]
[deleted]
rootusrootus 4 days ago | parent | prev [-]

In that case we're probably even further from self-driving cars than I'd have guessed. Adding more sensors is a lot cheaper than putting a sufficient amount of compute in a car.

BuckRogers 4 days ago | parent | prev [-]

Multiple things can be true at the same time, you realize. Some problems, such as insufficient AI, can have a larger effect on safety, but more data to work with, and to train on, always wins. You want lidar.

You keep insisting that cameras are good enough, but since safe camera-only autonomous driving has not been achieved yet, it's empirically impossible to say that cameras alone collect enough data.

The minimum setup without lidar would be cameras, radar, ultrasonic, GPS/GNSS + IMU.

Redundancy is key. With lidar, multiple sensors cover each other’s weaknesses. If LiDAR is blinded by fog, radar steps in.
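That fallback idea can be sketched as confidence-weighted fusion: each sensor reports an estimate plus a confidence that drops in conditions known to degrade it. The function, the weights, and the sensor names here are all illustrative assumptions, not any vendor's actual stack:

```python
# Hypothetical sketch of confidence-weighted sensor fusion. Each sensor
# reports (range_m, confidence in [0, 1]); confidence drops in conditions
# that degrade that sensor (e.g. lidar in fog).
def fuse(estimates):
    """Confidence-weighted mean of per-sensor range estimates."""
    total = sum(conf for _, conf in estimates)
    if total == 0:
        return None  # no sensor trusts itself: hand off to a human / stop
    return sum(r * conf for r, conf in estimates) / total

clear = fuse([(42.0, 0.9), (41.5, 0.6), (43.0, 0.5)])  # lidar, radar, camera
fog = fuse([(42.0, 0.1), (41.5, 0.6), (43.0, 0.2)])    # lidar/camera degraded; radar dominates
```

In the fog case the answer is pulled toward the radar reading, which is the redundancy argument in miniature.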

perryizgr8 4 days ago | parent | prev | next [-]

> only cameras, which sounds crazy

Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).

amelius 4 days ago | parent | prev [-]

The nice thing about LiDAR is that you can use it to train a model to simulate a LiDAR based on camera inputs only. And of course to verify how good that model is.
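In sketch form, that supervision signal is just a masked loss: sparse LiDAR returns act as ground truth for a camera-only depth predictor, and held-out returns measure how good the model is. This is a toy illustration of the idea, with made-up numbers and no ML framework; a real pipeline would feed the masked error into training a network:

```python
# Toy sketch: LiDAR depths as ground truth for a camera-only depth model.
# `pred` stands in for a hypothetical network's per-pixel depth output.
def masked_mse(pred, lidar, mask):
    """MSE over only the pixels where a LiDAR return exists."""
    pairs = [(p, l) for p, l, m in zip(pred, lidar, mask) if m]
    return sum((p - l) ** 2 for p, l in pairs) / len(pairs)

pred = [10.0, 20.0, 35.0, 50.0]    # camera model's depth estimates (m)
lidar = [10.5, 0.0, 33.0, 0.0]     # sparse LiDAR returns; 0.0 where no hit
mask = [True, False, True, False]  # which pixels actually have a return
loss = masked_mse(pred, lidar, mask)  # → 2.125; supervise/evaluate only on real returns
```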

mycall 4 days ago | parent [-]

I can't wait until V2X and sensor fusion comes to autonomous vehicles, greatly improving the detailed 3D mapping of LiDAR, the object classification capabilities of cameras, and the all-weather reliability of radar and radio pings.

amanaplanacanal 4 days ago | parent | prev | next [-]

The goalpost will be when you can buy one and drive it anywhere. How many cities are Waymo in now? I think what they are doing is terrific, but each car must cost a fortune.

BoorishBears 4 days ago | parent [-]

The cars aren't expensive by raw cost (low six figures, which is about what an S-class with highway-only L3 costs)

But there is a lot of expenditure relative to each mile being driven.

> The goalpost will be when you can buy one and drive it anywhere.

This won't happen any time soon, so I and millions of other people will continue to derive value from them while you wait for that.

yladiz 4 days ago | parent | next [-]

Low six figures is quite expensive, and unobtainable to a large number of people.

BoorishBears 4 days ago | parent [-]

Not even close.

It's a 2-ton vehicle that can self-drive reliably enough to be roving a city 24/7 without a safety driver.

The measure of expensive for that isn't "can everyone afford it", the fact we can even afford to let anyone ride them is a small wonder.

yladiz 4 days ago | parent [-]

I’m a bit confused. If we’re talking about consumer cars, the end goal is not to rent a car that can drive itself, the end goal is to own a car that can drive itself, and so it doesn’t matter if the car is available for purchase but costs $250,000 because few consumers can afford that, even wealthy ones.

BoorishBears 4 days ago | parent [-]

a) I'm not talking about consumer cars, you are. I said very plainly this level of capability won't reach consumers soon and I stand by that. Some Chinese companies are trying to make it happen in the US but there's too many barriers.

b) If there were a $250,000 car that could drive itself around major cities, even with the geofence, it would sell out as many units as could be produced. That's actually why I tell people to be wary of BOM costs: they don't reflect market forces like supply and demand.

You're also underestimating both how wealthy people and corporations are, and the relative value being provided.

A private driver in a major city can easily clear $100k a year on retainer, and there are people paying it.

yladiz 4 days ago | parent [-]

If you look at the original comment that you replied to, the goalpost was explained clearly:

> The goalpost will be when you can buy one and drive it anywhere.

So let’s just ignore the non-consumer parts entirely to avoid shifting the goalpost. I still stand by the fact that the average (or median) consumer will not be able to afford such an expensive car, and I don’t think it’s controversial to state this given the readily available income data in the US and various other countries. The point isn’t that it exists, Rolls Royce and Maseratis exist, but they are niche and so if self-driving cars will be so expensive to be niche they won’t actually make a real impact on real people, thus the goalpost of general availability to a consumer.

freehorse 4 days ago | parent | prev [-]

> I and millions of other people

People "wait" because of where they live and what they need. Not all people live and just want to travel around SF or wherever these go nowadays.

BoorishBears 4 days ago | parent [-]

Why the scare quotes on wait? There is literally nothing for you to do but wait.

At the end of the day it's not like no one lives in SF, Phoenix, Austin, LA, and Atlanta either. There are millions of people with access to the vehicles and they're doing millions of rides... so acting like it's some great failing of AVs that the current cities are ones with great weather is, frankly, a bit stupid.

It takes 5 seconds to look up the progress that's been made even in the last few years.

saint_yossarian 4 days ago | parent | prev [-]

How many of those rides required human intervention by Waymo's remote operators? From what I can tell they're not sharing that information.

BoorishBears 4 days ago | parent [-]

I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.

So if we're saying how many times would it have crashed without a human: 0.

They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.