ACCount37 6 hours ago

This LIDAR wank annoys me.

If you can train a policy that drives well on cameras, you can get self-driving. If you can't, you're fucked, and no amount of extra sensors will save you.

Self-driving isn't a sensor problem. It always was, is, and always will be an AI problem.

No amount of LIDAR engineering will ever get you a LIDAR that outputs ground truth steering commands. The best you'll ever get is noisy depth estimate speckles that you'll have to massage with, guess what, AI, to get them to do anything of use.

Sensor suite choice is an aside. Camera only 360 coverage? Good enough to move on. The rest of the problem lies with AI.

lateforwork 6 hours ago | parent | next [-]

Even the best AI can't drive without good sensors. Cameras have to guess distance, and they fail when there is insufficient contrast, direct sunlight, and so on. LiDARs don't have to guess distance.
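The "guessing" distinction above can be made concrete with textbook formulas: time-of-flight lidar recovers range directly from the echo delay, while a stereo camera must infer depth from pixel disparity, where a fixed sub-pixel matching error blows up quadratically with distance. A rough sketch, with all numbers (baseline, focal length, pulse timing) chosen purely for illustration:

```python
# Illustrative only: why lidar measures range while stereo cameras infer it.
# All numbers here are textbook approximations, not any vendor's specs.

C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_s: float) -> float:
    """Time-of-flight lidar: distance falls straight out of the echo delay."""
    return C * round_trip_s / 2.0

def stereo_depth(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Stereo depth must be inferred: z = f * b / disparity, so a fixed
    +/-0.5 px matching error turns into a depth error that grows with z^2."""
    return focal_px * baseline_m / disparity_px

# A return pulse arriving 200 ns later puts the target ~30 m away.
print(lidar_range(200e-9))                # ~29.98 m

# Same 30 m target seen by a 12 cm baseline stereo rig (f = 1000 px):
d = 1000 * 0.12 / 30.0                    # true disparity: 4 px
print(stereo_depth(0.12, 1000, d))        # 30.0 m
print(stereo_depth(0.12, 1000, d - 0.5))  # 0.5 px error -> ~34.3 m
```

The asymmetry is the point: the lidar's error is roughly constant in range, while the stereo pair's error compounds with distance.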

slfnflctd 4 hours ago | parent [-]

Cameras also fail when weather conditions cake your car in snow and/or mud while you're driving. Actually, from what I just looked up, this is an issue with LiDAR as well. So it seems to me like we don't even have the sensors we need to do this properly yet, unless we can somehow make them all self-cleaning.

It always goes back to my long standing belief that we need dedicated lanes with roadside RFID tags to really make this self driving thing work well enough.

ACCount37 4 hours ago | parent [-]

Nah. That's a common "thought about it for 15 seconds but not 15 minutes" mistake.

Making a car that drives well on arbitrary roads is freakishly hard. Having to adapt every single road in the world before even a single self-driving car can use them? That's a task that makes the previous one look easy.

A learned sensor-fusion policy that can compensate for partial sensor degradation, detect severe dropout, and handle both safely? Very hard. Getting a world that can't even fix the low-tech potholes on every other road to set up and maintain machine-specific infrastructure everywhere? A nonstarter.
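To make "compensate for degradation, detect dropout" slightly more concrete: even the simplest hand-rolled version of sensor fusion (inverse-variance weighting of two range estimates, with a hard dropout check) already has distinct failure modes to handle. A toy sketch, with made-up function names and noise figures; real systems learn this behavior rather than hand-coding it:

```python
from typing import Optional

def fuse_ranges(cam_m: Optional[float], cam_var: float,
                lidar_m: Optional[float], lidar_var: float) -> Optional[float]:
    """Toy inverse-variance fusion of two range estimates.
    A sensor reporting None is treated as dropped out entirely."""
    readings = [(v, var) for v, var in ((cam_m, cam_var), (lidar_m, lidar_var))
                if v is not None]
    if not readings:
        return None  # severe dropout: no estimate at all, caller must go safe
    # Weight each reading by 1/variance; noisier sensors count for less.
    wsum = sum(1.0 / var for _, var in readings)
    return sum(v / var for v, var in readings) / wsum

# Both sensors alive: fused estimate sits between them, nearer the lidar.
print(fuse_ranges(32.0, 4.0, 30.0, 0.25))   # ~30.1 m
# Camera blinded (sun glare): degrade gracefully to lidar alone.
print(fuse_ranges(None, 4.0, 30.0, 0.25))   # 30.0
# Both gone: signal the planner instead of guessing.
print(fuse_ranges(None, 4.0, None, 0.25))   # None
```

Note this toy assumes each sensor knows its own noise level and honestly reports dropout; the hard part in practice is a sensor that degrades without admitting it.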

slfnflctd 3 hours ago | parent [-]

Well, we already provide dedicated lanes for multi-passenger vehicles in many places, nearly all semi-major airports have dedicated lots and lanes for rideshare drivers, many parts of downtown/urban areas have the same things... and it didn't exactly take super long to roll all that out.

Also, 99% of roads in civilized areas have something alongside them already that you can attach RFID tags to. Quite a bit easier than setting up an EV charging station (another significant infrastructure thing which has rolled out pretty quickly). And let's not forget, every major metro area in the world has multi-lane superhighways which didn't even exist at all 50-70 years ago.

Believe me, I've thought about this for a lot more than 15 minutes. Yes, we should improve sensor reliability, absolutely. But it wouldn't hurt to have some kind of backup roadside positioning help, and I don't see how it would be prohibitively expensive. Maybe I am missing something, but I'm gonna need more than your dismissive comment to be convinced of that.

ACCount37 3 hours ago | parent [-]

You are missing the sheer soul-crushing magnitude of the infrastructure problem. You are missing the little inconvenient truth that we live in a world full of roads that don't even consistently have asphalt on them. That real-life Teslas ship with AI that does vibe-based lane estimation because real-life roads occasionally fail to have any road markings a car AI could see.

Everything about road infrastructure is "cheap to deploy, cheap to maintain". This is your design space: the bare minimum of a "road" that still does its job reasonably well. Gas stations and motels are an aside - they earn money. Not even the road signs pay for themselves.

Now, you propose we design some kind of, let's say, machine-only marker that helps self-driving cars work well. It does nothing for human drivers, who are still the road majority. And then you somehow get every country and every single self-driving car vendor to agree on the spec, both on paper and in practice.

Alright, let's say we've done that. Why would anyone, then, put those on the road? They're not the bare minimum. And if we wanted to go beyond the bare minimum, we'd plug the potholes, paint the markings and fix the road signs first.

slfnflctd an hour ago | parent [-]

You definitely have a point. It would not be rolled out all at once, everywhere. It would happen sporadically, starting with areas that have a higher tax revenue base. There may never be an international standard. There will be tons of places it will never work at all.

All the same, it still reminds me of past infrastructure changes which ended up being widely distributed, with or without standards, from railroads to fiber optic cables.

And this:

> if we wanted to go beyond the bare minimum, we'd plug the potholes, paint the markings and fix the road signs first

...just strikes me as a major logical fallacy. It's like the people who say we shouldn't continue exploring our solar system because we have too many problems on Earth. We will always have problems here, from people starving because of oppressive and unaccountable hierarchies they're stuck under to potholes and road markings the local government is too broke or incompetent to fix. We should work on those, yeah, but we should also be furthering the research and development of technology from every angle we realistically can. It feels weird to be explaining this here.

ActorNightly 6 hours ago | parent | prev | next [-]

You are correct, but the problem is that nobody at Tesla, or any other self-driving company for that matter, knows what they are doing when it comes to AI.

If you are doing an end-to-end driving policy (i.e. the wrong way of doing it), having lidar is important as a correction factor to the cameras.

ACCount37 5 hours ago | parent [-]

So far, end to end seems to be the only way to train complex AI systems that actually works.

Every time you pit the sheer violent force of end to end backpropagation against compartmentalization and lines drawn by humans, at a sufficient scale, backpropagation gets its win.

jasondigitized 2 hours ago | parent | prev | next [-]

Just don't drive up north in the snow and you're good.

top_sigrid 6 hours ago | parent | prev | next [-]

> If you can train a policy that drives well on cameras, you can get self-driving. If you can't, you're fucked, and no amount of extra sensors will save you.

Source: trust me, bro? This statement has no factual basis. And calling the most common approach of every other self-driving developer except Tesla "wank" is not an argument either, just hostility.

ACCount37 6 hours ago | parent [-]

[flagged]

ultrattronic 6 hours ago | parent | next [-]

Yes that’s why having both makes sense.

top_sigrid 6 hours ago | parent | prev [-]

This is so dumb, I don't even know if you are serious. Nobody ever said it is lidar instead of cameras, but as an additional sensor alongside cameras. And everybody seems to agree that that is valuable sensor information (except Tesla).

sejje 5 hours ago | parent [-]

I'm able to drive without lidar, with just my eyeball feeds.

I agree that lidar is very valuable right now, but I think in the endgame, yeah it can drive with just cameras.

The logic follows, because I drive with just "cameras."

senordevnyc 5 hours ago | parent [-]

Yeah, but your "cameras" also have a bunch of capabilities that hardware cameras don't, plus they're mounted on a flexible stalk in the cockpit that can move in any direction to update the view in real-time.

Also, humans kinda suck at driving. I suspect that in the endgame, even if AI can drive with cameras only, we won't want it to. If we could upgrade our eyeballs and brains to have real-time 3D depth mapping information as well as the visual streams, we would.

ACCount37 4 hours ago | parent [-]

What "a bunch of capabilities"?

A complete inability to get true 360 coverage, which the neck has to swivel wildly across windows and mirrors to somewhat compensate for? Being able to get high FoV or high resolution, but never both? An IPD so low that stereo depth estimation unravels beyond 5m, which, in self-driving terms, is point-blank range?

Human vision is a mediocre sensor kit, and the data it gets has to be salvaged in post. The human brain was just doing computational photography before it was cool.
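The IPD claim can be sanity-checked with the standard first-order stereo error formula, where depth error grows with the square of distance. The numbers below (human IPD ~65 mm as baseline, an effective "focal length" of ~1500 px, ~0.25 px of disparity discrimination) are ballpark assumptions for illustration, not values from the physiology literature:

```python
# Rough sanity check on stereo depth from a human-eye-scale baseline.
# All constants are assumed ballpark figures, not measured values.

IPD_M = 0.065             # human interpupillary distance, ~65 mm baseline
FOCAL_PX = 1500.0         # assumed effective focal length in pixels
DISPARITY_ERR_PX = 0.25   # assumed disparity discrimination limit

def depth_error_m(z_m: float) -> float:
    """First-order stereo depth error: dz ~= z^2 * dd / (f * b)."""
    return z_m ** 2 * DISPARITY_ERR_PX / (FOCAL_PX * IPD_M)

for z in (2, 5, 20, 50):
    print(f"{z:>3} m -> +/- {depth_error_m(z):.2f} m")
```

The exact range where stereo stops being useful depends heavily on the assumed disparity threshold, but the quadratic growth is the point: with these numbers the error passes a full meter around 20 m out and is several meters at highway following distances.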

Edman274 3 hours ago | parent [-]

What do you believe the frame rate and resolution of Tesla cameras are? If a human can tell the difference between two virtual reality displays, one with a frame rate of 36hz and a per eye resolution of 1448x1876, and another display with numerically greater values, then the cameras that Tesla uses for self driving are inferior to human eyes. The human eye typically has a resolution from 5 to 15 megapixels in the fovea, and the current, highest definition automotive cameras that Tesla uses just about clears 5 megapixels across the entire field of view. By your criterion, the cameras that Tesla uses today are never high definition. I can physically saccade my eyes by a millimeter here or there and see something that their cameras would never be able to resolve.
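The fovea-vs-camera comparison above comes down to angular resolution, which is simple arithmetic. Assuming a ~5 MP automotive camera spreads roughly 2880 horizontal pixels over a ~120 degree field of view, versus human foveal acuity of about 1 arcminute (~60 resolvable "pixels" per degree); both figures are illustrative assumptions, not Tesla specs:

```python
# Ballpark angular-resolution arithmetic: wide-FoV camera vs. human fovea.
# Assumed figures for illustration only, not any vendor's actual specs.

cam_px_per_deg = 2880 / 120.0  # ~5 MP sensor spread over a ~120 deg FoV
fovea_px_per_deg = 60.0        # ~1 arcmin acuity -> 60 "pixels" per degree

print(cam_px_per_deg)                      # 24.0 px/deg
print(fovea_px_per_deg / cam_px_per_deg)   # fovea resolves ~2.5x finer detail
```

The fovea only covers a couple of degrees, of course, which is exactly why the saccade point matters: the eye trades coverage for a steerable high-resolution patch, while the camera spends its pixels uniformly.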

ACCount37 2 hours ago | parent [-]

Yep, Tesla's approach is 4% "let's build a better sensor system than what humans have" and 96% "let's salvage it in post".

They didn't go for the easy problem, that's for sure. I respect the grind.

Edman274 an hour ago | parent [-]

I can't figure out your position, then. You were saying that human eyes suck and are inferior compared to sensors because human eyes require interpretation by a human brain. You're also saying that if self driving isn't possible with only camera sensors, then no amount of extra sensors will make up for the deficiency.

This came from a side conversation with other parties where one noted that driving is possible with only human eyes, another person said that human eyes are superior to cameras, you disagreed, and then when you're told that the only company which is approaching self driving with cameras alone has cameras with worse visual resolution and worse temporal resolution than human eyes, you're saying you respect the grind because the cameras require processing by a computer.

If I understand correctly, you believe:

1. Driving should be possible with vision alone, because human eyes can do it, and human eyes are inferior to camera sensors and require post processing, so obviously with superior sensors it must be possible.

2. Even if one knows that current automotive camera sensors are not actually superior to human eyes and also require post processing, then that just means that camera-only approaches are the only way forward and you "respect the grind" of a single company trying to make it work.

Is that correct? Okay, maybe that's understandable, but it makes me confused because 1 and 2 contradict each other. Help me out here.

mrexcess 6 hours ago | parent | prev [-]

>Self-driving isn't a sensor problem. It always was, is, and always will be an AI problem.

AI + cameras have relevant limitations that LIDAR-augmented suites don't. You can paint a photorealistic roadway onto a brick wall and AI + cameras will try to drive right through it, dubbed the "Wile E. Coyote" problem.

sejje 5 hours ago | parent [-]

Will humans?