public_void 7 hours ago

I used to work at a company that did self-driving. The sensor setup was more complicated than Tesla's (cameras, lidar, etc.), but the fact that FSD can still work on this car despite the cameras being in a different place is really impressive to me. Our sensors were pretty sensitive to accurate calibration, and IIRC any time we tried to move our sensor array to a new car it took a ton of work to reconfigure it so the sensor fusion output still worked.

Induane 6 hours ago | parent [-]

This is one of the real advantages of the (often insulted and/or chastised) vision-only approach to FSD.

People can easily adapt to different vehicles in a similar manner.

brk 4 hours ago | parent | next [-]

Most sensors can be implemented in a way that enables self-calibration.

I'm oversimplifying it here, but the macro process is taking some known attributes and mapping them to what you are observing. For example, if you can detect people, and you know the average height of a person, you can compute where your horizon is, and where you should (or shouldn't) expect to see people in the FOV. You can do this with cameras, lidar, etc. When you have multiple sensors you can do a lot more to have them all sample an object in their own ways and converge on agreement of where they are relative to each other and the object.
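A toy sketch of the idea above, using people of roughly known height as the "known attribute." All the numbers (average height, focal length) are illustrative assumptions, not real calibration values:

```python
# Self-calibration from known object sizes, pinhole-camera style.
# Assumptions (made up for illustration): average person height 1.7 m,
# camera focal length 1000 px, camera mounted near head height so the
# horizon passes close to the tops of standing people.

AVG_PERSON_HEIGHT_M = 1.7
FOCAL_LENGTH_PX = 1000.0

def distance_from_height(bbox_height_px: float) -> float:
    """Pinhole model: distance = focal_length * real_height / pixel_height."""
    return FOCAL_LENGTH_PX * AVG_PERSON_HEIGHT_M / bbox_height_px

def estimate_horizon_row(detections: list[tuple[float, float]]) -> float:
    """Crude horizon estimate from person detections.

    Each detection is (bbox_top_row, bbox_bottom_row) in image pixels.
    With the camera near head height, heads of standing people cluster
    around the horizon, so averaging the top rows gives an estimate
    that improves as more detections accumulate.
    """
    return sum(top for top, _ in detections) / len(detections)

# A person whose bounding box is 200 px tall is ~8.5 m away:
print(distance_from_height(200.0))  # 1000 * 1.7 / 200 = 8.5
```

With multiple sensors, each one can produce its own estimate of the same object's position, and disagreements between those estimates are exactly the signal used to refine the sensors' relative poses.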

amluto 5 hours ago | parent | prev [-]

I’m not sure this has much to do with vision as opposed to fancy self-calibration software. At least a few years ago, Tesla cars would be in self-calibration mode for a while after delivery while they calibrated their cameras. I think the idea is that it’s cheaper to figure out in software where everything is than to calibrate the camera mounts and lenses at the factory.

I see no reason that LiDAR couldn’t participate in a similar algorithm.
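A minimal sketch of how a LiDAR could join the same loop: compare its range measurements against camera-derived distances for the same objects and solve for the unknown mounting offset. One dimension only, and all values are made up for illustration:

```python
# Toy extrinsic self-calibration: estimate a single forward-axis
# translation offset between a LiDAR and a camera by comparing their
# distance estimates for the same detected objects.

def estimate_forward_offset(lidar_ranges: list[float],
                            camera_ranges: list[float]) -> float:
    """Least-squares estimate of a constant forward offset.

    If the LiDAR sits d metres behind the camera, its range to each
    object is the camera's range plus d; averaging the per-object
    differences is the least-squares solution for d.
    """
    diffs = [l - c for l, c in zip(lidar_ranges, camera_ranges)]
    return sum(diffs) / len(diffs)

# Camera estimates 10, 20, 30 m; noisy LiDAR reads 10.4, 20.6, 30.5 m.
offset = estimate_forward_offset([10.4, 20.6, 30.5], [10.0, 20.0, 30.0])
print(round(offset, 2))  # 0.5
```

Real systems solve a full 6-DOF rotation-plus-translation version of this, but the shape of the algorithm is the same: accumulate correspondences, minimize disagreement.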

A bigger issue would be knowing the shape of the car to avoid clipping an obstacle.

omgwtfbyobbq 3 hours ago | parent [-]

It probably could, but I imagine a LiDAR system would need a similarly large amount of training data to enable effective self-calibration across a wide variety of situations.

At some point, with enough sensor suites, we might be able to generalize better and get effective few-shot (or even lower-shot?) self-calibration of new sensor suites.