brk | 4 hours ago

Most sensors can be implemented in a way that enables self-calibration. I'm oversimplifying it here, but the macro process is taking some known attributes and mapping them to what you are observing. For example, if you can detect people, and you know the average height of a person, you can compute where your horizon is, and where you should (or shouldn't) expect to see people in the FOV. You can do this with cameras, lidar, etc.

When you have multiple sensors you can do a lot more: have them all sample an object in their own ways and converge on agreement about where they are relative to each other and to the object.
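To make the person-height trick concrete, here is a rough sketch of one way it can work (my own illustration, not something brk spelled out): under a pinhole model with roughly flat ground, a person of height H whose feet and head land on image rows y_feet and y_head satisfies h_cam * (y_feet - y_head) = H * (y_feet - y_horizon), so a batch of pedestrian detections gives a small least-squares problem for the camera height and the horizon row.

    # Hedged sketch: recover the horizon row (and camera height) from pedestrian
    # detections, assuming a pinhole camera, roughly flat ground, upright people,
    # and an average-height prior. All names here are illustrative.
    import numpy as np

    AVG_PERSON_HEIGHT_M = 1.7  # assumed prior

    def estimate_horizon(detections):
        """detections: iterable of (y_head, y_feet) image rows, one per person.
        Returns (camera_height_m, horizon_row) as a least-squares estimate."""
        A, b = [], []
        for y_head, y_feet in detections:
            # h_cam * (y_feet - y_head) + H * y_horizon = H * y_feet
            A.append([y_feet - y_head, AVG_PERSON_HEIGHT_M])
            b.append(AVG_PERSON_HEIGHT_M * y_feet)
        (h_cam, y_horizon), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return h_cam, y_horizon

    # Synthetic detections consistent with a camera ~1.5 m high and the horizon
    # near image row 400; real detections are noisy, so you would want many of
    # them and something robust (e.g. RANSAC) on top.
    print(estimate_horizon([(384.0, 520.0), (393.3, 450.0), (366.7, 650.0)]))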
amluto | 5 hours ago

I’m not sure this has much to do with vision as opposed to fancy self-calibration software. At least a few years ago, Tesla cars would be in self-calibration mode for a while after delivery while they calibrated their cameras. I think the idea is that it’s cheaper to figure out in software where everything is than to calibrate the camera mounts and lenses at the factory.

I see no reason that LiDAR couldn’t participate in a similar algorithm. A bigger issue would be knowing the shape of the car to avoid clipping an obstacle.
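For the LiDAR-joining-in part, one version of how it could work (just a sketch of a standard technique, not anything Tesla-specific): if the cameras and the lidar can both localize the same landmarks, the rigid transform between the two sensor frames falls out of a Kabsch / orthogonal Procrustes fit, which can keep being refined as the car drives.

    # Hedged sketch: estimate lidar-to-camera extrinsics from corresponding 3D
    # landmarks seen by both sensors (e.g., clustered lidar returns vs.
    # triangulated camera detections). Standard Kabsch algorithm; the names are
    # illustrative, not any vendor's API.
    import numpy as np

    def kabsch(src, dst):
        """Least-squares rigid transform mapping src points onto dst points.
        src, dst: (N, 3) arrays of corresponding points in the two sensor frames.
        Returns (R, t) such that dst ≈ src @ R.T + t."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Usage (hypothetical arrays, N >= 3 non-collinear correspondences):
    #   R, t = kabsch(landmarks_in_lidar_frame, landmarks_in_camera_frame)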
omgwtfbyobbq | 3 hours ago (replying to amluto)

It probably could, but I imagine a LIDAR system would need a similarly large amount of training data to enable effective self-calibration across a wide variety of situations. At some point, with enough sensor suites, we might be able to generalize better and have effective lower(?)-shot training for self-calibration of sensor suites.
|
|