briandw 5 days ago:
Lidar is the first thing brought up in these discussions. Lidar isn't that great of a sensor. It does one thing well, and that's measure distance. A visual sensor can be measured along the axes of spatial resolution (x, y, z), temporal resolution (fps), and dynamic range (bit depth). You could add things like light frequency, phase, etc. Lidar is quite poor in all of these except the spatial z dimension, measuring distance as mentioned before. Compared to a cheap camera the fps is very low, and the spatial resolution in x and y is pathetic: around 128 lines in the vertical, more in the horizontal, but nowhere near megapixels. Finally, the dynamic range is 1 bit (something is there or not). Lidars use near-infrared and are just as susceptible to problems with natural fog (not the theatrical fog like in that Roper video) and rain. Multiple cameras can do good enough depth estimation with modern neural networks. But cameras are vastly better at making sense of the world. You can't read a sign with lidar.
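For what it's worth, the "good enough depth from multiple cameras" part is easy to play with even without a neural network: a rectified stereo pair plus a block matcher already gives a dense-ish depth map. A minimal sketch with OpenCV's SGBM matcher, where the file names, focal length, and baseline are made-up placeholders rather than anything from a real rig:

    # Classical stereo depth sketch; a learned stereo model would replace the
    # matcher, but the geometry (depth = focal * baseline / disparity) is the same.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
    )

    # compute() returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    focal_length_px = 700.0   # assumed focal length in pixels
    baseline_m = 0.12         # assumed camera separation in meters

    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_length_px * baseline_m / disparity[valid]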
smilekzs 4 days ago (parent):
Lidars have been reporting per-point intensity values for quite a while. The dynamic range is definitely not 1 bit. Many lidar visualization packages will happily pseudocolor the intensity channel for you. Even with a mechanically scanning 64-line lidar you can often read a typical US speed limit sign at ~50 meters in this view.
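To illustrate the pseudocoloring point: assuming the point cloud arrives as an N x 4 array of (x, y, z, intensity) — the file name and normalization below are assumptions, not any particular vendor's format — mapping intensity through a colormap is only a few lines:

    # Pseudocolor a lidar point cloud by per-point intensity.
    # Assumes points.npy holds an (N, 4) float array of x, y, z, intensity.
    import numpy as np
    import matplotlib.pyplot as plt

    points = np.load("points.npy")
    xyz = points[:, :3]
    intensity = points[:, 3]

    # Percentile normalization keeps a few retroreflective hits (signs, plates)
    # from washing out the rest of the scene.
    lo, hi = np.percentile(intensity, [2, 98])
    norm = np.clip((intensity - lo) / (hi - lo + 1e-9), 0.0, 1.0)

    # Bird's-eye view colored by intensity; lane paint and sign faces stand out.
    plt.figure(figsize=(8, 8))
    plt.scatter(xyz[:, 0], xyz[:, 1], c=norm, cmap="viridis", s=0.5)
    plt.gca().set_aspect("equal")
    plt.xlabel("x [m]")
    plt.ylabel("y [m]")
    plt.colorbar(label="normalized intensity")
    plt.show()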