▲ | lukeschlather 5 days ago |
I had taken for granted that the cameras in the Tesla might be equivalent to human vision, but now I'm realizing that's probably laughable. I'm reading it's 8 cameras at 30fps, and it sounds like the car's bus can only process about 36fps total, not the 8x30 = 240fps theoretically available from the cameras if they had a better memory bus. It also seems plausible you would need at least 10,000 FPS to fully match human vision, especially taking into account that humans turn their heads, which in a CV situation could be analogous to the algorithm having 32x30 = 960 FPS available but typically only processing ~140 frames in a given second from cameras pointing in a specific direction. So maybe LIDAR isn't necessary, but if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision, I doubt it would be cheaper than LIDAR to get the same result.
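As a rough back-of-the-envelope sketch of that frame budget (the camera count, 30fps, and ~36fps processed figure are from the comment above; the per-camera resolution and bytes-per-pixel are assumptions for illustration, not Tesla specs):

    # Rough frame-budget arithmetic for the numbers above.
    # Per-camera resolution and pixel size are assumed placeholders.
    CAMERAS = 8
    CAMERA_FPS = 30
    PROCESSED_FPS = 36          # total frames/s the bus reportedly handles
    WIDTH, HEIGHT = 1280, 960   # assumed per-camera resolution
    BYTES_PER_PIXEL = 1.5       # assumed ~YUV420-style raw frames

    available_fps = CAMERAS * CAMERA_FPS                # 240 frames/s offered
    frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
    offered_bw = available_fps * frame_bytes / 1e6      # MB/s cameras produce
    consumed_bw = PROCESSED_FPS * frame_bytes / 1e6     # MB/s actually ingested

    print(f"frames offered:   {available_fps}/s")
    print(f"frames processed: {PROCESSED_FPS}/s "
          f"({PROCESSED_FPS / available_fps:.0%} of what the cameras produce)")
    print(f"bandwidth offered:  ~{offered_bw:.0f} MB/s")
    print(f"bandwidth consumed: ~{consumed_bw:.0f} MB/s")

Under those assumed numbers the cameras offer roughly 7x more frames (and bandwidth) than the pipeline actually consumes.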
▲ | tialaramex 5 days ago | parent | next |
Mostly, human vision is just violently different from a camera, though you could interpret that as a mix of better and worse. One of the ways it's better is that humans can sense individual photons. Not 100% reliably, but pretty well, which is why humans can see faint stars on a dark night without any special tools even though the star is thousands of light years away. On the other hand, our resolution for most of our field of vision is pretty bad. This is compensated for by changing what we're looking at: when we care about details we can just look directly at them, and resolution is best right in the centre of the picture.
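In CV terms that's roughly the foveation trick: keep full resolution only in a small window around the point of interest and heavily downsample the rest. A toy sketch of the idea (the function, window size, and stride here are made up for illustration, not anything a real perception stack is known to do):

    # Toy foveation: coarse "peripheral" view plus a sharp crop at the gaze point.
    import numpy as np

    def foveate(frame: np.ndarray, cx: int, cy: int,
                fovea: int = 128, periphery_stride: int = 8):
        """Return a low-res peripheral view and a full-res crop around (cx, cy)."""
        peripheral = frame[::periphery_stride, ::periphery_stride]  # low-res context
        x0, y0 = max(cx - fovea // 2, 0), max(cy - fovea // 2, 0)
        sharp = frame[y0:y0 + fovea, x0:x0 + fovea]                 # full-res centre
        return peripheral, sharp

    frame = np.zeros((960, 1280, 3), dtype=np.uint8)   # fake camera frame
    ctx, centre = foveate(frame, cx=640, cy=480)
    print(ctx.shape, centre.shape)   # (120, 160, 3) (128, 128, 3)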
▲ | asats 5 days ago | parent | prev |
Also, human vision is backed by general intelligence, which those cameras very much are not.