tzs 2 hours ago

Tesla claims that their cameras use "photon counting" and that this lets them see well in the dark, in fog, in heavy rain, and when facing bright lights like the sun.

Photon counting is a real thing [1] but that's not what Tesla claims to be doing.

I cannot tell if what they are doing is something actually effective that they should have called something other than "photon counting" or just the usual Musk exaggerations. Anyone here familiar with the relevant fields who can say which it is?

Here's what they claim, as summarized by whatever it is Google uses for their "AI Overview".

> Tesla photon counting is an advanced, raw-data approach to camera imaging for Autopilot and Full Self-Driving (FSD), where sensors detect and count individual light particles (photons) rather than processing aggregate image intensity. By removing traditional image processing filters and directly passing raw pixel data to neural networks, Tesla improves dynamic range, enabling better vision in low light and high-contrast scenarios.

It says these are the key aspects:

> Direct Data Processing: Instead of relying on image signal processors (ISPs) to create a human-friendly picture, Tesla feeds raw sensor data directly into the neural network, allowing the system to detect subtle light variations and near-IR (infrared) light.

> Improved Dynamic Range: This approach allows the system to see in the dark exceptionally well by not losing information to standard image compression or exposure adjustments.

> Increased Sensitivity: By operating at the single-photon level, the system achieves a higher signal-to-noise ratio, effectively "seeing in the dark".

> Elimination of Exposure Limitations: The technique helps mitigate issues like sun glare, allowing for better visibility in extreme lighting conditions.

> Neural Network Training: The raw, unfiltered data is used to train Tesla's neural networks, allowing for more robust, high-fidelity perception in complex, real-world driving environments.
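The "Improved Dynamic Range" claim above is easier to judge with numbers. Here is a minimal sketch of the information loss it alludes to, assuming a 12-bit linear sensor and a naive 8-bit output; real ISPs apply gamma and tone curves rather than plain truncation, but any fixed 8-bit encoding must merge some raw levels somewhere:

```python
# Simplified illustration, not Tesla's actual pipeline: truncating a
# 12-bit linear photon count to an 8-bit image drops the low 4 bits,
# so distinct raw values collapse into the same display code.

def to_8bit(raw_count: int) -> int:
    """Naive linear quantization of a 12-bit raw value (0..4095) to 8 bits."""
    return raw_count >> 4  # 4096 levels -> 256 levels

# Two dim pixels the raw counts still distinguish...
print(to_8bit(3), to_8bit(4))        # both quantize to 0

# ...and two near-saturation pixels likewise merge.
print(to_8bit(4089), to_8bit(4095))  # both quantize to 255
```

Feeding raw counts to the network skips that lossy step; whether doing so deserves the name "photon counting" is the question above.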

[1] https://en.wikipedia.org/wiki/Photon_counting

iknowstuff 2 hours ago | parent

All the sensor has to do is count how many photons hit each pixel over, e.g., 1/24th of a second (a long exposure) and 1/10000th of a second (a short exposure). Those two values per pixel yield an enormous dynamic range and can be fed straight into the neural net.
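The scheme in that comment can be sketched as follows. The two exposure times come from the comment; the saturation threshold and the pixel values are made-up assumptions for illustration:

```python
# Hypothetical dual-exposure merge: keep two photon counts per pixel
# (long and short exposure) and estimate a photon arrival rate from
# whichever exposure is not saturated.

FULL_WELL = 4095     # assumed count at which a pixel saturates
T_LONG = 1 / 24      # long exposure, seconds (from the comment above)
T_SHORT = 1 / 10000  # short exposure, seconds (from the comment above)

def photon_rate(count_long: int, count_short: int) -> float:
    """Estimated photons/second for one pixel.

    Prefer the long exposure (better signal-to-noise in the dark);
    fall back to the short one when the long exposure clips.
    """
    if count_long < FULL_WELL:
        return count_long / T_LONG
    return count_short / T_SHORT

# Dim pixel: the long exposure collects a usable count.
print(photon_rate(count_long=120, count_short=0))      # ~2880 photons/s

# Pixel looking into a headlight: long exposure clips, short one doesn't.
print(photon_rate(count_long=4095, count_short=3000))  # ~30 million photons/s
```

Under these assumed numbers the representable range runs from about 24 photons/s (a single count in the long exposure) up to roughly 41 million (a full well in the short exposure), a ratio on the order of 2^20, which is the "enormous dynamic range" being claimed.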