d--b 4 days ago

There is some optics thing that looks cool, but it doesn't say how the image is actually recorded.

Then there is the whole "neural" part. Do these get "enhanced" by a generative AI that fills the blur based on the most statistically likely pixels?

The article is pretty bad.

DCH3416 3 days ago | parent [-]

From what I can tell it's using a neural network to derive an image from the interference patterns of light.

I imagine you could do this with a standard computational model; it would just be very compute-intensive. So I guess it would be 'enhanced' in the same way a JPEG stores an image in a lossy format.
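To make the "standard computational model" vs. "neural" distinction concrete, here's a minimal sketch (the forward model, PSF, and function names are my assumptions, not from the article): a classical reconstruction inverts a known optical forward model, e.g. deconvolving the raw readout with the optic's point-spread function, whereas a neural approach learns that inverse mapping from data.

```python
import numpy as np

# Hypothetical forward model: the optic blurs/scrambles the scene into the
# raw sensor reading via convolution with a known point-spread function (PSF):
#   measurement = scene (*) psf + noise

def wiener_deconvolve(measurement, psf, noise_power=1e-3):
    """Classical (non-neural) reconstruction: invert the convolution in the
    Fourier domain with Wiener regularization so noise doesn't blow up."""
    H = np.fft.fft2(psf, s=measurement.shape)      # optical transfer function
    G = np.fft.fft2(measurement)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Toy example: blur a random "scene" with a wide box PSF, then recover it.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
psf = np.zeros((128, 128))
psf[:9, :9] = 1.0 / 81.0                           # 9x9 box blur as a stand-in PSF
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(measurement, psf)
print(np.abs(recovered - scene).mean())            # small but nonzero error from regularization
```

A "neural" version would replace wiener_deconvolve with a network trained on (measurement, scene) pairs, which is exactly where the "statistically likely pixels" worry comes from: the output then depends on the training data, not just on the physics.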

d--b 3 days ago | parent [-]

My question was more about what it was that records the patterns of light.

DCH3416 15 hours ago | parent [-]

The Nature article shows some sort of CMOS-like sensor with a surface made of pegs, which seem to be conveniently close in size to the wavelengths of visible light. Incoming light passes through some sort of meta-optic, and the sensor presumably captures the resulting diffraction pattern. The sensor data and the meta-optic's known behaviour are then combined and extrapolated to form an image.

It's quite a clever way of designing a "lens", because you can generate an image from practically a flat surface. Of course, the output image is "calculated" rather than formed by bending light through a series of glass lenses.
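One way to picture "calculated instead of bent": a computational camera like this can be calibrated by showing it known test scenes and fitting a reconstruction operator, which is then applied to every new raw readout. The thread above says the real device uses a neural network for that step; the ridge-regression stand-in below is purely illustrative, and every name and size in it is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: a tiny 16x16 "scene" and a 16x16-pixel raw sensor readout.
n = 16 * 16

# Stand-in for the unknown flat optics: some fixed scrambling of scene -> sensor.
optics = rng.normal(size=(n, n)) / np.sqrt(n)

def capture(scene_flat):
    """Simulate one raw readout: scrambled scene plus a little sensor noise."""
    return optics @ scene_flat + 0.01 * rng.normal(size=n)

# Calibration: capture readouts of known random test scenes.
num_cal = 2000
scenes = rng.random((num_cal, n))
readouts = np.stack([capture(s) for s in scenes])

# Fit a linear reconstruction operator R with ridge regression:
#   minimize ||readouts @ R.T - scenes||^2 + lam * ||R||^2
lam = 1e-2
A = readouts.T @ readouts + lam * np.eye(n)
R = np.linalg.solve(A, readouts.T @ scenes).T

# "Calculated" imaging: apply R to a fresh readout of an unseen scene.
new_scene = rng.random(n)
estimate = R @ capture(new_scene)
print(np.abs(estimate - new_scene).mean())   # reconstruction error of the fitted operator
```

The glass-lens equivalent of R is the lens itself: it performs the scene-to-image mapping optically, so no solve step is needed, but you pay for it in thickness.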