| ▲ | bhaney 4 days ago |
> produce full-color images that are equal in quality to those produced by conventional cameras

I was really skeptical of this since the article conveniently doesn't include any photos taken by the nano-camera, but there are examples [1] in the original paper that are pretty impressive.

[1] https://www.nature.com/articles/s41467-021-26443-0/figures/2
|
| ▲ | roelschroeven 4 days ago | parent | next [-] |
Those images are certainly impressive, but I don't agree with the statement "equal in quality to those produced by conventional cameras": they're quite obviously lacking in sharpness and color.
| |
| ▲ | neom 4 days ago | parent | next [-] | | Conventional ultra-thin lens cameras are mostly endoscopes, so it's up against this: https://www.endoscopy-campus.com/wp-content/uploads/Neuroend... | | | |
| ▲ | queuebert 3 days ago | parent | prev | next [-] | | Tiny cameras will always be limited in aperture, so low light and depth of field will be a challenge. | |
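For a sense of scale, here is a rough back-of-the-envelope comparison in Python. Every number is an assumption chosen for illustration (a ~0.5 mm meta-optic aperture versus a ~2 mm phone-camera aperture), not a figure taken from the paper:

    # All numbers are illustrative assumptions, not figures from the paper.
    d_meta = 0.5e-3      # aperture diameter of a sub-millimetre meta-optic (m)
    d_phone = 2.0e-3     # rough smartphone main-camera aperture diameter (m)
    wavelength = 550e-9  # green light (m)

    # Light gathering scales with aperture area.
    area_ratio = (d_phone / d_meta) ** 2
    print(f"phone lens collects roughly {area_ratio:.0f}x more light")

    # Diffraction-limited angular resolution (Rayleigh criterion).
    for name, d in (("meta-optic", d_meta), ("phone lens", d_phone)):
        print(f"{name}: ~{1.22 * wavelength / d * 1e3:.2f} mrad diffraction limit")

The area ratio is what drives the low-light gap; the diffraction numbers show why resolution also gets harder to come by as the aperture shrinks.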
| ▲ | card_zero 4 days ago | parent | prev [-] | | I wonder how they took pictures with four different cameras from the exact same position at the exact same point in time. Maybe the chameleon was staying very still, and maybe the flowers were indoors and that's why they didn't move in the breeze, and they used a special rock-solid mount that kept all four cameras perfectly aligned with microscopic precision. Or maybe these aren't genuine demonstrations, just mock-ups, and they didn't even really have a chameleon. | |
| ▲ | derefr 4 days ago | parent | next [-] | | They didn't really have a chameleon. See "Experimental setup" in the linked paper [emphasis mine]:

> After fabrication of the meta-optic, we account for fabrication error by performing a PSF calibration step. This is accomplished by using an optical relay system to image a pinhole illuminated by fiber-coupled LEDs. We then conduct imaging experiments by replacing the pinhole with an OLED monitor. The OLED monitor is used to display images that will be captured by our nano-optic imager.

But shooting a real chameleon is irrelevant to what they're trying to demonstrate here. At the scales they're working at ("nano-optics"), there's no travel distance for chromatic distortion to take place within the lens. Therefore, whether they're shooting a 3D scene (a chameleon) or a 2D scene (an OLED monitor showing a picture of a chameleon), the light that makes it through their tiny lens to hit the sensor is going to be the same.

(That's the intuitive explanation, at least; the technical explanation is a bit stranger, as the lens is sub-wavelength, shaped into structures that act as antennae for specific light frequencies. You might say that all the lens is doing is chromatic distortion, but in a very controlled manner, "funnelling" each frequency of inbound light to a specific part of the sensor, somewhat like a MIMO antenna "funnels" each frequency band of signal to a specific ADC+DSP. Which amounts to the same thing: this lens doesn't "see" any difference between 3D scenes and 2D images of those scenes.) | |
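To make the PSF argument concrete, here is a minimal sketch of the image-formation model that calibration step implies. It assumes a single, spatially invariant PSF, which is a simplification of what the paper actually measures, and the function name is illustrative rather than taken from any released code:

    import numpy as np
    from scipy.signal import fftconvolve

    def simulate_capture(scene, psf, noise_sigma=0.01, seed=0):
        """Image formation once the PSF is known: capture = scene (*) PSF + noise.
        `scene` is just the incident 2D intensity pattern -- this model can't tell
        whether it came from a real object or from an OLED monitor displaying one.
        (Single, spatially invariant PSF assumed; the real calibration is
        per-wavelength and more involved.)"""
        rng = np.random.default_rng(seed)
        blurred = fftconvolve(scene, psf, mode="same")
        return blurred + rng.normal(0.0, noise_sigma, scene.shape)

    # Calibration, conceptually: image a pinhole (approximately a delta function),
    # and the capture you record *is* the PSF.

Under that model the lens and sensor only ever see a 2D incident intensity pattern, so a monitor showing a chameleon and a real chameleon at the same apparent size are interchangeable inputs.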
| ▲ | gcanyon 4 days ago | parent | prev | next [-] | | Given the size of their camera, you could glue it to the center of another camera’s lens with relatively insignificant effect on the larger camera’s performance. | |
| ▲ | cliffy 4 days ago | parent | prev [-] | | Camera rigs exist for this exact reason. | | |
| ▲ | dylan604 4 days ago | parent [-] | | What happens when you stray too far from trusting what you see/read/hear on the internet? Simple logic gets tossed out like the baby with the bathwater. Now, here's the rig I'd love to see with this: take a hundred of them and position them like a bug's eye, and see what could be done with that. There'd be so much overlapping coverage that 3D would be possible, yet the parallax would be so small that it makes me wonder how much depth would actually be discernible. |
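Rough stereo arithmetic for the bug's-eye idea, with every number below an assumption picked for illustration: depth uncertainty grows roughly as the squared distance times the disparity error, divided by baseline times focal length in pixels, so a few-millimetre baseline only resolves depth at very close range.

    # Rough stereo depth-resolution estimate; every number is an assumption.
    baseline = 5e-3       # 5 mm spread across the "bug's eye" array (m)
    focal_px = 800        # focal length expressed in pixels (assumed optics/sensor)
    disparity_err = 0.25  # plausible sub-pixel matching error (pixels)

    for z in (0.05, 0.2, 1.0, 5.0):  # object distance (m)
        dz = z ** 2 * disparity_err / (baseline * focal_px)
        print(f"at {z:4.2f} m: depth uncertainty ~ {dz * 100:.2f} cm")

By this estimate, useful depth discrimination would mostly be limited to objects within tens of centimetres, i.e. a macro-range depth sensor.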
|
|
|
|
| ▲ | Intralexical 4 days ago | parent | prev | next [-] |
> Ultrathin meta-optics utilize subwavelength nano-antennas to modulate incident light with greater design freedom and space-bandwidth product over conventional diffractive optical elements (DOEs).

Is this basically a visible-wavelength beamsteering phased array?
| |
| ▲ | itishappy 4 days ago | parent [-] | | Yup. It's also passive. The nanostructures act like delay lines. | | |
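A small sketch of that "passive phased array" picture, using the textbook hyperbolic phase profile for an ideal metalens. The actual design in the paper is optimized end-to-end together with the reconstruction network rather than following this closed form, and the aperture and focal-length numbers below are assumptions:

    import numpy as np

    def metalens_phase(x, y, focal_length, wavelength):
        """Ideal hyperbolic phase profile: the fixed delay each nano-antenna at
        (x, y) must impose so that all paths arrive at the focal point in phase.
        phi(x, y) = -(2*pi/wavelength) * (sqrt(x**2 + y**2 + f**2) - f)"""
        return -(2 * np.pi / wavelength) * (
            np.sqrt(x ** 2 + y ** 2 + focal_length ** 2) - focal_length
        )

    # Illustrative phase map over an assumed 0.5 mm aperture, 1 mm focal length.
    coords = np.linspace(-0.25e-3, 0.25e-3, 501)
    X, Y = np.meshgrid(coords, coords)
    phase = metalens_phase(X, Y, focal_length=1e-3, wavelength=550e-9) % (2 * np.pi)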
| ▲ | mrec 4 days ago | parent [-] | | Interesting. This idea appears pretty much exactly at the end of Bob Shaw's 1972 SFnal collection Other Days, Other Eyes. The starting premise is the invention of "slow glass" that looks like an irrelevant gimmick but ends up revolutionizing all sorts of things, and the final bits envisage a disturbing surveillance society with these tiny passive cameras spread everywhere. It's a good read; I don't think the extrapolation of one technical advance has ever been done better. | | |
|
|
|
| ▲ | baxtr 4 days ago | parent | prev | next [-] |
| Also interesting: the paper is from 2021. |
|
| ▲ | andrepd 4 days ago | parent | prev [-] |
How does this work? If it's just reconstructing the images with a neural network, à la Samsung pasting a picture of the moon when it detected a white disc in the image, it's not very impressive.
| |
| ▲ | nateroling 4 days ago | parent [-] | | I had the same thought, but it sounds like this operates at a much lower level than that kind of thing:

> Then, a physics-based neural network was used to process the images captured by the meta-optics camera. Because the neural network was trained on metasurface physics, it can remove aberrations produced by the camera. | |
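For intuition, the classical (non-learned) version of "removing aberrations produced by the camera" is deconvolution against the calibrated PSF. The paper's pipeline uses a physics-informed neural network rather than a fixed filter, so treat this Wiener-filter sketch as a baseline illustration, not their method:

    import numpy as np

    def wiener_deconvolve(captured, psf, noise_to_signal=1e-2):
        """Classical stand-in for the learned reconstruction: invert the known
        blur in the frequency domain, regularized so that frequencies dominated
        by noise are not blown up. `psf` is the calibrated point-spread function,
        same shape as `captured` and centered."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(np.fft.fft2(captured) * G))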
| ▲ | Intralexical 4 days ago | parent [-] | | I'd like to see some examples showing how it does when taking a picture of completely random fractal noise. That should show it's not just trained to reconstruct known image patterns. Generally it's probably wise to be skeptical of anything that appears to get around the diffraction limit. | | |
| ▲ | brookst 4 days ago | parent [-] | | I believe the claim is that the NN is trained to reconstruct pixels, not images. As in so many areas, the diffraction limit is probabilistic, so combining information from multiple overlapping samples and NNs trained on known diffracted -> accurate pairs may well recover information. You’re right that it might fail on noise with resolution fine enough to break assumptions from the NN training set. But that’s not a super common application for cameras, and traditional cameras have their own limitations. Not saying we shouldn’t be skeptical, just that there is a plausible mechanism here. | |
| ▲ | Intralexical 4 days ago | parent | next [-] | | My concern would be that if it can't produce accurate results on a random noise test, then how do we trust that it actually produces accurate results (as opposed to merely plausible results) on normal images? Multilevel fractal noise specifically would give an indication of how fine you can go. | | |
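If anyone wants to try that test, multi-octave ("1/f"-style) fractal noise is easy to synthesize; the recipe below is a generic sketch, not anything from the paper:

    import numpy as np
    from scipy.ndimage import zoom

    def fractal_noise(size=512, octaves=6, persistence=0.5, seed=0):
        """Multi-octave ('1/f'-style) noise: sum random grids at doubling
        resolutions so the result has structure across many spatial scales."""
        rng = np.random.default_rng(seed)
        out = np.zeros((size, size))
        for o in range(octaves):
            grid = rng.random((2 ** (o + 2), 2 ** (o + 2)))
            out += persistence ** o * zoom(grid, size / grid.shape[0], order=1)
        return (out - out.min()) / (out.max() - out.min())

Photograph a print or display of that and compare against the known ground truth; any hallucinated structure from the reconstruction network should stand out.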
| ▲ | brookst 3 days ago | parent [-] | | "Accurate results" gets you into the "what even is a photo" territory. Do today's cameras, with their huge technology stack, produce accurate results? With sharpening and color correction and all of that, probably not. I agree that measuring against such a test would be interesting, but I'm not sure it's possible or desirable for any camera tech to produce an objectively "true" pixel by pixel value. This new approach may fail/cheat in different ways, which is interesting but not disqualifying to me. |
| |
| ▲ | neom 4 days ago | parent | prev [-] | | We've had very good chromatic aberration correction since I got a degree in imaging technology over 20 years ago, so I'd imagine it's not particularly difficult for whichever flavour of ML you care to name. |
|
|
|
|