| ▲ | Scientists create ultra fast memory using light (isi.edu) |
| 117 points by giuliomagnifico 7 days ago | 29 comments |
| |
|
| ▲ | cycomanic a day ago | parent | next [-] |
People have done these sorts of "optical computing" demonstrations for decades, despite David Miller showing that fundamentally digital computing with optical photons will be immensely power hungry (I say digital here because there are some applications where analog computing can make sense, but it almost never relies on memory for bits). Specifically, this paper is based on simulations, and I've only skimmed it, but the power efficiency numbers sound great because they quote 40 GHz read/write speeds; however, these devices consume comparatively large power even when not reading or writing (the lasers have to be running constantly). I also think they did not include the contributions of the modulation and the required drivers (typically you need quite large voltages)? Somebody already pointed out that the size of these cells is massive, and that's again fundamental. As someone working in the broad field, I really wish people would stop this type of publication. While these numbers might sound impressive at first glance, they really are completely unrealistic. There are lots of legitimate applications of optics and photonics; we don't need to resort to this sort of stuff. |
| |
| ▲ | embedding-shape a day ago | parent | next [-] | | > showing that fundamentally digital computing with optical photons will be immensely power hungry
>
> they really are completely unrealistic
Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches we couldn't actually run in the 80s/90s because of the hardware resources required, but which work fine today. Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe, with these results, more people will be inspired to create solutions specifically targeting the power usage? "we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it? | | |
| ▲ | cycomanic 3 hours ago | parent | next [-] | | > > showing that fundamentally digital computing with optical photons will be immensely power hungry
>
> > they really are completely unrealistic
>
> Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches we couldn't actually run in the 80s/90s because of the hardware resources required, but which work fine today.
>
> Maybe even if the solutions aren't useful today, they could be useful in the future? Or maybe, with these results, more people will be inspired to create solutions specifically targeting the power usage?
No, they are fundamentally power hungry, because you essentially need a nonlinear response, i.e. photons need to interact with each other. However, photons are bosons and really dislike interacting with each other. The same goes for the size of the circuits: it is determined by the wavelength of light, so fundamentally they are much larger than electronic circuits.
> "we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?
That's not what I said; in fact they deserve my attention because they need to be called out, as the article clearly does not highlight the limitations. | |
| ▲ | gsf_emergency_6 18 hours ago | parent | prev | next [-] | | The Miller limit is fundamentally due to photons being bosons: fine for carrying info, not great for digital logic (switches). There are promising avenues for using "bosonic" nonlinearity to overtake traditional fermionic computing, but they are basically not being explored by EE departments, despite (because of?) their oversized funding and attention | |
| ▲ | scarmig 4 hours ago | parent | prev [-] | | Universities believe that constantly putting out pieces that sound like some research is revolutionary and will change everything increases public support of science. It doesn't, because the vast majority of science is incremental and mostly learning about some weird, niche thing that probably won't translate into applications. This causes the public to misunderstand the role of scientific research and lose faith in it when it doesn't deliver on its promises (made by the university press office, not the researcher). |
| |
| ▲ | gsf_emergency_6 18 hours ago | parent | prev | next [-] | | I only upvoted to send a msg to the moderators not to upweight uni/company press releases :) Sadly the energy of VC culture goes into refining wolf-crying, despite all the talk of due diligence, "thinking for yourself" and "understanding value". The core section of the paper (linked below) is pp. 8-9. 2 mW for 100s of picoseconds is huge. (Also GIANT voltages, if only to illustrate how coarse their simulations are):
> As shown in Figure 6(a), even with up to 1 V of noise on each Q and QB node (resulting in a 2 V differential between Q and QB), the pSRAM bitcell successfully regenerates the previously stored data. It is important to note that higher noise voltages increase the time required to restore the original state, but the bitcell continues to function correctly due to its regenerative behavior. | |
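For scale, a minimal back-of-envelope on that 2 mW / ~100 ps figure; the ~1 fJ/bit SRAM comparison point is a rough ballpark I'm assuming, not a number from the paper:

    # Rough energy-per-access sketch using the figures quoted above
    static_power_w = 2e-3       # ~2 mW bitcell power quoted from the paper
    access_time_s = 100e-12     # ~100 ps access/regeneration window quoted above
    psram_energy_j = static_power_w * access_time_s     # ~2e-13 J = 0.2 pJ per access
    sram_energy_j = 1e-15       # ~1 fJ/bit, rough ballpark for a 45 nm SRAM (assumption)
    print(f"pSRAM: ~{psram_energy_j*1e12:.1f} pJ per access")                # ~0.2 pJ
    print(f"vs. SRAM ballpark: ~{psram_energy_j/sram_energy_j:.0f}x higher")  # ~200x

And as the parent comment points out, that still leaves out the continuously running lasers and the modulator drivers, which burn power whether or not a bit is being accessed.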
| ▲ | fooker 11 hours ago | parent | prev [-] | | 40 GHz memory/compute for 10-100x the power sounds like a great idea to me. We are going to have abundant energy at some point. |
|
|
| ▲ | adrian_b a day ago | parent | prev | next [-] |
| Free version of the research paper: https://arxiv.org/abs/2503.19544v1 The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed. There are important applications for a very high speed small memory, e.g. for digital signal processing in radars and other such devices, but this will never replace a general-purpose computer memory, where much higher bit densities are needed. |
| |
| ▲ | jdub 17 hours ago | parent | next [-] | | Careful about "never"… individual transistors used to be large, heavy, power hungry, and expensive. | | |
| ▲ | fsh 13 hours ago | parent [-] | | That's not true. Transistors were commercialized a few years after their invention, and already the first generation vastly outperformed vacuum tubes in size, weight, and power. Optical computing has been done for a few decades now with very little progress. | | |
| ▲ | jdub 6 hours ago | parent [-] | | (I was being a little facetious – vacuum tubes being the original "transistors".) |
|
| |
| ▲ | ElectricalUnion 10 hours ago | parent | prev | next [-] | | I might have done the math wrong, but is this really supposed to be 330 × 290 µm² per bit × 128 GiB × 8 ≈ 10^5 m² big? And this is the RAM one expects per node cluster element for current LLM AI, never mind future AGI. | |
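A quick sketch of that arithmetic, using the per-bitcell footprint from the paper and treating the 128 GiB figure purely as the hypothetical capacity from the comment above:

    # Naive array area if every bit cost one 330 x 290 um^2 photonic bitcell
    cell_area_m2 = 330e-6 * 290e-6      # ~9.6e-8 m^2 per bitcell (from the paper)
    bits = 128 * 2**30 * 8              # 128 GiB expressed in bits (hypothetical capacity)
    total_area_m2 = cell_area_m2 * bits
    print(f"~{total_area_m2:.1e} m^2")  # ~1.1e5 m^2, roughly ten hectares of silicon

Peripherals, lasers, and wiring would only add to that, which is why other comments here treat this as a niche fast buffer rather than bulk memory.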
| ▲ | aj7 18 hours ago | parent | prev [-] | | This is a hugely important point. The de Broglie wavelength of the photon is hundreds to thousands of nm. There is no possibility of VLSI scale-up, a point conveniently omitted in hundreds of decks and at least $1B in investment. Photonic techniques will remain essentially a part of the analog palette in system design. |
|
|
| ▲ | ilaksh a day ago | parent | prev | next [-] |
MRAM and MRAM-CIM are like 10 years ahead of this and going to make a huge impact on efficiency and performance in the next few years, right? Or so I thought I heard. Memristors are also probably coming after MRAM-CIM and before photonic computing. |
|
| ▲ | cs702 a day ago | parent | prev | next [-] |
| Cool. Memory bandwidth is a major bottleneck for many important applications today, including AI. Maybe this kind of memory "at the speed of light" can help alleviate the bottleneck? For a second, I thought the headline was copied & pasted from the hallucinated 10-years-from-now HN frontpage that recently made the HN front page: https://news.ycombinator.com/item?id=46205632 |
|
| ▲ | lebuffon a day ago | parent | prev | next [-] |
| Wow 300mm chips. They must be huge! (I am sure they meant nm, but nobody is checking the AI output) |
| |
| ▲ | KK7NIL a day ago | parent | next [-] | | It almost certainly refers to 300 mm wafers, which are the largest size used right now. They offer significantly better economics than the older 200 mm wafers or lab experiments done on even smaller (e.g. 100 mm) wafers. The text in the article supports this: > This is a commercial 300mm monolithic silicon photonics platform, meaning the technology is ready to scale today, rather than being limited to laboratory experiments. | |
| ▲ | vlovich123 a day ago | parent | prev [-] | | From the paper:
> footprint of 330 × 290 µm² using the GlobalFoundries 45SPCLO
That’s a 45 nm process, but the units for the chip size probably should have been 330 µm? However, I’m not well versed enough in the details to parse it out. https://arxiv.org/abs/2503.19544 | | |
| ▲ | bgnn a day ago | parent [-] | | I'm very familiar with this process as I use it regularly. The area is massive: 330 µm × 290 µm are the X and Y dimensions, so the area is roughly 0.1 mm². You can see the comparison in Table 1; this is roughly 50000 times larger than an SRAM cell in a 45 nm process. This is the problem with photonic circuits: they are massive compared to electronics. | |
| ▲ | pezezin 20 hours ago | parent | next [-] | | Would it be possible to use something similar to DWDM to store/process multiple bits in parallel in the same circuit? | | |
| ▲ | bgnn 7 hours ago | parent [-] | | Unfortunately it isn't, as the physical size of the resonators needs to match a given wavelength. So for each wavelength you need a new circuit in parallel. |
| |
| ▲ | bun_at_work a day ago | parent | prev [-] | | Is it prohibitively larger? And is the size a fundamental constraint of the technology, or is it possible to reduce the size? | | |
| ▲ | adrian_b a day ago | parent | next [-] | | The size is a fundamental constraint of optical technologies, because it is related to the wavelength of light, which is much bigger than the sizes of semiconductor devices. This is why modern semiconductor devices no longer use lithography with visible light or even with near ultraviolet, but they must use extreme ultraviolet. The advantage of such optical devices is speed and low power consumption in the optical device itself (ignoring the power consumption of lasers, which might be shared by many devices). Such memories have special purposes in various instruments, they are not suitable as computer memories. | |
| ▲ | bgnn 6 hours ago | parent | prev [-] | | The previous reply is correct. To give a feeling: micro-ring resonators are anywhere between 10 and 40 micrometers in diameter. You also need a bunch of other waveguides. The process in the paper uses silicon waveguides, with a 400 nm width if I'm not wrong. So optical features unfortunately don't shrink as much as CMOS technology does. Fun fact: photolithography has the same limitation. They use all kinds of tricks (different optical effects to shrink the features) but are fundamentally limited by the wavelength used. This is why we are seeing a push to lower and lower wavelengths by ASML. That + multiple patterning helps to scale CMOS down. |
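To put rough numbers on the wavelength limit, a minimal sketch assuming operation around the 1550 nm telecom band and silicon's refractive index of roughly 3.5 (the paper's exact operating wavelength and effective index may differ):

    # Why photonic feature sizes bottom out near the optical wavelength
    wavelength_nm = 1550                      # assumed telecom C-band operating wavelength
    n_si = 3.48                               # refractive index of silicon near 1550 nm
    in_material_nm = wavelength_nm / n_si     # ~445 nm: wavelength inside the silicon core
    cmos_node_nm = 45                         # electronic node of the 45SPCLO process
    print(f"wavelength in silicon: ~{in_material_nm:.0f} nm")
    print(f"linear gap vs. 45 nm CMOS: ~{in_material_nm/cmos_node_nm:.0f}x")  # ~10x linear, ~100x in area

That ~445 nm scale is consistent with the ~400 nm waveguide width mentioned above, before even counting the 10-40 µm ring resonator diameters.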
|
|
|
|
|
| ▲ | xienze a day ago | parent | prev | next [-] |
| This just in, OpenAI has already committed to buying the entire world’s supply once it becomes available. |
| |
| ▲ | AlOwain 12 hours ago | parent [-] | | I am not much into AI, but more demand for faster memory is good for all of us. Even if, in the short term, prices increase. |
|
|
| ▲ | moi2388 14 hours ago | parent | prev [-] |
| “ This represents more than a laboratory proof-of-concept; it’s a functional component manufactured using industry-standard processes.” Nice AI text again |
| |
| ▲ | in_a_hole 10 hours ago | parent | next [-] | | Is it possible that this is actually just a very common writing pattern used by actual humans and that's the reason AI uses it so much? | |
| ▲ | IntrepidPig 13 hours ago | parent | prev [-] | | God it infuriates me |
|