▲ sh3rl0ck 7 hours ago

There's no mention of SLMs or LLMs, though.

> This work represents a compelling real-world demonstration of “tiny AI” — highly specialised, minimal-footprint neural networks

FPGAs for neural networks have been a thing since before the LLM era.
▲ 7 hours ago | parent | next [-]

[deleted]
▲ 100721 7 hours ago | parent | prev [-]

Huh? The first paragraph literally says they are using LLMs:

> [GENEVA, SWITZERLAND — March 28, 2026] — CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).