Using these tiny floats for classic engineering workloads: if I were starting grad school right now, that's what I'd look at. Mixed-precision numerical algorithms are already an active topic, but with the glut of low-precision hardware there will be plenty of room to push further.
So it's in a nice spot: there's some scaffolding to build on, the field isn't played out, and the new hardware is likely to make things possible that weren't before.
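A standard example of this kind of mixed-precision algorithm is iterative refinement: factor and solve a linear system in low precision, then correct the result using residuals computed in high precision. A minimal sketch, using float32 as a stand-in for the tiny formats (NumPy has no fp8 solver) and re-solving instead of reusing a cached factorization as a real implementation would:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Initial solve in low precision (float32 standing in for fp8/fp16).
    # A production version would factor once and reuse the LU factors.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                       # residual in full precision
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in low precision
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)  # well conditioned
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual near float64 roundoff
```

For a well-conditioned system the refined solution reaches roughly float64 accuracy even though all the expensive solves ran in float32; that trade is exactly what the low-precision hardware is good at.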
> Using these tiny floats for classic engineering workloads
I think that's the idea. Back in the stone age, I did something similar with the Apple II's 40-bit floats because I was hitting quantization problems in a Mandelbrot explorer I wrote. Wrapping my head around it in BASIC was hard; you can only go so far when your abstraction level is that low.