fc417fc802 6 days ago

Personally I think 12/48/96 would be more practical than the current 8/32/64. 32 bits is almost trivially easy to overflow, whereas 48 bits is almost always enough when working with integers. And 64 bits is often insufficient, or at least uncomfortably tight, when packing bits together, whereas by the time you've blown past 96 you should really just bust out the arrays and eat the overhead. Similarly, I feel that 24 bits is likely to be more practical than 16 bits in most cases.
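For a concrete feel of the 32-vs-48 gap: a millisecond counter, a pretty ordinary thing to want, blows through 32 bits in under two months but lives in 48 bits for millennia. Quick C sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* u32 holds ~49.7 days of milliseconds; u48 holds ~8900 years. */
        double ms_per_day  = 1000.0 * 60 * 60 * 24;
        double ms_per_year = ms_per_day * 365;
        printf("u32 overflows after %.1f days of ms\n",
               (double)UINT32_MAX / ms_per_day);
        printf("u48 overflows after %.0f years of ms\n",
               (double)((1ULL << 48) - 1) / ms_per_year);
        return 0;
    }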

phkahler 5 days ago | parent | next [-]

12-bit color would have been great. In the old days that meant 4 bits for each of RGB, or even 4-bit pixels packed 2 per byte. Today 12 bits per channel would be awesome, although high-end cameras seem to be at 14 (which doesn't fit bytes well either).
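The "2 pixels per byte" trick, for anyone who missed that era (C; nibble order here is a convention, not a standard):

    #include <stdint.h>

    /* Two 4-bit pixels (e.g. palette indices) share one byte,
       high nibble first. */
    static uint8_t pack2px(uint8_t left, uint8_t right) {
        return (uint8_t)(((left & 0x0F) << 4) | (right & 0x0F));
    }
    static uint8_t left_px(uint8_t b)  { return b >> 4; }
    static uint8_t right_px(uint8_t b) { return b & 0x0F; }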

Instruction sets - 12 bits for small chips and 24 for large ones. RISC-V instructions would encode better in 24 bits if you put the immediate data after the opcode instead of inside it.
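Something like this, purely as a sketch - the field widths below are invented, not any real RISC-V encoding:

    #include <stdint.h>

    /* Hypothetical 24-bit instruction word: 8-bit opcode, two 5-bit
       register fields, 6 bits of flags. Any immediate would follow as
       its own 24-bit word instead of being squeezed into these bits. */
    typedef struct {
        uint8_t opcode, rd, rs1, flags;
    } insn24;

    static insn24 decode24(uint32_t word) {  /* low 24 bits used */
        insn24 i;
        i.opcode = word & 0xFF;
        i.rd     = (word >> 8)  & 0x1F;
        i.rs1    = (word >> 13) & 0x1F;
        i.flags  = (word >> 18) & 0x3F;
        return i;
    }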

Physical memory is topping out near 40 bits of address space, and some virtual address implementations don't even use all 64 bits on modern systems.
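Concretely, on x86-64 only the low 48 bits of a virtual address (57 with 5-level paging) carry information; the remaining high bits must be a sign extension of the top meaningful bit, the so-called canonical form. A quick check in C:

    #include <stdint.h>
    #include <stdbool.h>

    /* True if va is canonical for 48-bit virtual addressing, i.e.
       sign-extending from bit 47 reproduces the original value. */
    static bool is_canonical48(uint64_t va) {
        return (uint64_t)(((int64_t)(va << 16)) >> 16) == va;
    }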

Floating point is kinda iffy. 36 bits with a more-than-24-bit mantissa would be good. Not sure what would replace doubles.

fc417fc802 5 days ago | parent [-]

Yeah, it would be much more practical for color. 12-bit rgb4, 24-bit rgb8 or rgba6, and 48-bit rgb16 or rgba12 would all have proper alignment. The obvious rgb12 would obviate the need for the unholy mess of asymmetric 32-bit packed RGB formats we "enjoy" today.
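E.g. a 48-bit rgb16 texel is just three aligned 16-bit channels in the low bits of a u64, no 10:10:10:2-style contortions (C sketch):

    #include <stdint.h>

    static uint64_t pack_rgb16(uint16_t r, uint16_t g, uint16_t b) {
        return (uint64_t)r | ((uint64_t)g << 16) | ((uint64_t)b << 32);
    }

    static void unpack_rgb16(uint64_t p,
                             uint16_t *r, uint16_t *g, uint16_t *b) {
        *r = (uint16_t)(p & 0xFFFF);
        *g = (uint16_t)((p >> 16) & 0xFFFF);
        *b = (uint16_t)((p >> 32) & 0xFFFF);
    }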

Address space - Intel added support for 57-bit virtual addresses (up from 48) in 2019, and AMD in 2022. 48-bit pointers obviously address the vast majority of needs. 96-bit pointers would make the developers of GC'd languages and VMs very happy (lots of tag bits).
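The tag-bits point, sketched in C: with 48-bit virtual addresses the top 16 bits of a 64-bit slot are spare, so a VM can stash a type tag there as long as it strips it before dereferencing. (The helper names and layout here are made up; real VMs each do their own variant, and addresses with bit 47 set would additionally need sign extension.)

    #include <stdint.h>

    #define ADDR_MASK ((1ULL << 48) - 1)

    static uint64_t tag_ptr(void *p, uint16_t tag) {
        return ((uint64_t)tag << 48) | ((uint64_t)(uintptr_t)p & ADDR_MASK);
    }
    static uint16_t get_tag(uint64_t v)   { return (uint16_t)(v >> 48); }
    static void    *strip_tag(uint64_t v) {
        return (void *)(uintptr_t)(v & ADDR_MASK);
    }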

For floats, presumably you'd match the native sizes to maintain alignment: an f48 with a 10-bit exponent and an f96 with a 15- or 17-bit exponent. I doubt the former has any downsides relative to an f32, and the latter we've effectively had forever in the form of 80-bit extended precision floats with their 15-bit exponent.
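Back-of-envelope for that f48 (1 sign, 10 exponent, 37 mantissa bits - this thread's invention, not an IEEE format), next to f32:

    #include <math.h>
    #include <stdio.h>

    static void describe(const char *name, int exp_bits, int man_bits) {
        int bias = (1 << (exp_bits - 1)) - 1;
        /* The implicit leading bit gives man_bits + 1 significand bits. */
        printf("%s: max ~2^%d, ~%.1f decimal digits\n",
               name, bias + 1, (man_bits + 1) * log10(2.0));
    }

    int main(void) {
        describe("f32", 8, 23);   /* max ~2^128,  ~7.2 digits */
        describe("f48", 10, 37);  /* max ~2^512, ~11.4 digits */
        return 0;
    }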

Amusingly, I'm just now realizing that the Intel 80-bit representation has the same exponent width as IEEE binary128.

I guess high-end hardware that supports f128 would go to either f144 or f192. The latter maintains alignment, so presumably that would win out. Anyway, pretty much no one supports f128 in hardware to begin with.

galangalalgol 5 days ago | parent | prev [-]

The fixed-point TI DSP chips always had a long int that was 48 bits. Intel had 80-bit floating point registers before SIMD registers took over. And the PDP-11... Powers of two aren't as ubiquitous as they seem. If anything, the hardware uses whatever sizes it wants, and that gets abstracted away from the rest of the world by compilers and libraries.

meepmorp 5 days ago | parent [-]

The pdp-11 was 16-bit.

galangalalgol 5 days ago | parent | next [-]

Thanks! I picked the only PDP that was actually a power of 2.

Narishma 2 days ago | parent | prev [-]

They probably meant PDP-8.