cubefox 3 days ago

Unfortunately the article never mentions why a GPU would ever need to compress (rather than decompress) images. What's the application for that? The article's opening mentions formats for computer game textures, but I'm pretty sure those already ship compressed, and they only need to be decompressed by the client GPUs.

01HNNWZ0MV43FF 3 days ago | parent | next [-]

Someone mentioned environment maps. Anything that's done with framebuffers or render-to-texture might benefit: water reflections and refractions, metal surfaces reflecting the world, mirrors in bathrooms, Panini distortion for high-FOV cameras, TV screens like the Breencasts in Half-Life 2.

cubefox 3 days ago | parent [-]

Why would they benefit from hardware compression?

castano-ludicon 3 days ago | parent [-]

The most immediate benefit is reduced memory use. Many devices are memory-limited, and with skyrocketing RAM prices this is becoming more problematic.

Oversubscription drops performance catastrophically, but even without running into memory limits, compression reduces bandwidth which increases performance and lowers power use. This results in better experiences and longer battery life.
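To make the savings concrete, here's a rough back-of-envelope comparison (my own illustration, not from the thread), assuming a single 4096×4096 texture stored uncompressed as RGBA8 versus re-encoded to a block format like BC7:

```python
# Memory-footprint comparison for one 4K texture, ignoring mipmaps.
# BC7 stores each 4x4 texel block in 16 bytes (1 byte per texel),
# versus 4 bytes per texel for uncompressed RGBA8.
width = height = 4096

rgba8_bytes = width * height * 4                # 4 bytes per texel
bc7_bytes = (width // 4) * (height // 4) * 16   # 16-byte 4x4 blocks

print(rgba8_bytes // 2**20, "MiB uncompressed")  # 64 MiB
print(bc7_bytes // 2**20, "MiB as BC7")          # 16 MiB
print(rgba8_bytes / bc7_bytes, "x reduction")    # 4.0 x reduction
```

The same 4x factor applies to bandwidth every time the texture is sampled, which is where the performance and power benefits come from.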

cubefox 3 days ago | parent [-]

But in order to be compressed, don't we have to load the image into memory first, uncompressed? I don't quite see how this could result in reduced memory usage.

castano-ludicon 2 days ago | parent [-]

It needs to be decompressed, but it does not stay uncompressed. That memory is only used temporarily. Games usually have a pinned staging buffer to upload data to the GPU. This memory is reused and does not contribute significantly to the total memory use.
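A toy sketch of that staging pattern (names and sizes are hypothetical; a real engine would use persistently mapped host-visible GPU memory via Vulkan or D3D12): the decompressed data streams through one fixed-size buffer that is allocated once and reused, so the peak "extra" memory is bounded by the staging buffer size, not by the total size of the textures being uploaded.

```python
# Toy sketch: texel data passes through one fixed-size, reused staging
# buffer on its way to the GPU, so transient memory use is bounded by
# STAGING_SIZE regardless of how much data is uploaded in total.
STAGING_SIZE = 4 * 2**20                 # hypothetical 4 MiB staging buffer
staging = bytearray(STAGING_SIZE)        # allocated once, reused for every upload

def upload_texture(texel_data, copy_to_gpu):
    """Stream texel_data to the GPU in staging-sized chunks."""
    for offset in range(0, len(texel_data), STAGING_SIZE):
        chunk = texel_data[offset:offset + STAGING_SIZE]
        staging[:len(chunk)] = chunk               # CPU write into pinned memory
        copy_to_gpu(bytes(staging[:len(chunk)]))   # GPU copies out of it

# Stand-in for the GPU side: collect what arrives.
uploaded = bytearray()
upload_texture(b"\xab" * (10 * 2**20), uploaded.extend)
print(len(uploaded) // 2**20, "MiB uploaded")  # 10 MiB uploaded
```

Here 10 MiB of data reaches the "GPU" while only 4 MiB of staging memory ever exists, which is why the uncompressed intermediate doesn't show up in the steady-state memory budget.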

castano-ludicon 3 days ago | parent | prev [-]

There are many textures that can't be encoded in advance: images compressed with transmission formats such as JPEG or AVIF, procedural textures, terrain splatting, user-generated textures, environment maps, dynamic lightmaps, etc.