exDM69 | 4 days ago
Has anyone got suggestions for blur algorithms suitable for compute shaders? The usual Kawase blur (described in the article) relies on bilinear sampling of textures. You can, of course, implement the algorithm as-is in a compute shader with texture sampling, but I have a situation where the inputs and outputs should live in shared memory. I'm trying to avoid writing intermediate results out to off-chip DRAM, which would be necessary to use texture sampling.

I spent some time looking into a way of doing an efficient compute shader blur using warp/wave/subgroup intrinsics to downsample the image and then apply some kind of Gaussian-esque weighted average. The hard part is that the Kawase blur samples the input at "odd" (half-texel) locations, but warp intrinsics are limited to "even" (lane-aligned) locations, if that makes sense. There's a minimal sketch of what I mean below.

I would appreciate it if anyone knows of any prior art in this department.
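For concreteness, here's a rough CUDA sketch of the register-resident idea (the kernel name and launch assumptions are mine, purely illustrative). `__shfl_xor_sync` pairs lanes whose indices differ in one bit, which is exactly the even-alignment restriction I'm describing:

    #include <cuda_runtime.h>

    // Illustrative sketch, not from the article: a 2:1 horizontal
    // downsample done entirely in registers via warp shuffles.
    // Assumes width and blockDim.x are multiples of 32 so every
    // lane in a warp is active (required by the full-mask shuffle).
    __global__ void downsample_row(const float* in, float* out, int width)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        float v = in[x];

        // Exchange with the lane whose index differs in bit 0, i.e.
        // the neighbour across an even-aligned pair boundary (lanes
        // 0-1, 2-3, ...). This is the "even locations" limit: a
        // shuffle can never reach the half-texel (odd) offsets that
        // bilinear sampling gives the Kawase blur for free.
        float neighbour = __shfl_xor_sync(0xffffffffu, v, 1);
        float avg = 0.5f * (v + neighbour);

        // Even lanes write one output texel per input pair.
        if ((threadIdx.x & 1u) == 0u)
            out[x >> 1] = avg;
    }

With bilinear sampling, one half-texel-offset fetch averages four texels for free; with shuffles you need explicit exchanges plus a weighted sum per axis, and the pairing is fixed to even boundaries, so the Kawase sample pattern doesn't map onto it directly.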