crazygringo 5 days ago
Fascinating! But...

> The go-to solution here is GPU accelerated video compression

Isn't the solution usually hardware encoding?

> I think this is an order of magnitude faster than even dedicated hardware codecs on GPUs.

Is there an actual benchmark, though? I would have assumed that built-in hardware encoding would always be faster. Plus, I'd assume your game is already saturating your GPU, so the last thing you want is to use it for simultaneous video encoding.

But I'm not an expert in either of these, so I'm curious to know if/how I'm wrong here. Is it that hardware encoders are designed to be real-time, but intentionally trade off compression for lower latency? And is the proposed video encoding really so lightweight that it can easily share the GPU without affecting game performance?
averne_ 5 days ago | parent
Hardware GPU encoders refer to dedicated ASIC engines, separate from the main shader cores. They run in parallel, so there is no performance penalty for using both simultaneously, aside from increased power consumption.

Generally, you're right that these hardware blocks favor low latency over compression efficiency. One example of this is motion estimation (one of the most expensive operations during encoding). The NVENC engine on NVIDIA GPUs only runs fairly basic search loops, but it can optionally be fed motion hints from an external source. I know that NVIDIA has a CUDA-based motion estimator (called CEA) for this purpose. On recent GPUs there is also the optical flow engine (another separate block), which might be able to do higher-quality estimation.
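To give a sense of why full-quality motion estimation is so expensive, here is a rough, illustrative CUDA sketch of brute-force block matching over a small search window. This is not the actual NVENC or CEA implementation (those aren't public); the block size, search range, kernel structure, and names are all assumptions made purely for illustration.

```cuda
// Illustrative only: brute-force SAD block matching, one thread block per
// 16x16 macroblock, one thread per candidate motion vector. Launch as:
//   sad_full_search<<<dim3(width/BLOCK, height/BLOCK),
//                     dim3(2*RANGE, 2*RANGE)>>>(cur, ref, width, height,
//                                               pitch, best_mv);
#include <cstdint>
#include <climits>

#define BLOCK 16   // macroblock size (assumption for illustration)
#define RANGE 16   // search window: candidate vectors in [-RANGE, RANGE-1]

__global__ void sad_full_search(const uint8_t* cur, const uint8_t* ref,
                                int width, int height, int pitch,
                                short2* best_mv)
{
    // Top-left corner of the macroblock handled by this thread block.
    int mbx = blockIdx.x * BLOCK;
    int mby = blockIdx.y * BLOCK;

    // Candidate displacement tested by this thread.
    int dx = (int)threadIdx.x - RANGE;
    int dy = (int)threadIdx.y - RANGE;

    // Sum of absolute differences for this candidate (UINT_MAX if the
    // candidate block falls outside the reference frame).
    unsigned sad = UINT_MAX;
    int rx = mbx + dx, ry = mby + dy;
    if (rx >= 0 && ry >= 0 && rx + BLOCK <= width && ry + BLOCK <= height) {
        sad = 0;
        for (int y = 0; y < BLOCK; ++y)
            for (int x = 0; x < BLOCK; ++x)
                sad += abs((int)cur[(mby + y) * pitch + mbx + x] -
                           (int)ref[(ry  + y) * pitch + rx  + x]);
    }

    // Reduce to the lowest-SAD candidate for this macroblock.
    __shared__ unsigned best_sad;
    if (threadIdx.x == 0 && threadIdx.y == 0) best_sad = UINT_MAX;
    __syncthreads();
    atomicMin(&best_sad, sad);
    __syncthreads();
    if (sad == best_sad)  // ties resolved arbitrarily; any winner is fine here
        best_mv[blockIdx.y * gridDim.x + blockIdx.x] =
            make_short2((short)dx, (short)dy);
}
```

Even this naive version evaluates 32 x 32 candidate vectors x 256 pixel differences, roughly 260k absolute differences per 16x16 block per reference frame, which is why the fixed-function engines cut corners on the search and accept externally computed hints instead.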