averne_ 5 days ago
Hardware GPU encoders refer to dedicated ASIC engines, separate from the main shader cores. So they run in parallel and there is no performance penalty for using both simultaneously, besides increased power consumption. Generally, you're right that these hardware blocks favor latency. One example of this is motion estimation (one of the most expensive operations during encoding). The NVENC engine on NVidia GPUs will only use fairly basic detection loops, but can optionally be fed motion hints from an external source. I know that NVidia has a CUDA-based motion estimator (called CEA) for this purpose. On recent GPUs there is also the optical flow engine (another separate block) which might be able to do higher quality detection.
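To make the "motion estimation is expensive" point concrete, here is a toy exhaustive block-matching search using the sum-of-absolute-differences (SAD) metric. This is a generic textbook sketch, not NVENC's actual algorithm; the function names `sad` and `full_search` are mine, and real encoders use much cheaper search patterns (or, as noted above, externally supplied hints) precisely because this brute-force loop is so costly.

```python
# Toy full-search block-matching motion estimation with a SAD cost.
# Frames are 2D lists of luma values; this is illustrative only.

def sad(ref, cur, rx, ry, cx, cy, block=4):
    """Sum of absolute differences between a block in the reference
    frame (anchored at rx, ry) and the current frame (at cx, cy)."""
    total = 0
    for dy in range(block):
        for dx in range(block):
            total += abs(ref[ry + dy][rx + dx] - cur[cy + dy][cx + dx])
    return total

def full_search(ref, cur, cx, cy, radius=2, block=4):
    """Exhaustively test every motion vector in a (2*radius+1)^2 window
    around (cx, cy); return the best vector and its cost. Cost grows as
    O(radius^2 * block^2) per block, which is why hardware shortcuts it."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), sad(ref, cur, cx, cy, cx, cy, block)
    for my in range(-radius, radius + 1):
        for mx in range(-radius, radius + 1):
            rx, ry = cx + mx, cy + my
            # Skip candidates that fall outside the reference frame.
            if 0 <= rx and rx + block <= w and 0 <= ry and ry + block <= h:
                cost = sad(ref, cur, rx, ry, cx, cy, block)
                if cost < best_cost:
                    best_cost, best = cost, (mx, my)
    return best, best_cost
```

An external hint source (like the CUDA estimator or the optical flow block mentioned above) would effectively replace the double loop over `mx`/`my` with one or a few candidate vectors to verify.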
miladyincontrol 4 days ago | parent
I'm pretty sure they aren't dedicated ASIC engines anymore. That's why hacks like nvidia-patch are a thing, where you can scale NVENC usage up to the full GPU's compute rather than the arbitrary limitation Nvidia adds. The penalty for using them within those limitations tends to be negligible, however. And on a similar note, NvFBC helps a ton with latency, but it's disabled at the driver level for consumer cards.