ablob 5 hours ago

Common video codecs are often hardware accelerated. That often applies on the CPU side too, since plenty of systems without dedicated GPUs still play video, like notebooks and smartphones. So in the end it's less about whether decoding is parallelizable and more about whether a GPU implementation can beat dedicated hardware, to which the answer should almost always be no.

P.S.: In video decoding, speed only matters up to a certain point, namely: "Can I decode the next frame(s) in time to show it/them without stuttering?" Once that has been achieved, other factors such as power draw become more important.
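To make that criterion concrete, here is a minimal sketch of the per-frame deadline; the frame rate and the measured decode time are assumed example numbers, not figures from the comment:

```shell
# Decode-deadline sketch (numbers are assumptions): once one frame decodes
# inside the display interval, extra decode speed no longer helps playback.
fps=60
budget_us=$(( 1000000 / fps ))   # ~16666 microseconds per frame at 60 fps
decode_us=9000                   # hypothetical measured per-frame decode time
if [ "$decode_us" -le "$budget_us" ]; then
  echo "smooth: optimize for power instead"
else
  echo "stutter: still need more decode speed"
fi
```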

craftkiller an hour ago | parent | next [-]

It is my understanding that hardware-accelerated video encoders (as in the fixed-function ones built into consumer GPUs) produce lower-quality output than software encoders at the same bitrate. They're really only there for on-the-fly encoding, like streaming to Twitch or recording security camera footage. But if you're encoding your precious family memories or backing up your DVD collection, you want to use a software encoder. Therefore, if a hypothetical software h264 encoder could run faster on the GPU, it would have value for anyone doing not-on-the-fly encoding of video where they care about the quality.

One source for the software encoder quality claim is the "transcoding" section of this article: https://chipsandcheese.com/i/138977355/transcoding
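For illustration, the two paths typically look like the following ffmpeg invocations (shown here as strings rather than run, since the input file `home_video.mp4` is a hypothetical placeholder; the libx264 and NVENC option names are real ffmpeg flags):

```shell
# Software encode: slower, but better quality per bit (libx264, slow preset).
sw='ffmpeg -i home_video.mp4 -c:v libx264 -preset slow -crf 18 out_sw.mp4'
# Fixed-function GPU encode: much faster, intended for on-the-fly use (NVENC).
hw='ffmpeg -i home_video.mp4 -c:v h264_nvenc -preset p7 -cq 18 out_hw.mp4'
echo "$sw"
echo "$hw"
```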

asmosoinio an hour ago | parent | prev [-]

> ... That being: "Can I decode the next frame(s) in time to show it/them without stuttering".

Except when you are editing video or rendering output. With multiple streams of very-high-definition input, you definitely need much more than realtime decode speed for a single video.

And you would want to scrub around the video(s), jump to any timecode, and have the target frame showing preferably by the time your monitor next refreshes.
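A rough throughput sketch shows why editing blows past the single-stream realtime budget; the stream count and resolutions are assumed example figures:

```shell
# Multicam-edit sketch (assumed figures): four 4K/60fps streams expressed as
# multiples of a single 1080p/30 realtime decode.
streams=4
px_per_s=$(( 3840 * 2160 * 60 ))   # pixels/second for one UHD 60fps stream
need=$(( streams * px_per_s ))     # total decode throughput required
baseline=$(( 1920 * 1080 * 30 ))   # one 1080p/30 stream, decoded in realtime
echo $(( need / baseline ))        # -> 32 such realtime decodes, before any scrubbing headroom
```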