drob518 7 hours ago

Hm. Not a lot of technical detail about the bitrate improvement of the streams or the CPU required to decode them. I'm also wondering whether all the encoding and decoding was done by software reference implementations (just VLC?) or whether anything had any form of hardware assist. It reads as "We did it" without much other information as to how well it went, what AV2's benefits are over AV1 and other codecs, and whether those benefits were realized in the demonstration or require downstream work to achieve.

adrian_b 5 hours ago | parent

In TFA, there are links to the complete specification of AV2 and to the reference software implementation, which was used in the test.

https://gitlab.com/AOMediaCodec/avm/-/tree/research-v13.0.0/...

TFA says that the test was done on an Apple laptop and the decoding was done on the CPU, so not using any special hardware support.

The reference AV2 implementation uses architecture-specific SIMD instructions on x86-64, Aarch64 and IBM POWER.

So in this test it used the Arm vector ISA (Neon), written with C intrinsics, as can be seen in the source files:

https://gitlab.com/AOMediaCodec/avm/-/tree/research-v13.0.0/...

adrian_b 2 hours ago | parent

EDIT: When first looking at TFA, I did not notice that only the first demonstration was done on an Apple laptop using Neon instructions; the second demonstration was done on an unnamed laptop with an x86 CPU, thus using the AVX2 vector instructions.

The x86 demo decoded a 1080p/24 fps video stream in real time. Because the resolution of the Apple demo is not specified, we can assume it was lower than on the x86 laptop.