lubesGordi a day ago

Still, you don't necessarily need dynamic memory allocation if the number of deltas you have is bounded. In some codecs I could definitely see those having a varying size depending on how much change is going on in the scene.
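A minimal sketch of what I mean (MAX_DELTAS and struct delta are made-up names, not from any real codec): if the bitstream bounds the number of deltas, a fixed-size array in a preallocated decoder context replaces per-frame heap allocation.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_DELTAS 64           /* worst case fixed by the (hypothetical) spec */

    struct delta { int16_t dx, dy; };

    struct delta_buf {
        struct delta d[MAX_DELTAS]; /* storage lives in the decoder context, no malloc */
        size_t count;               /* how many deltas are actually used this frame */
    };

    /* Append a delta; returns 0 on success, -1 if the spec-level bound is hit. */
    static int push_delta(struct delta_buf *b, int16_t dx, int16_t dy)
    {
        if (b->count >= MAX_DELTAS)
            return -1;              /* malformed stream: reject rather than allocate more */
        b->d[b->count].dx = dx;
        b->d[b->count].dy = dy;
        b->count++;
        return 0;
    }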

I'm not a codec developer; I'm only coming at this from an outside, intuitive perspective. Generally, performance-conscious parties want to minimize heap allocations, so I'm interested in how that applies to codec architecture. Codecs seem so complex to me, with so much inscrutable shit going on, but then heap allocations aren't optimized out? It seems like there has to be a very good reason for that.

izacus a day ago

You're actually right about allocation - most video codecs are designed with hardware decoders in mind, which have a fixed memory size. This is why their profiles hard-limit the memory needed for decode - resolution, number of reference frames, etc.
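A rough sketch of that idea in C (the constants and struct names are illustrative, not taken from any particular spec): the profile/level limits let the decoder size a frame pool once, up front, and never touch the allocator again during decode.

    #include <stdlib.h>
    #include <stddef.h>
    #include <stdint.h>

    struct level_limits {
        int max_width, max_height;  /* maximum coded resolution for this level */
        int max_ref_frames;         /* maximum number of reference frames kept around */
    };

    struct frame_pool {
        uint8_t *mem;               /* one big upfront allocation */
        size_t frame_bytes;
        int num_frames;
    };

    static int frame_pool_init(struct frame_pool *p, const struct level_limits *l)
    {
        /* 4:2:0, 8-bit: luma plus two quarter-size chroma planes = 1.5 bytes/pixel */
        p->frame_bytes = (size_t)l->max_width * l->max_height * 3 / 2;
        p->num_frames  = l->max_ref_frames + 1;  /* references + the frame being decoded */
        p->mem = malloc(p->frame_bytes * (size_t)p->num_frames);
        return p->mem ? 0 : -1;
    }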

That's not quite the case for encoding - that's where things get murky, since you have far more freedom in what you can do to compress better.

Sesse__ a day ago

The very good reason is that there simply aren't a lot of heap allocations going on. It's easy to check: run perf against e.g. ffmpeg decoding a big file to /dev/null, and observe the distinct lack of malloc high up in the profile.
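Concretely, something along these lines (the input file is a placeholder):

    perf record -g ffmpeg -i big_input.mkv -f null /dev/null
    perf report    # malloc/free barely show up in the hot paths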

There's a heck of a lot of distance from “not a lot” to “zero”, though.