thinkingQueen 9 days ago

Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them. JVET has an attendance of about 350 such engineers each meeting (four times a year).

Not to mention the computer clusters needed to run all the coding sims: thousands and thousands of CPUs per research team.
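
(For context, what those sims produce is rate-distortion curves: each proposal is encoded at a handful of rate points and compared against an anchor using the Bjøntegaard Delta rate. A rough Python sketch of that metric, with made-up numbers rather than the actual JVET common-test-conditions tooling:)

    import numpy as np

    def bd_rate(anchor_kbps, anchor_psnr, test_kbps, test_psnr):
        """Average bitrate difference (%) between two rate-distortion curves,
        computed over their overlapping PSNR range (Bjontegaard's method)."""
        # Fit log-bitrate as a cubic polynomial of PSNR for each codec
        pa = np.polyfit(anchor_psnr, np.log(anchor_kbps), 3)
        pt = np.polyfit(test_psnr, np.log(test_kbps), 3)
        lo = max(min(anchor_psnr), min(test_psnr))
        hi = min(max(anchor_psnr), max(test_psnr))
        # Integrate both fits over the common PSNR interval and average
        ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
        it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
        avg_log_diff = (it - ia) / (hi - lo)
        return (np.exp(avg_log_diff) - 1) * 100   # negative = test codec saves bits

    # Made-up rate points (kbps) and PSNR values (dB) for one test sequence;
    # prints about -17%: the hypothetical test codec needs ~17% fewer bits.
    print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
                  [850, 1700, 3400, 6800], [34.1, 36.6, 39.1, 41.6]))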

People who are outside the video coding industry do not understand that it is an industry. It’s run by big companies with large R&D budgets. It’s like saying “where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.

MPEG and especially JVET are doing just fine. The same companies and engineers who worked on AVC, HEVC and VVC are still there, with many new ones, especially from Asia.

MPEG was reorganized because this Leonardo guy became an obstacle, and he’s been angry about it ever since. Other than that I’d say it's business as usual in the video coding realm.

rwmj 9 days ago | parent | next [-]

Who would write a web server? Who would write Curl? Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year? People don't understand that these companies have huge R&D budgets!

(The answer is that most of the work would be done by companies who have an interest in video distribution - e.g. Google - but don't profit directly by selling codecs. And universities for the more research side of things. Plus volunteers gluing it all together into the final system.)

mike_hearn 9 days ago | parent | next [-]

Google funding free stuff is not a real social mechanism. It's not something you can point to and say that's how society should work in general.

Our industry has come to take Google's enormous corporate generosity for granted, but there was zero need for it to be as helpful to open computing as it has been. It would have been just as successful with YouTube if Chrome was entirely closed source and they paid for video codec licensing, or if they developed entirely closed codecs just for their own use. In fact nearly all Google's codebase is closed source and it hasn't held them back at all.

Google did give a lot away though, and for that we should be very grateful. They not only released a ton of useful code and algorithms for free, they also inspired a culture where other companies also do that sometimes (e.g. Llama). But we should also recognize that relying on the benevolence of 2-3 idealistic billionaires with a browser fetish is a very time- and place-specific one-off; it's not something that can be demanded or generalized.

In general, R&D is costly and requires incentives. Patent pools aren't perfect, but they work well enough to keep defining the state of the art and to establish global standards (digital TV, DVDs, streaming... all patent-pool-based mechanisms).

breve 8 days ago | parent | next [-]

> Google funding free stuff is not a real social mechanism.

It's not a social mechanism. And it's not generosity.

Google pushes huge amounts of video and audio through YouTube. It's in Google's direct financial interest to have better video and audio codecs implemented and deployed in as many browsers and devices as possible. It reduces Google's costs.

Royalty-free video and audio codecs make that implementation and deployment more likely in more places.

> Patent pools aren't perfect

They are a long way from perfect. Patent pools will contact you and say, "That's a nice codec you've got there. It'd be a shame if something happened to it."

Three different patent pools are trying to collect licensing fees for AV1:

https://www.sisvel.com/licensing-programmes/audio-and-video-...

https://accessadvance.com/licensing-programs/vdp-pool/

https://www.avanci.com/video/

raverbashing 9 days ago | parent | prev | next [-]

These are bad comparisons

The question is more "who would write the HTTP spec?", except instead of sending text back and forth you need experts in compression, visual perception, video formats, etc.

rwmj 8 days ago | parent [-]

Did TBL need to patent the HTTP spec?

chubot 8 days ago | parent | prev | next [-]

> Who would write a whole operating system to compete with Microsoft when that would take thousands of engineers being paid $100,000s per year?

You might not realize that almost all of Linux development is funded by the same kind of companies that fund MPEG development.

It's not "engineers in their basement", and it never was.

https://www.linuxfoundation.org/about/members

e.g. Red Hat, Intel, Oracle, Google, and now MICROSOFT itself (the competitive landscape changed)

This has LONG been the case, e.g. an article from 2008:

https://www.informationweek.com/it-sectors/linux-contributor...

2017 Linux Foundation Report: https://www.linuxfoundation.org/press/press-release/linux-fo...

Roughly 15,600 developers from more than 1,400 companies have contributed to the Linux kernel since the adoption of Git made detailed tracking possible

The Top 10 organizations sponsoring Linux kernel development since the last report include Intel, Red Hat, Linaro, IBM, Samsung, SUSE, Google, AMD, Renesas and Mellanox

---

curl does seem to be an outlier, but you still need to answer the question: "Who would develop video codecs?" You can't just say "Linux appeared out of thin air", because that's not what happened.

Linux has funding because it serves the interests of a large group of companies that themselves have a source of revenue.

(And to be clear, I do not think that is a bad thing! I prefer it when companies write open source software. But it does skew the design of what open source software is available.)

rwmj 8 days ago | parent | next [-]

I've used and developed for Linux since 1994 (long before major commercial interests), and I work for Red Hat so it's unlikely I misunderstand how Linux was and is developed.

cwizou 8 days ago | parent | prev [-]

> You can't just say "Linux appeared out of thin air", because that's not what happened.

It kinda did though https://en.wikipedia.org/wiki/Linux#Creation !

The corporate support you mentioned arrived years after that.

chubot 8 days ago | parent [-]

You could say "Linux was CREATED out of thin air", and I wouldn't argue with you.

But creation only counts for so much -- without support, Linux could still be a hobby project that "won't be big and professional like GNU"

I'm saying Linux didn't APPEAR out of thin air, or at least it's worth looking deeper into the reasons why. "Appearing" to the general public, i.e. making widely useful software, requires a large group of people over a sustained time period, like 10 years.

----

i.e. right NOW there are probably hundreds of projects like Linux that you haven't heard of, which don't necessarily align with funders' interests.

I would actually make the comparison to GNU -- GNU is a successful project, but there are various efforts underneath it that kind of languish.

Look at High Priority Free Software Projects - https://www.fsf.org/campaigns/priority-projects/

- Decentralization, federation, and self-hosting

- Free drivers, firmware, and hardware designs

- Real-time voice and video chat

- Internationalization of free software

- Security by and for free software

- Intelligent personal assistant

I'm saying that VIDEO CODECS might be structurally more similar to these projects, than they are to the Linux kernel.

i.e. making a freely-licensed kernel IS aligned with Red Hat, Intel, Google, but making an Intelligent Personal Assistant is probably not.

Somebody probably ALREADY created a good free intelligent personal assistant (or one that COULD BE as great as Linux), but you never heard of them. Because they don't have hundreds of companies and thousands of people aligned with them.

cwizou 6 days ago | parent [-]

My point was, a lot of the early corporate support came from smallish companies built specifically around Linux. Red Hat is the perfect example of that: it started as a university project to make a distro.

It took a while (and a lot of pain) to get a lot of driver vendors to come fully into the project, yet Linux was already gaining a bunch of traction at that time (say, the second half of the '90s).

I'll give you that Intel was always more or less a good actor though! But Google didn't exist when Linux already mattered. And when Google was created, they definitely benefited a lot from it, basing much of their infra on it.

Marketing needs (and lawyer approval) can bring support faster than most things. Opus for audio is a good example of that too.

thinkingQueen 9 days ago | parent | prev [-]

Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened. We’re not talking about a software project that you can just hack together, compile, and see if it works. We’re talking about rigorous performance and complexity evaluations, subjective testing, and massive coordination with hardware manufacturers—from chips to displays.

People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.

unlord 9 days ago | parent | next [-]

> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.

As someone who led an open source team (of majority volunteers) for nearly a decade at Mozilla, I can tell you that people do work on video codecs for fun; see https://github.com/xiph/daala

Working with fine people from Xiph.Org and the IETF (and later AOM) on the royalty-free formats Theora, Opus, Daala and AV1 was by far the most fun, interesting and fulfilling work I've had as a professional engineer.

tux3 9 days ago | parent [-]

Daala had some really good ideas. I only understand the coding tools at the level of a curious codec enthusiast, far from an expert, but it was really fascinating to follow its progress.

Actually, are Xiph people still involved in AVM? It seems like it's being developed a little bit differently than AV1. I might have lost track a bit.

Taek 9 days ago | parent | prev | next [-]

People don't develop video codecs for fun because there are patent minefields.

You don't *have* to add all the rigour. If you develop a new technique for video compression, a new container for holding data, etc, you can just try it out and share it with the technical community.

Well, you could, if you weren't afraid of getting sued for infringing on patents.

scott_w 9 days ago | parent | prev | next [-]

> Are you really saying that patents are preventing people from writing the next great video codec?

Yes, that’s exactly what people are saying.

People are also saying that companies aren’t writing video codecs.

In both cases, they can be sued for patent infringement if they do.

eqvinox 9 days ago | parent | prev | next [-]

> Are you really saying that patents are preventing people from writing the next great video codec? If it were that simple, it would’ve already happened.

You wouldn't know if it had already happened, since such a codec would have little chance of success, possibly not even publication. Your proposition is really unprovable in either direction due to the circular feedback on itself.

fires10 9 days ago | parent | prev | next [-]

I don't do video because I don't work with it, but I do image compression for fun and no profit. I use some video techniques due to the type of images I'm compressing. I don't release it because of the minefield. I do it because it's fun. The simulation runs and other tasks I often kick to the cloud for the larger compute needs.

bayindirh 9 days ago | parent | prev | next [-]

> People don’t develop video codecs for fun like they do with software. And the reason is that it’s almost impossible to do without support from the industry.

Hmm, let me check my notes:

    - Quite OK Image format: https://qoiformat.org/
    - Quite OK Audio format: https://qoaformat.org/
    - LAME (Ain't an MP3 Encoder): https://lame.sourceforge.io/
    - Xiph family of codecs: https://xiph.org/
Some of these guys have standards bodies as supporters, but in all cases bigger groups formed behind them after they had made considerable effort. QOI and QOA were written by a single guy just because he was bored (a toy sketch of QOI's core idea follows below).
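
(To show how small that kind of for-fun codec can be, here's a toy Python sketch of QOI's core trick, using only three of its chunk types; the real format also has DIFF/LUMA chunks, a 14-byte header and an end marker, see the one-page spec at qoiformat.org.)

    def qoi_hash(px):                        # px = (r, g, b, a)
        r, g, b, a = px
        return (r * 3 + g * 5 + b * 7 + a * 11) % 64

    def qoi_encode(pixels):                  # toy encoder: RUN, INDEX and RGBA chunks only
        out = bytearray()
        index = [(0, 0, 0, 0)] * 64          # 64-entry running color cache
        prev, run = (0, 0, 0, 255), 0
        for px in pixels:
            if px == prev:                   # same as previous pixel: extend the run
                run += 1
                if run == 62:                # QOI_OP_RUN stores at most 62 repeats
                    out.append(0xC0 | (run - 1))
                    run = 0
                continue
            if run:                          # flush any pending run
                out.append(0xC0 | (run - 1))
                run = 0
            h = qoi_hash(px)
            if index[h] == px:
                out.append(h)                # QOI_OP_INDEX: 1 byte for a cached color
            else:
                index[h] = px
                out.append(0xFF)             # QOI_OP_RGBA: tag byte + 4 raw bytes
                out.extend(px)
            prev = px
        if run:
            out.append(0xC0 | (run - 1))
        return bytes(out)

    # 100 identical pixels collapse to 7 bytes: one RGBA chunk and two run chunks
    print(len(qoi_encode([(10, 20, 30, 255)] * 100)))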

For example, FLAC is a worst-of-all-worlds codec for the industry to back: a streamable, seekable, hardware-implementable, error-resistant, lossless codec with 8 channels, 32-bit samples, and sample rates up to 640 kHz, with no DRM support. Yet we have it, and it rules consumer lossless audio while giggling and waving at everyone.
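
(And the heart of that kind of lossless coding is surprisingly simple: predict each sample from the previous ones and keep only the residual. A toy Python sketch of FLAC's fixed order-2 predictor; the real codec adds LPC predictors, Rice coding of the residuals, framing and CRCs.)

    def flac_fixed2_residuals(samples):
        """Residuals for FLAC's fixed order-2 predictor: pred = 2*x[n-1] - x[n-2].
        Smooth signals give tiny residuals, which the real codec Rice-codes."""
        res = list(samples[:2])              # warm-up samples are kept verbatim
        for n in range(2, len(samples)):
            pred = 2 * samples[n - 1] - samples[n - 2]
            res.append(samples[n] - pred)
        return res

    def restore(res):                        # exact inverse, so nothing is lost
        out = list(res[:2])
        for n in range(2, len(res)):
            out.append(res[n] + 2 * out[n - 1] - out[n - 2])
        return out

    samples = [0, 205, 400, 575, 720, 827]   # a slowly varying waveform
    print(flac_fixed2_residuals(samples))    # [0, 205, -10, -20, -30, -38]
    assert restore(flac_fixed2_residuals(samples)) == samples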

On the other hand, we have LAME: an encoder which also uses psychoacoustic techniques to improve the resulting sound quality, and almost everyone uses it, because the closed-source encoders generally sound lamer than LAME at the same bitrates. Remember, the MP3 format doesn't have a reference encoder. If a decoder can read the file and it sounds the way you expect, then you have a valid encoder. There's no spec for that.

> Are you really saying that patents are preventing people from writing the next great video codec?

Yes, yes, and yes. MPEG and similar groups openly threatened free and open codecs by opening "patent portfolio forming calls" to create portfolios to fight these codecs, because they are terrified of being deprived of their monies.

If patents and license fees are not a problem for these guys, can you tell me why all professional camera gear that can shoot video comes only with "personal, non-profit and non-professional" licenses on board, and why you have to pay blanket extort ^H^H^H^H^H licensing fees to these bodies to take a video you can monetize?

For the license disclaimers in camera manuals, see [0].

[0]: https://news.ycombinator.com/item?id=42736254

Spooky23 8 days ago | parent | prev [-]

Patents, by design, give inventors claims to ideas, which gives them the money to drive progress at a pace that meets their business needs.

Look at data compression. Sperry/Univac controlled key patents and slowed down invention in the space for years. Was it in the interest of these companies or Unisys (their successor) to invest in compression development? Nope.

That’s by design. That moat of exclusivity makes it difficult to compensate people to come up with novel inventions in-scope or even adjacent to the patent. With codecs, the patents are very granular and make it difficult for anyone but the largest players with key financial interests to do much of anything.

roenxi 9 days ago | parent | prev | next [-]

> It’s like saying “where would we be with AI if Google, OpenAI and Nvidia didn’t have an iron grip”.

We'd be where we are. All the codec-equivalent aspects of their work are unencumbered by patents and there are very high quality free models available in the market that are just given away. If the multimedia world had followed the Google example it'd be quite hard to complain about the codecs.

thinkingQueen 9 days ago | parent [-]

That’s hardly true. Nvidia’s tech is covered by patents and licenses. Why else would it be worth 4.5 trillion dollars?

The top AI companies use very restrictive licenses.

I think it’s actually the other way around, and the AI industry will end up following the video coding industry when it comes to patents, royalties, licenses, etc.

roenxi 9 days ago | parent | next [-]

Because they make and sell a lot of hardware. I'm sure they do have a lot of patents and licences, but if all that disappeared today it'd be years to decades before anyone could compete with them. Even just getting a foot in the door in TSMC's queue of customers would be hard. Their valuation can likely be justified based on their manufacturing position alone. There is literally no-one else who can do what they do, law or otherwise.

If it were a matter of laws, China would just declare that the law doesn't count in order to dodge around the US chip sanctions. Which, admittedly, might happen - but I don't see how that could result in much more freedom than we already have now. Having more Chinese people involved is generally good for prices, but that has less to do with market structure than with the fact that they work hard and do things at scale.

> The top AI companies use very restrictive licenses.

These models are supported by the Apache 2.0 license ~ https://openai.com/open-models/

Are they lying to me? It is hard to get much more permissive than Apache 2.

mike_hearn 9 days ago | parent [-]

The top AI companies don't release their best models under any license. They're not even distributed at all. If you did steal the weights out from underneath Anthropic they would take you to court and probably win. Putting software you develop exclusively behind a network interface is a form of ultra-restrictive DRM. Yes, some places are currently trying to buy mindshare by releasing free models and that's fantastic, thank you, but they can only do that because investors believe the ROI from proprietary firewalled models will more than fund it.

NVIDIA's advantage over AMD is largely in the drivers and CUDA, i.e. their software. If it weren't for IP law, or if NVIDIA had foolishly made their software fully open source, AMD could have just forked their PTX compiler and NVIDIA's advantage would never have been established. In turn that'd have meant they wouldn't have any special privileges at TSMC.

oblio 9 days ago | parent | prev [-]

I imagine a chunk of it is also covered by trade secrets and NDAs.

wmf 8 days ago | parent | prev | next [-]

I'm not opposed to codecs having patents but Chiariglione set up a system where each codec has as many patent holders as possible and any one of those patent holders could hold the entire world hostage. They should have set up the patent pool and pricing before developing each codec and not allowed any techniques in the standard that aren't part of the pool.

mschuster91 9 days ago | parent | prev | next [-]

> Who would develop those codecs? A good video coding engineer costs about 100-300k USD a year. The really good ones even more. You need a lot of them.

How about governments? Radar, lasers, microwaves: all offshoots of US military R&D.

There's nothing stopping either the US or European governments from stepping up and funding academic progress again.

rs186 8 days ago | parent [-]

Yeah, counting on governments to develop codecs optimized for fast-evolving applications like the web and live streaming is a great idea.

If we did that we would probably be stuck with low-bitrate 720p videos on YouTube.

mschuster91 8 days ago | parent [-]

> Yeah, counting on governments to develop codecs optimized for fast evolving applications for web and live streaming is a great idea.

Give universities the money and let them take care of the details.

rs186 8 days ago | parent [-]

It seems that you have a massive misunderstanding of how this works.

University research labs, usually with a team of no more than 10 people (at most 20), are good at producing early proof-of-concept work, but not at incredibly complex projects like creating an actual codec. They are not known for producing polished, mature commercial products that can be immediately used in the real world. They don't have the resources or the incentive to do so.

mschuster91 8 days ago | parent [-]

> They don't have the resources or the incentive to do so.

Of course they have. Guess how MP3 was developed: an offshoot of the German Fraunhofer Institute and FAU Erlangen-Nürnberg, amongst others.

The fact that no one seems even able to imagine how funding anything from the government could work (despite that era being just a few decades ago) is shocking.

[1] https://de.wikipedia.org/wiki/MP3

somethingsome 8 days ago | parent | prev [-]

Hey, I attend MPEG regularly (mostly lvc lately); there's a chance we’ve crossed paths!