basilikum 4 hours ago

ffmpeg already has network capabilities. You can have it open a TCP socket, stream input from there, and write output to another TCP socket. How is this different? Is it just a convenience wrapper around that functionality, or does it provide fundamentally new features?
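
The built-in capability being referred to looks roughly like this (a hedged sketch: the ports, addresses, and codecs here are placeholders, not anything from the project under discussion; `tcp://...?listen` is ffmpeg's standard TCP listener syntax):

```shell
# Listener: wait for a connection on port 9000, transcode, and write
# the result to another TCP socket on port 9001.
ffmpeg -f mpegts -i "tcp://127.0.0.1:9000?listen" \
       -c:v libx264 -c:a aac \
       -f mpegts "tcp://127.0.0.1:9001"

# From another shell, feed the listener by streaming a local file
# into the first socket at realtime rate (-re).
ffmpeg -re -i input.mp4 -c copy -f mpegts "tcp://127.0.0.1:9000"
```
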

steelbrain 4 hours ago | parent [-]

Using ffmpeg's native network capabilities in a media-server use case, where you need to stream your input to it and then get multiple outputs (think HLS) streamed back, is not possible at this point in time. HTTP, FTP, and SFTP all have their limitations: some are outright broken for HLS use cases, and others won't stream seeking.

I would have very much loved to use the built-in capabilities instead of patching ffmpeg to add a VFS layer and spending a ton of time figuring out the build pipeline once you add all the codecs and hwaccels. I do hope to change this in the future; I've identified several bugs that I intend to submit patches for.

halayli an hour ago | parent [-]

This is not a special case. Everything you mentioned above can actually be achieved with the CLI. You can create listeners, configure pipelines, and set up sinks (granted, not ergonomically). Sinks can be HTTP POST, for example, and sources can be TCP listeners plus protocols on top. You can also configure the buffering strategies for each pipeline.
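
A sketch of the CLI-only pipeline described above (the host, port, segment settings, and URL are illustrative placeholders; `-method` on the HLS muxer and `?listen` on the tcp protocol are real ffmpeg options):

```shell
# Source: a TCP listener accepting an MPEG-TS stream.
# Sink: HLS playlist and segments uploaded to an HTTP endpoint via PUT.
ffmpeg -f mpegts -i "tcp://0.0.0.0:9000?listen" \
       -c:v libx264 -c:a aac \
       -f hls -hls_time 4 -hls_list_size 6 \
       -method PUT "http://media-server.example/live/stream.m3u8"
```

Per-connection buffering can be tuned with tcp protocol options such as `recv_buffer_size` and `send_buffer_size` appended to the URL query string.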