utopiah 2 days ago

Have to admit, ffmpeg syntax is not trivial... but the project is also 24 years old and is basically the de facto industry standard. If you believe you will still be editing videos in 20 years with the CLI (or any other tool or programming language wrapping it), then it's probably worth a few hours to learn how it actually works.

esperent 2 days ago | parent | next [-]

The syntax isn't too bad. The problem is that I have to use it a couple of times a year, on average. So every time I've forgotten and have to relearn. This doesn't happen with GUIs nearly as much, and it's why I prefer them over CLI tools for anything that I don't do at least once every week or two.

skydhash 2 days ago | parent [-]

That’s why you write scripts, or put a couple snippets in your notes.
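
For example, the kind of snippet worth keeping around might be something like this (a hypothetical example; adjust the extension to your codecs):

    # extract-audio.sh -- pull the audio track out of a video without re-encoding
    # usage: ./extract-audio.sh input.mp4
    # note: the .m4a extension assumes the source audio is AAC
    ffmpeg -i "$1" -vn -c:a copy "${1%.*}.m4a"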

esperent 2 days ago | parent [-]

I do have snippets in my notes. The problem is that nearly every time I use it, I need to do something different than the previous time.

bigiain 2 days ago | parent [-]

I started making sure my notes included the search keywords/keyphrases/terminology I used and any webpages or documentation that I successfully used to come up with the solution (and, where relevant, any search queries or misunderstandings on my part that led to dead ends).

This hasn't solved the problem of sometimes needing to do new things, but it at least gives me a map to remind me of the parts of the rabbithole I've explored before.

Sean-Der 2 days ago | parent | prev | next [-]

My question/curiosity is: why do so many people use ffmpeg (and get frustrated by the syntax) when GStreamer is available?

With `gst-launch-1.0 filesrc ! qtdemux ! matroskamux ! filesink...`, maybe people would be less frustrated?

People would also learn a little more and be less frustrated when conversations about containers/codecs/colorspaces etc. come up. Each has a dedicated element, and you can better understand its I/O.
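
For example, a simple remux might look something like this (a sketch, assuming the mkv carries H.264 video; gst-launch handles the demuxer's dynamic pads automatically in simple cases like this):

    # remux H.264 video from Matroska into MP4 without re-encoding
    gst-launch-1.0 filesrc location=in.mkv ! matroskademux ! h264parse ! mp4mux ! filesink location=out.mp4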

throwaway2046 2 days ago | parent | next [-]

I haven't tried GStreamer but I found FFmpeg to be extremely easy to compile as both a command line tool and library, plus it can do so much out of the box even without external libraries being present. It's already used in pretty much everything and does the job so it never occurred to me (or others) to look for alternatives.
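
For what it's worth, a bare-bones build from a source checkout is just something like this (external encoders and the like need extra --enable-* flags):

    ./configure --prefix="$HOME/.local"
    make -j"$(nproc)"
    make install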

thisislife2 a day ago | parent | prev | next [-]

Handbrake ( https://handbrake.fr/ ) and AviDemux ( https://www.avidemux.org/ ) are what the average user needs. Subler ( https://subler.org/ ) is also a good macOS app for muxing and tagging.

artpar 2 days ago | parent | prev [-]

I did not know GStreamer wasm also exists; I'll check it out.

goeiedaggoeie 2 days ago | parent [-]

Still has a way to go, but very exciting.

jack_pp 2 days ago | parent | prev | next [-]

I agree. I suggest using this instead: https://github.com/kkroening/ffmpeg-python . While not perfect, once you figure it out it is far easier to use, and you can wrap more complicated workflows and reuse them later.
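
To give a flavor, a scale-and-transcode looks roughly like this (going from the project's README-style API; details may vary):

    import ffmpeg  # kkroening's ffmpeg-python

    stream = ffmpeg.input('input.mp4')
    video = stream.video.filter('scale', 1280, -1)  # builds a filtergraph node
    audio = stream.audio
    ffmpeg.output(video, audio, 'output.mp4').run()  # assembles the CLI args and runs ffmpeg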

poly2it 2 days ago | parent [-]

Kkroening's wrapper has been inactive for some time. I suggest using https://github.com/jonghwanhyeon/python-ffmpeg instead. It has proper async support and a better API.
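
Roughly, the same kind of job there looks like this (from my reading of its docs; double-check the option names):

    from ffmpeg import FFmpeg  # jonghwanhyeon's python-ffmpeg

    job = (
        FFmpeg()
        .option('y')  # overwrite output if it exists
        .input('input.mp4')
        .output('output.mp4', vf='scale=1280:-1', crf=24)
    )
    job.execute()  # an async variant lives under ffmpeg.asyncio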

jack_pp 2 days ago | parent | next [-]

Scratch that, I thought it was a different version. The one you linked has no support for filtergraphs, so it isn't even comparable to the old one.

jack_pp 2 days ago | parent | prev [-]

Thing is, if you want to use LLMs for mockups, you've got to use the old one.

artpar 2 days ago | parent | prev | next [-]

I think that goes for almost every tool you want to use with an LLM. Ideally the user should already know the tool, so mistakes by the LLM can be caught before they happen.

Here, making ffmpeg "just another capability" allows it to be stitched into larger workflows.

shardullavekar 2 days ago | parent | prev | next [-]

True, companies like Descript, Veed, or Kapwing exist because non-coders find this syntax intimidating. Plus, a CLI tool sits outside their workflow. We wanted to change that.

petetnt 2 days ago | parent [-]

Don't "no coders" find the concepts described in this article imdimitating?

The article states that whatever it is trying to describe "Takes about ~20-30 mins. The cognitive load is high...", while its literal actual step of Googling "ffmpeg combine static image and audio" gives you the exact command you need to run, from a known source (superuser.com, sourced from the ffmpeg wiki).
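
For reference, the answer that search turns up is essentially a one-liner along these lines (per the ffmpeg wiki; exact codec and bitrate choices vary by answer):

    ffmpeg -loop 1 -i image.png -i audio.mp3 -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4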

Anyone even slightly familiar with ffmpeg should be able to produce the same result in minutes. For someone who doesn't understand what ffmpeg is, the article means absolutely nothing. How does a "non-coder" understand what an "agent in a sandboxed container" is?

shardullavekar 2 days ago | parent [-]

We took a basic example and described it (we'll try adding a complex one).

We have our designer/intern in mind, who creates shorts, adds subtitles, crops them, and merges the generated audio. He is aware of ffmpeg and prefers using a SaaS UI on top of it.

However, we see him hanging out on ChatGPT or Gemini all the time. He is literally the non-coder we have in mind.

We just combined his "type what you want" habit with his ffmpeg workflows.

EraYaN 2 days ago | parent [-]

Wouldn't that intern just use an NLE (be it Premiere, DaVinci Resolve, etc.)? If you need to style subtitles and edit shorts and video content, you'll need a proper editor anyway.

shardullavekar 2 days ago | parent [-]

1. Download a larger video from S3. 2. Use the NLE to cut it into shorts (crop, resize, subtitles, etc.). 3. Upload the shorts to YouTube, Instagram, TikTok.

He does use davinci resolve but only for 2.

An NLE, however easy to use, is a standalone tool.

Not denying that the major heavy lifting is done by the NLE. We go a step further and make it embeddable in a larger workflow.

javier2 2 days ago | parent | prev | next [-]

ffmpeg is pretty complicated, but at least it actually works.

somat 2 days ago | parent | prev | next [-]

The thing that helped me get over that ffmpeg bump, where you go from copying Stack Overflow answers to actually sort of understanding what you are doing, is the fairly recent external-file include syntax. On the surface it is such a minor thing, but mentally it let me turn what was a confusing mess into a programming language. There are a couple of ways to invoke it, but the one I used was to load the whole file as an arg. Note the slash; it is important: "-/filter_complex filter_file"

https://ffmpeg.org/ffmpeg-filters.html#toc-Filtergraph-synta...

"A special syntax implemented in the ffmpeg CLI tool allows loading option values from files. This is done be prepending a slash ’/’ to the option name, then the supplied value is interpreted as a path from which the actual value is loaded."

Given how critical that was to getting over my ffmpeg hump, I wish it weren't buried halfway through the documentation; but then again, I don't know where else it would go.
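
To illustrate the idea before the big example below, a trivial filter file and its invocation would look something like this (a toy sketch, video-only for brevity):

    # contents of flip.filter
    [0:v] hflip [out]

    # the leading slash tells ffmpeg to read the option's value from that file
    ffmpeg -i in.mp4 -/filter_complex flip.filter -map '[out]' out.mp4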

And just because I am very proud of my accomplishment, here is the ffmpeg side of my project: motion detection using mainly ffmpeg. There is some Python glue logic to watch stdout for the events, but all the tricky bits are internal to ffmpeg.

The filter (comments added for the audience's benefit):

    [0:v]
    split  #split the camera feed into two parts, passthrough and motion
        [vis],
    scale=   #scale the motion feed way down, less cpu and it works better
        w=iw/4:
        h=-1,
    format= #needed because blend did not work as expected with yuv
        gbrp,
    tmix= #temporal blur to reduce artifacts
        frames=2,
    [1:v]  #the mask frame
    blend= #mask the motion feed
        all_mode=darken,
    tblend= #motion detect actual, the difference from the last frame
        all_mode=difference,
    boxblur= #blur the hell out of it to increase the number of motion pixels
        lr=20,
    maskfun= #mask it to black and white
        low=3:
        high=3,
    negate, #make the motion pixels black
    blackframe= #puts events on stdout when too many black pixels are found
        amount=1
        [motion]; #motion output
    [vis] 
    tpad= #delay pass through so you get the start of the event when notified
        start=30
        [original]; #passthrough output
and the ffmpeg invocation:

    ff_args = [
      'ffmpeg',
      '-nostats',
      '-an',
      '-i',
      camera_loc, #a security camera
      '-i',
      'zone_all.png', # mask of which parts are relevant for motion detection
      '-/filter_complex',
      'motion_display.filter', #the filter doing all the work
      '-map',  #sort out the outputs from the filter
      '[original]',
      '-f',
      'mpegts', #I feel a little weird using mpegts, but it was the best "streaming" format of all the ones I tried
      'udp://127.0.0.1:8888',  #collect the full video from here
      '-map',
      '[motion]',
      '-f',
      'mpegts',
      'udp://127.0.0.1:8889', #collect the motion output from here, mainly for debugging
      ]