CommanderData 3 hours ago

All video models are terrible at consistency. Even closed source ones.

Seedance 2.0 and Kling 3 are regarded as the best closed-source video models we have. I subscribe to a few AI video subreddits, and the consensus at the moment is that they are good for anything except long-form videos with humans.

No surprise there: we're very good at spotting even the most subtle differences when looking at other people.

adenta an hour ago | parent

what subreddits do _you_ subscribe to?

I've been doing some content with people at https://industrialallusions.com

CommanderData an hour ago | parent

https://www.reddit.com/r/KlingAI_Videos/

https://www.reddit.com/r/HiggsfieldAI/

Higgsfield has multiple models available; people usually use Kling 2.5 and 3. There are a few good examples posted right now where you'll notice the subtle differences.

I have tried to generate things myself, and it's extremely hard to get more than 7-8 clips that are consistent; eventually you'll accept a compromise. I think that's why there isn't any long-form content being done yet. Getting good results is sometimes just "chance", regardless of how much reference data you have.
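For context, the usual trick for stretching consistency is to chain clips: carry each clip's last frame forward as the image reference for the next generation. A minimal Python sketch of that loop, where generate_clip() is a hypothetical stand-in for whatever provider SDK you use (not a real Kling or Seedance endpoint), so drift still accumulates, it just accumulates more slowly:

    from dataclasses import dataclass

    @dataclass
    class Clip:
        frames: list  # decoded frames; the last one seeds the next clip

    def generate_clip(prompt: str, reference_frame=None) -> Clip:
        """Hypothetical placeholder for a video-model call. Swap in your
        provider's SDK here; this is not any vendor's real API."""
        raise NotImplementedError("plug in your video model client")

    def chain_clips(prompts: list[str]) -> list[Clip]:
        clips, reference = [], None
        for prompt in prompts:
            clip = generate_clip(prompt, reference_frame=reference)
            clips.append(clip)
            reference = clip.frames[-1]  # carry identity into the next clip
        return clips

Each hop re-encodes the subject from a single frame, so small errors compound; that compounding is roughly why runs tend to fall apart after that 7-8 clip mark.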