amelius 15 hours ago

There's still a bug: the glass with water does not distort the checker pattern in the background at 24:12.

jweir 14 hours ago | parent | next [-]

True, but with visual art there is what is correct and what looks correct. When things are moving and the area is small, no one is going to notice.

But now that this problem is solved, a director will come along and say... I want a scene with a big glass of water, and the camera will zoom in on it and we'll see the monster refracted through the glass.

gmueckl 14 hours ago | parent [-]

At that point it's better to do the glass entirely in post.

nstart 8 hours ago | parent | prev | next [-]

Good spot! That is the product working as intended though. The background doesn't exist except as an asset that replaces the green screen. The tool is meant to replace the green screen without the need for manual rotoscoping. Even in a traditional process, the distortion needs to be done by VFX as a separate process. To do that though, they still need the green screen keyed out and this tool does that.

CharlesW 14 hours ago | parent | prev | next [-]

When you watch the video it becomes pretty clear why it wouldn't be able to do that, although it's fun to think about how a future iteration or an alternative might someday credibly mimic it (if you don't look too hard).

catapart 14 hours ago | parent | prev | next [-]

I wouldn't call it a bug. This is a first step, not a final step. Maintaining the refraction might be more realistic, but it's not necessarily what the creator wants.

DrewADesign 14 hours ago | parent | prev | next [-]

You’d have to track it, render it, and comp it in. It’s not ridiculously difficult, but there’s no way that’s going to happen automatically.

orbital-decay 14 hours ago | parent [-]

>there’s no way that’s going to happen automatically

They train their model in a pretty straightforward way, and it could also be used to capture the distortion: just use a non-monochrome (possibly moving) background optimized for this. It's a matter of effort and attention to detail during training (uneven green screen lighting, reflections, etc.), not fundamental impossibility.

amelius 13 hours ago | parent [-]

Yes. But the main issue is in the way they formulate the problem. Their output is always a transparency mask, which of course will never handle distortions.
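To make that concrete, here's a minimal NumPy sketch (my own illustration, not the product's actual pipeline). Standard alpha-over compositing is a purely per-pixel blend: each output pixel can only mix the foreground and background values at the same coordinate, so no choice of matte values can reproduce refraction, which requires sampling the background at a displaced coordinate.

```python
import numpy as np

# Toy 1x4 "images": foreground (glass), background (checker), alpha matte.
fg = np.array([[0.8, 0.8, 0.8, 0.8]])
bg = np.array([[0.0, 1.0, 0.0, 1.0]])     # checker pattern
alpha = np.array([[0.0, 0.5, 0.5, 0.0]])  # semi-transparent glass in the middle

# Standard alpha-over: a purely per-pixel blend.
out = alpha * fg + (1.0 - alpha) * bg

# out[i] only ever mixes fg[i] with bg[i] at the SAME coordinate i.
# Refraction would need something like bg[i + offset], which no alpha
# matte can express, no matter how it is trained.
```

However good the matte gets, the formulation itself has no way to bend light; that's why the checker pattern stays undistorted behind the glass.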

dgently7 7 hours ago | parent | next [-]

You'd have to train it to also generate an ST map of the distortions, but creating the ground-truth version of that from the synthetic data would add a lot more to render. Also, it's very easy to plausibly fake: it's not something humans are good at seeing and knowing is wrong. You can tell when it's completely missing, but accurate vs. just distorted in a plausible way is not something most brains are tuned to notice.
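For anyone unfamiliar with ST maps: they store, per pixel, the (s, t) coordinate to sample the undistorted image at, so a compositor can apply the refraction as a resampling step. A tiny NumPy sketch (my own toy example, with a made-up one-pixel shift standing in for a real refraction field):

```python
import numpy as np

H, W = 4, 8

# Checkerboard background.
ys, xs = np.mgrid[0:H, 0:W]
bg = ((xs // 2 + ys // 2) % 2).astype(float)

# Identity ST map: each output pixel samples its own coordinate.
u, v = xs.copy(), ys.copy()

# Fake a refractive region: columns 3..5 sample one pixel to the right,
# as if a glass object sat there bending the light.
u[:, 3:6] += 1
u = np.clip(u, 0, W - 1)

# Apply the distortion by resampling the background through the ST map.
distorted = bg[v, u]
```

A real pipeline would store the map as a float image and resample with interpolation, but the principle is the same: the distortion lives in the coordinates, not in a transparency value.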

DrewADesign 12 hours ago | parent | prev [-]

Right. Things like this are why it's difficult to integrate AI into professional movie pipelines: they're super complex in ways AI cannot (yet) replicate, for very good reasons that seem superfluous or trivially replaceable to people not familiar with them.

orbital-decay 7 hours ago | parent [-]

People in ML have this kind of belief, rooted in the bitter lesson, that everything will eventually sort itself out given enough scale and data. That often makes them ignore the nuances of particular problem domains. CC is the opposite of that; it's just impossible to do everything at once.

orbital-decay 14 hours ago | parent | prev [-]

Sure, because they used monochrome backgrounds and never really captured any distortion.