Aedelon | 7 hours ago
This is the model that makes sense to me, and I'm surprised nobody at OpenAI pursued it. Yeah, a 4090 would take hours for 10 seconds of video, but people already do this: the SD/ComfyUI crowd runs overnight batch generations on consumer GPUs and doesn't care about latency. Charge for model access and let users burn their own power. Basically Llama, but for video (pun intended).

The reason it won't come from OpenAI is the deepfake problem. Distribute the weights and you lose all moderation. Sora already had a deepfake disaster WITH server-side controls; without any? Good luck. But yeah, for someone willing to go open-weights, there's a real business there.