▲ | Rudybega a day ago |
> what about the fact frontier labs are spending more compute on viral AI video slop and soon-to-be-obsoleted workplace usecases than research?

That's a bold claim, please cite your sources. It's hard to find precise sources on this for 2025, but Epoch AI has a pretty good summary for 2024 (with core estimates drawn from The Information and the NYT): https://epoch.ai/data-insights/openai-compute-spend

The most relevant quote: "These reports indicate that OpenAI spent $3 billion on training compute, $1.8 billion on inference compute, and $1 billion on research compute amortized over “multiple years”. For the purpose of this visualization, we estimate that the amortization schedule for research compute was two years, for $2 billion in research compute expenses incurred in 2024."

Unless you think that this rough breakdown has completely changed, I find it implausible that Sora and workplace usecases constitute ~42% of total training and inference spend (and I think you could argue a fair bit of that training spend is still "research" of a sort, which makes your statement even less plausible).
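For what it's worth, here's a quick sketch of where that ~42% likely comes from, assuming it's the research total divided by combined training and inference spend (my reconstruction of the arithmetic, using the Epoch AI figures quoted above):

```python
# Epoch AI's 2024 estimates for OpenAI compute spend (USD, billions)
training = 3.0
inference = 1.8
research = 2.0  # $1B/yr amortized over an assumed two years

# For "slop and workplace usecases" to out-spend research, they would
# need more than the $2B research total out of training + inference.
threshold = research / (training + inference)
print(f"{threshold:.0%}")  # → 42%
```

So the claim requires over two-fifths of all training and inference compute to be going to those usecases.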
▲ | BoorishBears a day ago |
Sorry, I'm giving too much credit to the reader here, I guess. "AI slop and workplace usecases" is a synecdoche for "anything that is not completing and then deploying AGI".

The cost of Sora 2 is not just the compute to run inference on videos; it's, for example, the ablations weighing human preference against general world-model performance for that architecture. It's the cost of rigorous safety and alignment post-training. It's the legal noise and risk that using IP in that manner creates. And in that vein, the anti-signal is stuff like the product work of verifying users to reduce content moderation.

These consumer usecases could be viewed as furthering the mission if they were more deeply targeted at collecting tons of human feedback, but these applications overwhelmingly are not architected to primarily serve that purpose. There's no training on API usage, there are barely any prompts collected for DPO except when they want to test a release for human preference, etc.

None of this noise and static has a place if you're serious about hitting AGI, or even believe you can on any reasonable timeline. You're positing that you can turn grains of sand into thinking, intelligent beings; ChatGPT erotica is not on the table.