BoorishBears a day ago

What about the fact that frontier labs are spending more compute on viral AI video slop and soon-to-be-obsolete workplace use cases than on research?

Even if you don't understand the technicals, surely you understand that if any party were on the verge of AGI, they wouldn't behave as these companies do?

echoangle a day ago | parent | next [-]

What does that tell you about AI in 100 years, though? We could have another AI winter, then a breakthrough, and maybe the same cycle a few more times, and still somehow get AGI at the end. I'm not saying it's likely, but you can't predict the far future from current companies.

BoorishBears a day ago | parent [-]

You're making the mistake of assuming the failure of the current companies would be separate from the failure of AI as a technology.

If we continue the regime where OpenAI gets paid to buy GPUs and they fail, we'll have a funding winter regardless of AI's progress.

I think there is a strong bull case for consumer AI but it looks nothing like AGI, and we're increasingly pricing in AGI-like advancements.

Rudybega a day ago | parent | prev | next [-]

> what about the fact frontier labs are spending more compute on viral AI video slop and soon-to-be-obsoleted workplace usecases than research?

That's a bold claim, please cite your sources.

It's hard to find precise sources on this for 2025, but Epoch AI has a pretty good summary for 2024 (with core estimates drawn from The Information and the NYT):

https://epoch.ai/data-insights/openai-compute-spend

The most relevant quote: "These reports indicate that OpenAI spent $3 billion on training compute, $1.8 billion on inference compute, and $1 billion on research compute amortized over “multiple years”. For the purpose of this visualization, we estimate that the amortization schedule for research compute was two years, for $2 billion in research compute expenses incurred in 2024."

Unless you think this rough breakdown has completely changed, I find it implausible that Sora and workplace use cases constitute ~42% of total training and inference spend (and I think you could argue a fair bit of that training spend is still "research" of a sort, which makes your statement even more implausible).
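The ~42% threshold follows directly from Epoch AI's 2024 estimates quoted above: for "slop and workplace use cases" to outspend research, they would have to account for more than the $2B research figure out of the $4.8B combined training and inference spend. A quick sketch of that arithmetic:

```python
# Rough check of the ~42% figure, using Epoch AI's 2024 estimates
# for OpenAI compute spend (all figures in billions of USD).
training = 3.0    # training compute
inference = 1.8   # inference compute
research = 2.0    # research compute ($1B amortized over two years)

# Share of training + inference spend that would have to go to
# non-research products for them to match research spend.
share = research / (training + inference)
print(f"{share:.1%}")  # 41.7%, i.e. roughly 42%
```

This is just the implied break-even point, not a claim about how OpenAI actually allocates that spend.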

BoorishBears a day ago | parent [-]

Sorry, I guess I'm giving too much credit to the reader here.

"AI slop and workplace usecases" is a synecdoche for "anything that is not completing then deploying AGI".

The cost of Sora 2 is not just the compute to run inference on videos. It's the ablations that weigh human preference against general world-model performance for that architecture, for example. It's the cost of rigorous safety and alignment post-training. It's the legal noise and risk that using IP in that manner invites.

And in that vein, the anti-signal is stuff like the product work of verifying users in order to relax content moderation.

These consumer use cases could be viewed as furthering the mission if they were deeply targeted at collecting tons of human feedback, but these applications overwhelmingly are not architected to serve that purpose: there's no training on API usage, and there are barely any prompts for DPO except when they want to test a release for human preference.

None of this noise and static has a place if you're serious about hitting AGI, or even believe you can on any reasonable timeline. If you're positing that you can turn grains of sand into thinking, intelligent beings, ChatGPT erotica is not on the table.

dwaltrip a day ago | parent | prev [-]

They don’t.

BoorishBears a day ago | parent [-]

Is that why Sam is on Twitter saying that people paying them $20 a month are their top compute priority, as they double compute in response to complaints about their not-AGI (a constant drain between deployments), and spend on things like post-training specifically to make the not-AGI compatible with outside brand sensibilities?