therobots927 3 days ago

The problem is that if it's an engineering problem, then further advancement will rely on step-function discoveries like the transformer. There's no telling when the next breakthrough will come, or how many will be needed to achieve AGI.

In the meantime, I guess all the AI companies will just keep burning compute to get marginal improvements. Sounds like a solid plan! The craziest thing about all of this is that ML researchers should know better!! Anyone with extensive experience training models, small or large, knows that additional training data offers diminishing, asymptotic improvements.
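A minimal sketch of what "asymptotic improvements" looks like, assuming a Chinchilla-style power-law relationship between dataset size and loss; the constants E, A, and alpha here are made up purely for illustration, not fitted values:

    # Diminishing returns from more training data, assuming a power law:
    # loss(D) = E + A / D**alpha, where E is the irreducible loss floor.
    # All constants below are illustrative, not real measurements.
    E, A, alpha = 1.7, 400.0, 0.3

    def loss(tokens: float) -> float:
        """Hypothetical validation loss after training on `tokens` tokens."""
        return E + A / tokens**alpha

    for tokens in [1e9, 1e10, 1e11, 1e12, 1e13]:
        gain = loss(tokens / 10) - loss(tokens)  # improvement from the last 10x of data
        print(f"{tokens:.0e} tokens: loss {loss(tokens):.3f} (gain from last 10x: {gain:.3f})")

Each additional 10x of data buys roughly half the improvement of the previous 10x, and the loss never drops below the floor E. That's the shape of curve every practitioner has seen.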

kelnos 3 days ago

I think the LLM businesses as-is are potentially fine businesses. Certainly the compute cost of running and using them is very high, and not yet reflected in the prices companies like OpenAI and Anthropic are charging customers. It remains to be seen whether people will pay the real costs.

But even if LLMs are going to tap out at some point, and turn out to be a local maximum or dead end when it comes to taking steps toward AGI, I would still pay for Claude Code until and unless there's something better. Maybe a company like Anthropic is going to lead that research and build it, or maybe (probably) it's some group or company that doesn't exist yet.

JSR_FDED 3 days ago

“Potentially” is doing some heavy lifting here. As it stands, the valuations of these LLM businesses imply that they will be able to capture a lot of the generated value. But open-source/open-weights offerings and competition from China and elsewhere make me question that. So I agree these could be good businesses in theory, but I doubt whether the current business model is a good one.

therobots927 3 days ago

Anthropic isn't even breaking even, and even if they do become profitable, that's still a far cry from AGI.