therobots927 | 3 days ago
The problem is that if it's an engineering problem, then further advancement will rely on step-function discoveries like the transformer. There's no telling when that next breakthrough will come or how many will be needed to achieve AGI. In the meantime, I guess all the AI companies will just keep burning compute to get marginal improvements. Sounds like a solid plan! The craziest thing about all of this is that ML researchers should know better! Anyone with extensive experience training models, small or large, knows that additional training data offers diminishing, asymptotic improvements.
kelnos | 3 days ago | parent
I think the LLM businesses as-is are potentially fine businesses. Certainly the compute cost of running and using them is very high, and not yet fully reflected in the prices companies like OpenAI and Anthropic are charging customers. It remains to be seen whether people will pay the real costs. But even if LLMs are going to tap out at some point, and turn out to be a local maximum or dead end on the path toward AGI, I would still pay for Claude Code until and unless there's something better. Maybe a company like Anthropic will lead that research and build it, or maybe (probably) it will be some group or company that doesn't exist yet.