112233 16 hours ago

Ah yes, the $0.50/h support infrastructure from the places that cannot refuse the deal. "Frontier" LLMs currently cosplay a dunk with Google and late-stage Alzheimer's. Sure, they speed up brute-forcing the correct answer a lot by trying more likely texts. And? This overfed Markov chain doesn't need supporting infrastructure — it IS supporting infrastructure, for the cognitive something that is not being worked on prominently, because all resources are needed to feed the Markov chain.

The silence surrounding new LLM architectures is so loud that an abomination like "claw" gets prime airtime. Meanwhile, models keep being released. Maybe the next one will be the lucky draw. It was pure luck, finding out how well LLMs scale, in the first place. Why shouldn't the rest of the progress be luck-driven too?

Kerbal AGI program...

rl3 7 hours ago | parent [-]

Pretty much, it's just that these overfed Markov chains, when given a proper harness and agentic framework, are able to produce entire software projects in a fraction of the time it used to take.

Kerbal AGI program hits the nail on the head.