starchild3001 3 days ago

I think this essay lands on a useful framing, even if you don’t buy every one of its prescriptions. If we zoom out, history shows two things happening in parallel: (1) brute-force scaling driving surprising leaps, and (2) system-level engineering figuring out how to harness those leaps reliably. GPUs themselves are a good analogy: Moore’s Law gave us the raw FLOPs, but CUDA, memory hierarchies, and driver stacks are what made them usable at scale.

Right now, LLMs feel like they’re at the stage raw FLOPs once were: impressive, but unwieldy. You can already see the beginnings of "systems thinking" in products like Claude Code, tool-augmented agents, and memory-augmented frameworks. They’re crude, but they point toward a future where orchestration matters as much as parameter count.

I don’t think the "bitter lesson" and the "engineering problem" thesis are mutually exclusive. The bitter lesson tells us that compute + general methods win out over handcrafted rules. The engineering thesis is about how to wrap those general methods in scaffolding that gives them persistence, reliability, and composability. Without that scaffolding, we’ll keep getting flashy demos that break when you push them past a few turns of reasoning.
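To make "scaffolding" concrete, here’s a toy sketch of what I mean: a plain loop that wraps a generic model call with persistent memory, tool dispatch, and a retry for reliability. This is an illustration, not any real framework’s API; call_model, the tool names, and the JSON action format are all hypothetical stand-ins.

```python
import json

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call. Swap in a real client here."""
    # Canned response so the sketch runs end to end.
    return json.dumps({"action": "final", "answer": "42"})

# Hypothetical tools the loop can dispatch to.
TOOLS = {
    "search": lambda q: f"stub results for {q!r}",
    "calculator": lambda expr: str(eval(expr)),  # toy example only, not safe for real input
}

def run_agent(task: str, max_turns: int = 5) -> str:
    memory = [f"task: {task}"]                   # persistence across turns
    for _ in range(max_turns):
        prompt = "\n".join(memory)
        for attempt in range(2):                 # cheap reliability: retry malformed output once
            try:
                step = json.loads(call_model(prompt))
                break
            except json.JSONDecodeError:
                if attempt == 1:
                    raise
        if step["action"] == "final":
            return step["answer"]
        # Otherwise dispatch to a tool and fold the result back into memory.
        result = TOOLS[step["action"]](step.get("input", ""))
        memory.append(f"{step['action']} -> {result}")
    return "gave up after max_turns"

print(run_agent("What is 6 * 7?"))
```

The interesting part isn’t the loop itself, it’s that persistence, reliability, and composability all live outside the model: the same scaffolding works whether the model underneath is big or small.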

So maybe the real path forward is not "bigger vs. smarter," but bigger + engineered smarter. Scaling gives you raw capability; engineering decides whether that capability can be used in a way that looks like general intelligence instead of memoryless autocomplete.