btown | 3 days ago
> AI workflows are, in practice, highly non-deterministic. While different versions of a compiler might give different outputs, they all promise to obey the spec of the language, and if they don’t, there’s a bug in the compiler. English has no similar spec.

AI coding shines when this is a good thing. For instance, say you have to adapt the results of one under-documented API into another. Coding agents like Claude Code can write a prototype, get the real-world results of that API, investigate the contents, write code that tries to adapt, test the results, rewrite that code, test again, rewrite again, test again, ad nauseam.

There are absolutely problem domains where this kind of iterative adaptation is slower than bespoke coding, where you already have the abstractions such that every line you write is a business-level decision that builds on years of your experience. Arguably, Geohot's low-level work on GPU-adjacent acceleration is a "wild west" where his intuition outstrips the value of experimentation. His advice is likely sound for him. If he's looking for a compiler for the highly detailed specifications that pop into his head, AI may not help him.

But for many, many use cases, the analogy is not a compiler; it is a talented junior developer who excels at perseverance, curiosity, commenting, and TDD. They will get stuck at times. They will create things that do not match the specification, and need to be code-reviewed like a hawk. But by and large, if the time-determining factor is not code review but tedious experimentation, they can provide tremendous leverage.
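To make the "adapt, test, rewrite" loop concrete, here is a minimal sketch of one iteration of it in Python. Everything here is hypothetical: `adapt`, `run_iteration`, the record shapes, and the validation rule stand in for whatever the real under-documented APIs return. The point is the structure — run the current adapter over real sampled responses, collect failures, and let the failures drive the next rewrite.

```python
def adapt(record):
    """Candidate adapter: map one API's record shape onto another's.
    In the agent workflow, this is the function that gets rewritten
    on each iteration."""
    return {
        "id": record["uuid"],
        "name": record.get("display_name", ""),
    }

def run_iteration(records, validate):
    """Run the adapter over sampled real responses; collect failures
    (exceptions or validation misses) to feed the next rewrite."""
    failures = []
    for record in records:
        try:
            out = adapt(record)
            if not validate(out):
                failures.append((record, out, "validation failed"))
        except Exception as exc:
            failures.append((record, None, repr(exc)))
    return failures

# Two sampled responses from the (hypothetical) source API.
# The second is missing a field -- exactly the kind of surprise
# that iterating against real data surfaces.
sample = [
    {"uuid": "a1", "display_name": "Ada"},
    {"uuid": "b2"},
]

def validate(out):
    return bool(out["id"]) and bool(out["name"])

failures = run_iteration(sample, validate)
print(f"{len(failures)} failure(s) this iteration")  # → 1 failure(s) this iteration
```

A human doing this by hand alternates between reading failure output and editing `adapt`; the agent just does the same loop tirelessly, which is where the "perseverance" leverage comes from.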