dsiegel2275 | 3 days ago
So I have all kinds of problems with this post.

First, the assertion that the best model of "AI coding" is a compiler. Compilers deterministically map one formal language to another under a spec. LLM coding tools are search-based program synthesizers that retrieve, generate, and iteratively edit code under constraints (tests, types, linters, CI). That's why they can fix issues end-to-end on real repos (e.g., SWE-bench Verified), something a compiler doesn't do. Benchmarks now show top agents/models resolving large fractions of real GitHub issues, which is evidence of synthesis plus tool use, not compilation.

Second, the claim that the "programming language is English". Serious workflows aren't "just English." They use repo context, unit tests, typed APIs, JSON/function-calling schemas, diffs, and editor tools. The "prompt" is often code + tests + spec, with English as glue. The author attacks the weakest interface, not how people actually ship with these tools.

Third, non-determinism isn't disqualifying. Plenty of effective engineering tools are stochastic (fuzzers, search/optimization, SAT/SMT with heuristics). Determinism comes from external specs: unit/integration tests, type systems, property-based tests, CI gates.

Fourth, the false dichotomy that "LLMs are popular only because languages/libraries are bad." Languages are improving (e.g. Rust, TypeScript), yet LLMs still help because the real bottlenecks are API lookup, cross-repo reading, boilerplate, migrations, test writing, and refactors, the areas where retrieval and synthesis shine. These are complementary forces, not substitutes.

Finally, no constructive alternatives are offered. "Build better compilers/languages" is fine, but modern teams already get value by pairing those with AI: spec-first prompts, test-gated edits, typed SDK scaffolds, auto-generated tests, CI-verified refactors, and repo-aware agents.

A much better way to think about AI coding is that LLMs aren't compilers. They're probabilistic code synthesizers guided by your constraints (types, tests, CI). Treat them like a junior pair-programmer wired into your repo, search, and toolchain, not like a magical English compiler.
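The point about determinism coming from external specs can be made concrete with a property-based test: it pins down *what* a correct output looks like without caring *how* the code (human- or LLM-written) produced it. A minimal sketch using only the standard library; `my_sort` and `check_sort_properties` are illustrative names, with `my_sort` standing in for whatever the synthesizer generated:

```python
import random
from collections import Counter

def my_sort(xs):
    # Stand-in for whatever code an LLM (or a human) produced.
    return sorted(xs)

def check_sort_properties(sort_fn, trials=1000):
    """External spec: output must be ordered and a permutation of the input."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        ys = sort_fn(list(xs))  # pass a copy so in-place mutation can't cheat
        assert all(a <= b for a, b in zip(ys, ys[1:])), "output not ordered"
        assert Counter(ys) == Counter(xs), "output not a permutation of input"
    return True

check_sort_properties(my_sort)
```

The spec never mentions an algorithm; swap in any implementation, deterministic or not, and the gate still decides correctness.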
georgehotz | 3 days ago
Author here. I agree with this comment, but if I wrote more like this my blog post would get less traction.

"LLM coding tools are search-based program synthesizers": in my mind this is what compilers are. I think most compilers do far too little search and opt for heuristics instead, often because they don't have an integrated runtime environment, but it's the same idea.

"Plenty of effective engineering tools are stochastic": sure, but while a SAT solver might use randomness, and that might change your time to solve, it doesn't change the correctness of the result. And something like a fuzzer is a test, and tests are always more of a best-effort thing. I haven't seen a fuzzer deployed in prod.

"Determinism comes from external specs and tests": my dream is a language where I can specify what it does instead of how it does it. Like the concept of Halide's schedule, but more generic. The computer can spend its time figuring out the how. And I think this is the kind of tool AI will deliver. Maybe it'll be with LLMs, maybe it'll be something else, but the key is that you need a fairly rigorous spec, and that spec itself is the programming. The spec can even be constraint-based instead of needing to specify all behavior.

I'm not at all against AI, and if you are using it at the level described in this post, like a tool, aware of its strengths and limitations, I think it can be a great addition to a workflow. I'm against the idea that it's a magical English compiler, which is what I see in public discourse.
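The Halide analogy can be sketched in miniature: the algorithm (the "what") is written once as a spec, while several "schedules" (the "how") are candidate execution strategies, and the machine rejects incorrect ones and keeps the fastest. This is an illustrative toy, not Halide's actual API; all names here (`spec`, `how_naive`, `how_chunked`, `autotune`) are made up:

```python
import time

# The "what": a spec fixed once; any implementation must match it.
def spec(xs):
    return sum(x * x for x in xs)

# Two candidate "hows" (schedules): different strategies, same semantics.
def how_naive(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

def how_chunked(xs, chunk=1024):
    return sum(sum(x * x for x in xs[i:i + chunk])
               for i in range(0, len(xs), chunk))

def autotune(candidates, xs):
    """Drop candidates that disagree with the spec, then pick the fastest."""
    expected = spec(xs)
    best, best_time = None, float("inf")
    for fn in candidates:
        if fn(xs) != expected:   # reject incorrect schedules outright
            continue
        start = time.perf_counter()
        fn(xs)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = fn, elapsed
    return best

data = list(range(10_000))
winner = autotune([how_naive, how_chunked], data)
```

The programmer owns `spec`; the search over hows, whether done by an autotuner or an LLM, can be as stochastic as it likes, because correctness is checked against the spec, not trusted.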
inimino | 2 days ago
People knock "English as a programming language", but in my opinion this is the whole value of AI programming: by the time you've expressed your design and constraints well enough that an LLM can understand them, anyone can understand them, and you end up with a codebase that's far more maintainable than what we're used to.

The problem, of course, is when people throw away the prompt and keep the code, as if the code were somehow valuable. This would be like everyone checking in their binaries and throwing away their source every time, while arguments rage on HN about whether compilers are useful. (Meanwhile, compiler vendors compete on their ability to disassemble and alter binaries in response to partial code snippets.)

The right way to do AI programming: English defines the program, and generated code is exactly as valuable as compiler output, i.e. it's the actual artifact that does the thing, so in one sense it's the whole point, but iterating on it or studying it in detail is a waste of time, except occasionally when debugging. It's going to take a while, but eventually this will be the only way anybody writes code. (Note: I may be biased, as I've built an AI programming tool.)

If you can explain what needs to be done to a junior programmer in less time than it takes to do it yourself, you can benefit from AI. But it does require totally rethinking the programming workflow and tooling.
mccoyb | 3 days ago
Excellent response, completely agree.
intothemild | 3 days ago
It's not surprising that you're finding problems with the article. It's written by George Hotz aka Geohot.