| ▲ | woodruffw 3 hours ago |
| > I am managing projects in languages I am not fluent in—TypeScript, Rust and Go—and seem to be doing pretty well. This framing reminds me of the classic problem in media literacy: people can tell a journalistic source is poor when they're subject matter experts, but tend to assume the same source is at least passably good when they're less familiar with the subject. I've had the same experience as the author when doing web development with LLMs: it seems to do a pretty good job, at least compared to the mess I would make. But I'm not actually qualified to make that determination, and I think a nontrivial amount of perceived AI value derives from engineers assuming that they are. |
|
| ▲ | muglug 2 hours ago | parent | next [-] |
| Yup — this doesn't match my experience using Rust with Claude. I've spent 2.5 years writing Rust professionally, and I'm pretty good at it. Claude will hallucinate things about Rust code because it’s a statistical model, not a static analysis tool. When it’s able to create code that compiles, the code is invariably inefficient and ugly. But if you want it to generate chunks of usable and eloquent Python from scratch, it’s pretty decent. And, FWIW, I’m not fluent in Python. |
| |
▲ | micahscopes an hour ago | parent | next [-] | | With access to good MCP tools, I've had a really good experience using Claude Code to write Rust: https://news.ycombinator.com/item?id=44702820 | |
| ▲ | Mockapapella an hour ago | parent | prev | next [-] | | > When it’s able to create code that compiles, the code is invariably inefficient and ugly. Why not have static analysis tools on the other side of those generations that constrain how the LLM can write the code? | |
▲ | js2 2 hours ago | parent | prev | next [-] | | Hah... yeah, no, its Python isn't great. It's definitely workable and better than what I see from 9/10 junior engineers, but it tends to be pretty verbose and over-engineered. My repos all have pre-commit hooks which run the linters/formatters/type-checkers. Both Claude and Gemini will sometimes write code that won't get past mypy, and they'll then struggle to get it typed correctly before eventually bypassing the pre-commit check with `git commit -n`. I've had to add some fairly specific instructions to CLAUDE.md/GEMINI.md to get them to cut this out. Claude is better about following the rules. Gemini just flat out ignores instructions. I've also found Gemini is more likely to get stuck in a loop and give up. That said, I'm saying this after about 100 hours of experience with these LLMs. I'm sure they'll get better with their output and I'll get better with my input. | |
▲ | tayo42 an hour ago | parent | prev [-] | | > Claude will hallucinate things about Rust code because it's a statistical model, not a static analysis tool. I think that's the point of the article. Whether it's a dynamic language or a compiled one, it's going to hallucinate either way. But if you're vibe coding in a compiled language, the errors are caught earlier, so you can vibe-code them away before things blow up at runtime. | | |
▲ | muglug an hour ago | parent [-] | | Static analysis tools like rustc and clippy are powerful, but there are large classes of errors that escape those analyses — e.g. off-by-one errors. |
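[Editor's note: a minimal Rust sketch of the kind of bug described above — the function name and numbers are hypothetical, not from the thread. The code type-checks and passes rustc and clippy's default lints, yet the inclusive range (`..=`) makes it sum one element too many, an off-by-one no static analysis flags.]

```rust
/// Intended to sum the first `n` elements of `xs`.
/// The inclusive slice `..=n` actually takes n + 1 elements:
/// a classic off-by-one that compiles cleanly under rustc and
/// raises no default clippy warning.
fn sum_first_n(xs: &[i64], n: usize) -> i64 {
    xs[..=n].iter().sum() // bug: should be xs[..n]
}

fn main() {
    let xs = [1, 2, 3, 4];
    // Intended result for n = 2: 1 + 2 = 3. Actual: 1 + 2 + 3 = 6.
    println!("{}", sum_first_n(&xs, 2));
}
```

Only a test asserting the intended semantics (or a reviewer who knows what the function is supposed to do) catches this; the toolchain is satisfied either way.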
|
|
|
| ▲ | 3 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | bravesoul2 3 hours ago | parent | prev | next [-] |
That's why I only use it on stuff I can properly judge.
|
| ▲ | giantrobot 3 hours ago | parent | prev | next [-] |
| Gell-Mann Amnesia [0] [0] https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect |
| |
|
| ▲ | js2 2 hours ago | parent | prev [-] |
| After decades of writing software, I feel like I have a pretty good sense for "this can't possibly be idiomatic" in a new language. If I sniff something is off, I start Googling for reference code, large projects in that language, etc. You can also just ask the LLM: are you sure this is idiomatic? Of course it may lie to you... |
| |
▲ | NitpickLawyer an hour ago | parent | next [-] | | > You can also just ask the LLM: are you sure this is idiomatic? I've found the reverse flow to be better. Never argue; ask questions first. "What is the idiomatic way of doing x in y?" or "Describe idiomatic y when working on x" or similar. Then gather some of the "pedantic" generations and add them to your constraints, model.md, task.md, or whatever your setup uses. You can also use this in a feedback loop: "Here's a task and some code, here are some idiomatic concepts in y, please provide feedback on adherence to these standards". | |
| ▲ | woodruffw 2 hours ago | parent | prev [-] | | > If I sniff something is off, I start Googling for reference code, large projects in that language, etc. This works so long as you know how to ask the question. But it's been my experience that an LLM directed on a task will do something, and I don't even know how to frame its behavior in language in a way that would make sense to search for. (My experience here is with frontend in particular: I'm not much of a JS/TS/HTML/CSS person, and LLMs produce outputs that look really good to me. But I don't know how to even begin to verify that they are in fact good or idiomatic, since there's more often than not multiple layers of intermediating abstractions that I'm not already familiar with.) |
|