| ▲ | theflyinghorse 4 days ago |
| I keep thinking that LLMs might bring writing code in these lower-level-but-far-better-performing languages back into vogue. Why have Claude generate a Python service when you could write a Rust or C3 service, with the compiler doing a lot of the heavy lifting around memory bugs? |
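As an illustration of the kind of heavy lifting meant here (a minimal sketch, not from the comment itself): Rust's borrow checker rejects at compile time what would be a silent dangling-pointer bug in C or C++.

```rust
fn main() {
    let mut names = vec![String::from("ada"), String::from("grace")];
    let first = &names[0]; // immutable borrow of the vector

    // names.push(String::from("linus"));
    // ^ uncommenting this is rejected at compile time (error E0502):
    //   the push may reallocate and invalidate `first`, which in
    //   C or C++ would be a silent dangling-pointer bug.

    println!("first name: {first}");
}
```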
|
| ▲ | dotancohen 4 days ago | parent | next [-] |
| > Why have Claude generate a Python service when you could write a Rust or C3 service, with the compiler doing a lot of the heavy lifting around memory bugs?
The architecture of my current project is actually a Python/Qt application that is a thin wrapper around an LLM-generated Rust application. I go over almost every line of the LLM-generated Rust myself, but that machine is far more skilled at generating quality Rust than I currently am. But I am using this as an opportunity to learn. |
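One possible shape for that split (the comment doesn't say how the Python/Qt side talks to the Rust side, so the line-oriented protocol below is purely an assumed example): the Rust application runs as a subprocess that the Qt frontend drives over stdin/stdout.

```rust
use std::io::{self, BufRead, Write};

// Hypothetical Rust half of a Python/Qt-wrapper architecture: read one
// request per line on stdin, answer on stdout, so the frontend can run
// it via QProcess or subprocess. The protocol is an assumption, not
// taken from the comment above.
fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    for line in stdin.lock().lines() {
        let request = line?;
        // Real work would go here; echoing keeps the sketch minimal.
        writeln!(stdout, "ok: {request}")?;
        stdout.flush()?;
    }
    Ok(())
}
```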
| |
| ▲ | all2 4 days ago | parent [-] | | > that machine is far more skilled at generating quality Rust than I currently am. But I am using this as an opportunity to learn. I'm currently doing this with golang. It is not that bad of an experience. LLMs do struggle with concurrency, though. My current project has proved to be pretty challenging for LLMs to chew through. |
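On the concurrency point, part of the appeal of a stricter compiler (shown in Rust here rather than the commenter's Go, as an illustrative sketch only) is that the classic mistake of mutating shared state from several threads without synchronization doesn't compile, so the model gets pushed toward something like this:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter behind Arc<Mutex<..>>. Dropping the Mutex and
    // mutating the value directly from the spawned threads would be a
    // compile error in Rust, whereas the equivalent unsynchronized Go
    // code compiles and races silently.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("total = {}", *counter.lock().unwrap());
}
```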
|
|
| ▲ | notimetorelax 4 days ago | parent | prev | next [-] |
| Having worked with Rust for the past couple of years, I can say that it is hands down a much better fit for LLMs than Python, thanks to its explicitness and type information. This provides a lot of context for the LLM to incrementally grow the codebase.
You still have to watch it, of course. But the experience is very pleasant. |
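To make the "explicitness and type information" point concrete (an invented example, not the commenter's code): in Rust the signature alone tells a reader or a model what a function needs and every way it can fail, and an incremental change that breaks that contract is a compile error rather than a runtime surprise.

```rust
use std::num::ParseIntError;

// Every failure mode is spelled out in the types; a caller (human or
// LLM) can't silently ignore one the way an untyped exception allows.
#[derive(Debug)]
enum ConfigError {
    Missing(&'static str),
    BadPort(ParseIntError),
}

fn parse_port(raw: Option<&str>) -> Result<u16, ConfigError> {
    let raw = raw.ok_or(ConfigError::Missing("port"))?;
    raw.parse::<u16>().map_err(ConfigError::BadPort)
}

fn main() {
    println!("{:?}", parse_port(Some("8080"))); // Ok(8080)
    println!("{:?}", parse_port(Some("http"))); // Err(BadPort(..))
    println!("{:?}", parse_port(None));         // Err(Missing("port"))
}
```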
|
| ▲ | klysm 4 days ago | parent | prev | next [-] |
| Because there's more Python on the internet to interpolate from. LLMs are not equally good at all languages. |
| |
| ▲ | yencabulator 4 days ago | parent | next [-] | | You can throw Claude at a completely private Rust codebase with very specific niche requirements and conventions that are not otherwise common in Rust, and it will demonstrate a remarkably strong ability to explain it and program according to the local idioms. I think your statement is based on liking a popular language, not on evidence. | | |
| ▲ | all2 4 days ago | parent [-] | | I find that having a code-base properly scaffolded really, really helps a model handle implementing new features or performing bug-fixes. There's this grey area between greenfield and established that I hit every time I try to take a new project to a more stable state. I'm still trying to sort out how to get through that grey area. | | |
| ▲ | yencabulator 4 days ago | parent [-] | | I had Claude nearly one-shot (well, a sequence of one-shots) a fairly complex multi-language file pretty-printer, but only after giving it a very specific 150-line TODO file with examples of correct results, so I think pure greenfield is very achievable if you steer it well enough. I did have to really focus on writing the tasks so that there wasn't much room for going off the rails, think about their ordering, and so on; it was pretty far from vibecoding, and it produced a strict data-driven test suite. But ultimately I agree with you: in most projects, having enough existing style, arranged in a fairly specific way, for Claude to imitate makes results a lot better. Or at least, until you get to that "good-looking codebase", you have to steer it a lot more explicitly, to the level of telling it which function signatures to use, which files to edit, and so on. Currently, on another project, I've had Claude make ~10 development spikes on ~5 specific high-uncertainty features on separate branches, without ever telling it what the main project structure really is. Some of the spikes implement the same functionality with e.g. different libraries, as I'm exploring my options (ML inference as a library is still a shitshow). I think that approach has some whiff of "future of programming" to it. Previously I would have spent more effort studying the frameworks up front and committed to a choice harder; now it's "let's see if this is good enough". |
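The "strict data-driven test suite" mentioned here can be as small as a table of input/expected pairs that one test walks through; a minimal sketch (the cases and the `pretty_print` stand-in are invented, not from the project described):

```rust
// Placeholder formatter standing in for the real pretty-printer.
fn pretty_print(input: &str) -> String {
    input.split_whitespace().collect::<Vec<_>>().join(" ")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn golden_cases() {
        // Data-driven: adding a case is one more line in the table.
        let cases = [
            ("fn  main( ){}", "fn main( ){}"),
            ("a\n\tb", "a b"),
            ("already clean", "already clean"),
        ];
        for (input, expected) in cases {
            assert_eq!(pretty_print(input), expected, "input: {input:?}");
        }
    }
}
```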
|
| |
| ▲ | venuur 4 days ago | parent | prev [-] | | That’s been my experience. LLMs excel at languages that are popular. JavaScript and Python are two great examples. |
|
|
| ▲ | sonnig 4 days ago | parent | prev | next [-] |
| I think the same. It sounds much more practical to have LLMs code in languages whose compilers provide as many compile-time guardrails as possible (Rust, Haskell?). Ironically, in some ways this applies to humans writing code as well, but there you run into the (IMO very small) problem of having to write a bit more code than with more dynamic languages. |
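One concrete flavor of those guardrails (an illustrative Rust example; Haskell's exhaustive pattern matching gives the same effect): when an LLM, or a human, later adds a variant to an enum, every `match` that forgot to handle it becomes a compile error instead of a runtime surprise.

```rust
enum JobState {
    Queued,
    Running { progress: u8 },
    Done,
    // Adding e.g. a `Failed(String)` variant here turns every match
    // below into a compile error until it is handled -- exactly the
    // kind of incremental edit the compiler keeps honest.
}

fn describe(state: &JobState) -> String {
    match state {
        JobState::Queued => "waiting".to_string(),
        JobState::Running { progress } => format!("{progress}% done"),
        JobState::Done => "finished".to_string(),
    }
}

fn main() {
    println!("{}", describe(&JobState::Running { progress: 40 }));
}
```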
|
| ▲ | HighGoldstein 4 days ago | parent | prev | next [-] |
| It seems cynically fitting that the future we're getting and deserve is one where we've automated the creation of memory bugs with AI. |
|
| ▲ | gitaarik 4 days ago | parent | prev [-] |
| You still want to be able to easily review the LLM-generated code. At least I want to. |