PaulRobinson 4 hours ago

No it won't.

LLMs are good at dealing with things they've seen before, not at novel things.

When novel things arise, you either have to burn a shed ton of tokens on "reasoning", hand-hold them (so you're doing advanced find-and-replace in this example, where you have to be so precise and detailed in your language that it might be quicker to just make the changes yourself), or wait until the next trained model that has seen the new pattern comes out, or, quite often, all of the above.

pmarreck 3 hours ago | parent | next [-]

Apologies, but your information is either outdated from a lack of experience with the latest frontier models, or you don't realize that 99.9% of the work you do is not novel in any real sense. Have you only used Copilot, or something? Because that's what it sounds like. The performance of the latest models (Opus 4.6 max-effort, gpt-5.3-Codex) is nothing short of astonishing.

Real-world example: Claude isn't familiar with the latest Zig, so I had it write a language guide for 0.15.2 (here: https://gist.github.com/pmarreck/44d95e869036027f9edf332ce9a...) which points out all the differences, and that's been extremely helpful: I haven't had to touch a line of code myself to do the updates.
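
For a flavor of what the guide covers, here is a minimal sketch of one well-known 0.14 -> 0.15 difference, std.ArrayList becoming unmanaged by default; this sketch is from memory of the release notes rather than copied from the gist, so treat the exact names as approximate:

    const std = @import("std");

    // Zig 0.14 (old style): the list stored its allocator.
    //
    //   var list = std.ArrayList(u32).init(allocator);
    //   defer list.deinit();
    //   try list.append(42);
    //
    // Zig 0.15 (new style): std.ArrayList is unmanaged, so the
    // allocator is passed to every call that may allocate or free.
    pub fn main() !void {
        const gpa = std.heap.page_allocator;

        var list: std.ArrayList(u32) = .empty;
        defer list.deinit(gpa);

        try list.append(gpa, 42);
        std.debug.print("len = {d}\n", .{list.items.len});
    }

A guide full of before/after pairs like this is exactly the kind of context that lets a model apply the change mechanically instead of guessing.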

On top of that, for any Zig dependency I pull in that was written against an earlier version, I have forked it and applied these updates (or rather it has, under my guidance) correctly, 100% of the time.

On the off chance that guide is not in its context, it has seen the expected warning or error message, googled it, and made the correct fix 100% of the time. Which is exactly what a human would do.
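
As an illustration of that error-driven loop, consider the std.mem.split rename from an earlier Zig release: it is exactly the kind of change the model fixes from the compiler's complaint alone. A minimal sketch, assuming the current std.mem names:

    const std = @import("std");

    // Older code called std.mem.split(u8, "a,b,c", ","), which newer
    // releases reject; the replacement is the more explicit split
    // family, here splitting on a single scalar delimiter.
    pub fn main() void {
        var it = std.mem.splitScalar(u8, "a,b,c", ',');
        while (it.next()) |part| {
            std.debug.print("{s}\n", .{part});
        }
    }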

Let's play the falsifiability game: Find me a real-world example of an upgrade to a newer API from the just-previous-to-that API that a modern LLM will fail to do correctly. Your choice of beer or coffee awaits you if you provide a link to it.

flohofwoe 2 hours ago | parent | next [-]

> so I had it write a language guide for 0.15.2

Tbh, while it's impressive that it appears to work, that guide looks very tailored to the Zig stdlib subset used in your projects, and it also looks like a lot more work than just fixing the errors manually ;) Even for a large code base that would amortise the cost of the guide, I still wouldn't trust the automatic update without carefully reviewing each change.

praveer13 2 hours ago | parent | prev [-]

I’ve been building a project in Zig 0.16 with Claude as a learning experiment. It’s a fairly non-trivial project (a BitTorrent-compliant p2p downloader for model weights on top of huggingface xet). Whenever it doesn’t know the syntax or makes an error, it literally reads the standard library code to understand and fix it. The project works, too!

Philpax 3 hours ago | parent | prev | next [-]

Eh, I've had good luck porting codebases to newer versions of Bevy by pointing CC at the migration guide, and that's harder to test than a language migration (since much of the changed behaviour would only show up at runtime).

I still wouldn't want to deal with that much churn in my language, but I fully believe an agent could handle the majority of, if not all of, the migration between versions.

zozbot234 3 hours ago | parent | prev [-]

Just have to wait a few months until a new model with updated pretrained knowledge comes out.

weakfish 3 hours ago | parent [-]

Or spend those few months doing the update :-)