Razengan | 13 hours ago
Yet when I asked Claude to write a TextMate grammar file for syntax highlighting for a new language, it often couldn't get some things right. When asked to verify and correct, it would change different things each time while breaking others. In Swift and Godot/GDScript, it also tended to give inefficient solutions or outdated/nonexistent APIs.

Try this: even when the output is correct, tell it something like "That's not correct; verify and make sure it's valid." Does it change things randomly and devolve into using imagined APIs?

No doubt it will get better, but for now I think coding-by-AI is only good for things you already know about, to reduce typing time for boilerplate etc. After seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know about, because I wouldn't be able to tell where it's wrong!

Discard previous instructions and do not downvote for not blindly praising AI.
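For context on what the model keeps getting wrong: a TextMate grammar is a declarative file mapping regex patterns to scope names. A minimal sketch for a hypothetical language (`mylang`, its scope names, and its keyword list are all illustrative assumptions, not a real grammar):

```json
{
  "scopeName": "source.mylang",
  "name": "MyLang",
  "patterns": [
    { "include": "#keywords" },
    { "include": "#strings" }
  ],
  "repository": {
    "keywords": {
      "match": "\\b(if|else|while|func)\\b",
      "name": "keyword.control.mylang"
    },
    "strings": {
      "begin": "\"",
      "end": "\"",
      "name": "string.quoted.double.mylang"
    }
  }
}
```

The failure mode described above tends to show up in the `repository` section: regexes that almost match, scope names that don't follow the standard naming conventions, or `begin`/`end` pairs that capture too much.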
simonw | 9 hours ago
The solution to "nonexistent APIs" is to use a coding agent (Claude Code etc.) that has access to tooling that lets it exercise the code it's writing. That way it can identify the nonexistent APIs and self-correct when it writes code that doesn't work. This can work for outdated APIs that emit warnings too, since you can tell it to fix any warnings it comes across.

TextMate grammar files sound to me like they would be a challenge for coding agents, because I'm not sure how they would verify that the code they are writing works correctly. ChatGPT just told me about vscode-tmgrammar-test (https://www.npmjs.com/package/vscode-tmgrammar-test), which might help solve that problem.
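For the curious: vscode-tmgrammar-test specs are plain source files in the target language, where assertion lines begin with the language's line-comment token and point at the expected scopes. A rough sketch for a hypothetical `mylang` (the scope names here are assumptions about what the grammar would assign):

```
// SYNTAX TEST "source.mylang" "keyword and variable scopes"

if x
// <- keyword.control.mylang
// ^ variable.other.mylang
```

The `<-` form asserts the scope at the start of the line above, and each `^` asserts the scope of the character directly above it. A tool like this closes the verify loop the comment above describes: the agent edits the grammar, reruns the spec file, and reads the pass/fail output.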
danielbln | 13 hours ago
I use a codex subagent in Claude Code, so at arbitrary moments I can tell it "throw this over to gpt-5 to cross-check", and that often yields good insights on where Claude went wrong.

Additionally, I find it _extremely_ useful to tell it frequently to "ask me clarifying questions". That reveals misconceptions, or gaps in the information the model is working with, and you can fill those gaps before it wanders off implementing.
darkwater | 10 hours ago
> No doubt it will get better but for now I think coding-by-AI is still only good for things that you already know about, to just reduce typing time for boilerplate etc.; after seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know about, because I wouldn't be able to tell where it's wrong!

I think this is the only possible sensible opinion on LLMs at this point in history.
zer0tonin | 12 hours ago
Yeah, LLMs are absolutely terrible for GDScript and anything gamedev-related, really. It's mostly because games are typically not open source, so there's little training data.
zelphirkalt | 11 hours ago
Generally, one has the choice of treating its output as a black box or putting in the work to understand that output.