Remix.run Logo
theshrike79 2 days ago

> - The code that is written by LLMs always needs to be heavily monitored for correctness, style, and design, and then typically edited down, often to at least half its original size

For this language matters a lot, if whatever you're using has robust tools for linting and style checks, it makes the LLMs job a lot easier. Give it a rule (or a forced hook) to always run tests and linters before claiming a job is done and it'll iterate until what it produces matches the rules.

But LLM code has a habit of being very verbose and covers every situation no matter how minuscule.

This is especially grating when you're doing a simple project for local use and it's bootstrapping something that's enterprise-ready :D

WorldMaker 2 days ago | parent [-]

If you force the LLM to solve every test failure this also can lead to the same breakdown models as very junior developers coding to the tests rather than the problems, I've seen all of:

1) I broke the tests, guess I should delete them.

2) I broke the tests, guess the code I wrote was wrong, guess I should delete all of that code I wrote.

3) I broke the tests, guess I should keep adding more code and scaffolding. Another abstraction layer might work? What if I just add skeleton code randomly, does this add random code whack-a-mole work?

That last one can be particularly "fun" because already verbose LLM code skyrockets into baroque million line PRs when left truly unsupervised, and that PR still won't build or pass tests.

There's no true understanding by an LLM. Forcing it to lint/build can be important/useful, but still not a cure-all, and leads to such fun even more degenerate cases than hand-holding it.