▲ | moregrist 3 days ago |
> One AI tool found this out and fully re-implemented the solver using a custom linear algebra library it wrote from scratch.

So slow, untested, and likely buggy, especially as the inputs become less well-conditioned? If this were a junior dev writing code, I'd ask why they didn't use <insert language-relevant LAPACK equivalent>. Neither LLM outcome seems very ideal to me, tbh.
▲ | theshrike79 3 days ago | parent [-] |
With mathematical things you can always write comprehensive unit tests to check the AI's work. TDD (and exhaustive unit tests in general) is a good idea with LLMs anyway.

Just either tell it not to touch the tests, or in Claude's case use Hooks to _actually_ prevent it from editing any test file (rough sketch below). Then shove it at the problem and it'll iterate on a solution until the tests pass.

It's like the Excel formula solver, but for code :D
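For the curious, here's a rough sketch of what such a hook could look like: a PreToolUse hook script that rejects any Edit/Write call aimed at a test file. The filename and the "test in path" heuristic are mine, and the exact settings.json wiring and payload fields may differ by Claude Code version, so check the hooks docs.

    #!/usr/bin/env python3
    # block_test_edits.py -- PreToolUse hook sketch that refuses edits to test files.
    # Register it in .claude/settings.json under hooks -> PreToolUse with a matcher
    # like "Edit|Write|MultiEdit" and a command entry pointing at this script.
    import json
    import sys

    payload = json.load(sys.stdin)  # Claude Code passes the pending tool call as JSON on stdin
    path = payload.get("tool_input", {}).get("file_path", "")

    # Naive check: anything with "test" in the path is treated as a test file.
    if "test" in path.lower():
        print(f"Blocked: {path} looks like a test file; fix the implementation instead.",
              file=sys.stderr)
        sys.exit(2)  # exit code 2 tells Claude Code to reject the tool call

    sys.exit(0)  # everything else goes through

It's a blunt filter, but that's the point: the model physically can't "make the tests pass" by rewriting the tests.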