l9o | 9 hours ago
yes, completely agree. having some sort of guardrails for the LLM is extremely important. with the earlier models I would sometimes write tests to check that my coding patterns were being followed correctly: basic things like certain files/subclasses being in the correct directories, or making sure certain dunder methods weren't being implemented in classes where I noticed models had a tendency to add them. these were all things the models would often get wrong and that would typically be more of a lint warning in a more polished codebase. while a bit annoying to set up, it vastly improved the speed and success rate at which the models could solve tasks for me. nowadays many of those checks don't seem to be as necessary. it's impressive to see how the models are evolving.
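
to give a concrete (made-up) example, one of those guardrail checks was basically a tiny pytest file along these lines; the directory layout, naming conventions, and forbidden dunders here are placeholders, not my actual project:

    import ast
    import pathlib

    # small guardrail tests, pytest-style; names and conventions are illustrative only
    REPO_ROOT = pathlib.Path(__file__).resolve().parent

    def test_handlers_live_in_handlers_dir():
        # convention: every *_handler.py module sits under a handlers/ directory
        for path in REPO_ROOT.rglob("*_handler.py"):
            assert "handlers" in path.parts, f"{path} is outside a handlers/ directory"

    def test_dto_classes_define_no_dunders():
        # convention: plain data classes under dto/ should not hand-roll
        # __eq__/__hash__/__repr__ (the kind of thing models liked to add unprompted)
        forbidden = {"__eq__", "__hash__", "__repr__"}
        for path in REPO_ROOT.rglob("dto/*.py"):
            tree = ast.parse(path.read_text())
            for cls in (n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)):
                methods = {n.name for n in cls.body if isinstance(n, ast.FunctionDef)}
                bad = methods & forbidden
                assert not bad, f"{cls.name} in {path} defines {sorted(bad)}"

nothing fancy, just pytest picking these up alongside the normal suite, so the model gets immediate feedback when it drifts from the conventions.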