jappgar | 3 months ago
I don't see why I or any other developer would abandon their homebrew agent implementation for a "standard" which isn't actually a standard yet. I also don't see any of that implementation as "boilerplate". Yes, there's a lot of similar code being written right now, but that's healthy co-evolution. If you have a look at the codebases for Langchain and other LLM toolkits, you'll realize it's a smarter bet to just roll your own for now.

You've definitely identified the main hurdle facing LLM integration right now, and it most definitely isn't a lack of standards. The issue is that the quality of raw LLM responses falls apart in pretty embarrassing ways. It's understood by now that better prompts can't solve these problems; you need other error-checking systems as part of your pipeline.

The AI companies are interested in solving these problems, but they're unable to. Probably because their business model works best if their system is just marginally better than their competitors'.
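To be concrete about the "error-checking systems" point: a rough sketch of the kind of validation loop I mean, where call_llm() is a hypothetical stand-in for whatever client you're actually using, not any particular SDK. Parse the output, check it against what you need, and re-prompt with the failure reason if it doesn't hold up:

    import json

    def call_llm(prompt: str) -> str:
        # Placeholder: swap in your actual model client here.
        raise NotImplementedError

    def get_structured_answer(prompt: str, required_keys: set, max_retries: int = 3) -> dict:
        """Ask for JSON and re-prompt until it parses and contains the keys we need."""
        last_error = ""
        for _ in range(max_retries):
            feedback = f"\n\nYour last reply was invalid: {last_error}" if last_error else ""
            raw = call_llm(prompt + feedback)
            try:
                data = json.loads(raw)
            except json.JSONDecodeError as e:
                last_error = f"not valid JSON ({e})"
                continue
            missing = required_keys - data.keys()
            if missing:
                last_error = f"missing keys: {sorted(missing)}"
                continue
            return data
        raise ValueError(f"model never produced usable output: {last_error}")

Nothing clever, and every shop ends up writing some variant of it, but the point is that the checking logic lives in your code, not in the prompt.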