f33d5173 | 2 hours ago
They don't need to be services. You can - and many projects do - structure your code as a set of loosely coupled modules. Each module has a responsibility or set of responsibilities, and modules communicate with each other via well-defined interfaces. To expose code like this to an LLM, you would have it make a change to one, or sometimes two, modules, with access to the interface docs of all the other modules. The disadvantage compared to microservices is that if a module crashes it takes the entire process down with it, and you can't move a module onto a different machine or create multiple instances of it as easily. The advantage is that communication happens via function calls, which are simpler and more efficient than RPC.

> I think this gestures at a more general point - we're still focusing on how to integrate LLMs into existing dev tooling paradigms.

This is what we should be doing, for a couple of reasons. For one thing, humans don't have an entire codebase "in context" at a time. We should recognize that the limitations of an AI mirror the limitations of a person, and hence can have similar solutions. For another, the limitations of today's LLMs will not be the limitations of tomorrow's LLMs. Redesigning our code to suit today's limitations will only cause us trouble down the road.
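A minimal sketch of the pattern described above, in Python. The module and function names (`InvoiceStore`, `send_reminder`, etc.) are invented for illustration; the point is that the second "module" depends only on a declared interface, so swapping implementations or handing one module to an LLM requires only the interface docs, and calls stay in-process.

```python
from typing import Protocol


class InvoiceStore(Protocol):
    """The public interface of a hypothetical 'billing' module.

    Other modules (and an LLM editing them) only need to see this,
    not the implementation behind it.
    """

    def total_owed(self, customer_id: str) -> int: ...


class InMemoryInvoiceStore:
    """Concrete implementation, private to the billing module."""

    def __init__(self) -> None:
        self._invoices: dict[str, list[int]] = {}

    def add_invoice(self, customer_id: str, amount: int) -> None:
        self._invoices.setdefault(customer_id, []).append(amount)

    def total_owed(self, customer_id: str) -> int:
        return sum(self._invoices.get(customer_id, []))


def send_reminder(store: InvoiceStore, customer_id: str) -> str:
    """A second module that depends only on the interface.

    The call below is a plain function call: no serialization,
    no network hop, unlike an RPC between microservices.
    """
    owed = store.total_owed(customer_id)
    return f"Customer {customer_id} owes {owed}" if owed else "Nothing owed"


store = InMemoryInvoiceStore()
store.add_invoice("c1", 40)
store.add_invoice("c1", 2)
print(send_reminder(store, "c1"))  # Customer c1 owes 42
```

The trade-off mentioned above shows up directly: a bug in `InMemoryInvoiceStore` crashes the whole process, but the call into it costs nothing compared to an RPC.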