csomar | 3 months ago
One of the biggest issues with LLMs is that they have lossy memory. Say there is a function from_json that accepts 4 arguments. An LLM might predict that it accepts 3 and thus produce non-functional code. But if you add the docs for the function to the prompt, the LLM will write correct code. If the LLM can tap into up-to-date context (e.g. via an LSP), you won't need that back-and-forth dance. This would massively improve code generation.
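A minimal Python sketch of the idea — from_json and its parameters here are hypothetical stand-ins, not a real library API — showing how tooling could fetch the ground-truth signature (the way an LSP or doc lookup would) and hand it to the model instead of letting it guess the arity:

```python
import inspect

# Hypothetical stand-in for the from_json example above: it really
# takes 4 parameters, though a model might guess 3.
def from_json(data, schema, strict=True, on_error=None):
    """Parse `data` against `schema` (illustrative stub)."""
    return {"data": data, "schema": schema,
            "strict": strict, "on_error": on_error}

# Instead of relying on the model's memory of the API, look up the
# real signature (as an LSP "hover"/signature-help request would)
# and inject it into the prompt as context.
sig = inspect.signature(from_json)
context_for_prompt = f"from_json{sig}"
print(context_for_prompt)      # the ground-truth signature
print(len(sig.parameters))     # 4 — not the 3 the model guessed
```

With that one line of context in the prompt, the model no longer has to recall the arity from training data; it can just read it.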