▲ _pdp_ 15 hours ago
Take some working code. Ask an LLM to fix bugs. Measure performance and test coverage. Feed the results back into the LLM. Repeat. This has been the standard approach for more complex LLM deployments in our shop for a while now. Using different models across iterations is something I've also found useful in my own experiments. It's like getting a fresh pair of eyes.
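The loop described above can be sketched roughly like this. Everything here is a hypothetical stand-in: `ask_model` would wrap a real LLM API call, `evaluate` would run your actual test suite and coverage tooling, and the model names and thresholds are made-up examples.

```python
import itertools

# Hypothetical model names, rotated each iteration for "fresh eyes".
MODELS = ["model-a", "model-b"]

def ask_model(model, code, feedback):
    """Stub: in practice, send the code plus the latest test and
    performance feedback to the given LLM and return its revision."""
    return code  # placeholder: returns the code unchanged

def evaluate(code):
    """Stub: in practice, run the test suite and coverage tooling
    against the revised code and collect the metrics."""
    return {"tests_passed": True, "coverage": 0.9}

def refine(code, iterations=4, target_coverage=0.9):
    """Iterate: ask a model to fix bugs, measure, feed results back."""
    models = itertools.cycle(MODELS)
    feedback = None
    for _ in range(iterations):
        code = ask_model(next(models), code, feedback)
        feedback = evaluate(code)
        if feedback["tests_passed"] and feedback["coverage"] >= target_coverage:
            break  # good enough: tests pass and coverage meets the bar
    return code, feedback
```

The stop condition and iteration cap are the interesting knobs: without them, the loop can churn indefinitely on code the models can no longer improve.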
▲ cyanydeez 15 hours ago | parent
Can we modify this approach to get LLMs that are good at specific programming languages or frameworks? That seems to be where local LLMs could really shine.