▲ layer8 3 days ago
One important reason people like writing code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. The same goes for the changes one makes to code. LLM prompting is the diametric opposite of that.
▲ youoy 2 days ago | parent | next [-]
It depends entirely on how you prompt the model. Nothing prevents you from telling it exactly what you want, down to specifying the files and lines to focus on. In my experience, anything less than that is a recipe for failure in sufficiently complex projects.
▲ yunwal 3 days ago | parent | prev [-]
You’re still allowed to reason about the generated output. If it’s not what you want, you can even reject it and write it yourself!