yunwal 2 days ago
You’re still allowed to reason about the generated output. If it’s not what you want, you can even reject it and write it yourself!
palmotea 2 days ago
>> One important reason people like to write code is that it has well-defined semantics, allowing to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.

> You’re still allowed to reason about the generated output. If it’s not what you want, you can even reject it and write it yourself!

You missed the key point. You can't predict an LLM's "outcome with high precision." Looking at the output and evaluating it after the fact (as you describe) is an entirely different thing.
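To make the distinction concrete, here is a minimal Python sketch (the `client.generate` call is a hypothetical stand-in for any real LLM API):

    # Code semantics: the outcome is fully determined before you run anything.
    def double(x: int) -> int:
        return 2 * x

    assert double(21) == 42  # predictable with certainty, just by reading the code

    # LLM prompting: the "outcome" is a sample from a distribution.
    # (`client.generate` is hypothetical; substitute whatever API you use.)
    # text = client.generate("Write a Python function that doubles a number.")
    # You can only inspect and evaluate `text` after the fact; nothing lets
    # you predict the exact output from the prompt alone.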