jeroenhd, 14 hours ago:
That's such an incredibly basic concept; surely AIs have evolved to the point where you don't need to explicitly state those requirements anywhere?
simonw, 12 hours ago:
They can still make mistakes. For example, what if your code (which the LLM hasn't reviewed yet) has a dumb feature where it dumps environment variables to log output, and the LLM runs "./server --log debug-issue-144.log" and then commits that log file as part of a larger piece of work you asked it to perform? If you don't want a bad thing to happen, adding a deterministic check that prevents the bad thing from happening is a better strategy than prompting models or hoping that they'll get "smarter" in the future.
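As one sketch of such a deterministic check (my own illustration, not something from the comment above): a git pre-commit hook that refuses any commit with a staged .log file, so a debug log full of environment variables can never land in history by accident. The hook location and the *.log pattern are assumptions for the example.

    #!/bin/sh
    # Save as .git/hooks/pre-commit and make it executable.
    # Blocks commits that stage any *.log file (illustrative pattern).
    staged_logs=$(git diff --cached --name-only | grep '\.log$')
    if [ -n "$staged_logs" ]; then
        echo "Refusing to commit log files:" >&2
        echo "$staged_logs" >&2
        exit 1
    fi

The same idea works with a .gitignore entry or a CI check; the point is that the rule is enforced mechanically rather than by asking the model nicely.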
| ||||||||
thunky, 12 hours ago:
That doesn't seem to work reliably for humans either. Some of this negativity, I think, comes from unrealistic expectations of perfection. Use the same guardrails you should already be using for human-generated code and you should be fine.