danjl 4 hours ago
Just saying "no" is unclear. LLMs are still very sensitive to prompts, so as a general rule I would recommend being more precise and assuming less. Of course, you also don't want to be too precise, especially about "how" to do something, since that tends to back the LLM into a corner and cause bad behavior. In my experience, focus on communicating intent clearly.
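A minimal sketch of the distinction the comment is drawing. Both prompts are invented for illustration, not taken from the thread; the point is that the first dictates the mechanism while the second states the goal and constraints and leaves the approach open.

```python
# Illustrative only: two ways to phrase the same request to an LLM.

# Over-specified "how" prompt: prescribes each step, boxing the model
# into one mechanism even when a better one exists.
how_prompt = (
    "Refactor this function by first extracting the parsing code into a "
    "helper, then renaming the loop variable, then inlining the temporary."
)

# Intent-focused prompt: states what outcome matters and which
# constraints must hold, leaving the approach to the model.
intent_prompt = (
    "Refactor this function so it is easier to test in isolation. "
    "Keep behavior identical and don't change the public signature."
)

print("how:", how_prompt)
print("intent:", intent_prompt)
```

The second prompt still rules things out ("keep behavior identical"), so being intent-focused is not the same as being vague.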