Gracana | 3 days ago
Example-based prompting is a good way to get specific behaviors. Write a system prompt that describes the behavior you want, write a round or two of assistant/user interaction, and then feed it all to the LLM. As far as the model is concerned, it has already produced output of the type you want, so when you give it your real prompt, it is very likely to continue producing the same sort of output.
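A minimal sketch of this pattern, assuming the OpenAI Python SDK (the model name, system prompt, and example exchange are illustrative placeholders, not from the comment):

    # Seed the context with one fabricated assistant/user round before the real prompt.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system", "content": "You are a terse code reviewer. Reply with one bullet per issue."},
        # Example round showing the desired output shape:
        {"role": "user", "content": "Review: def add(a, b): return a - b"},
        {"role": "assistant", "content": "- Bug: subtracts instead of adding; return a + b."},
        # The real prompt comes last; the model tends to continue in the same style:
        {"role": "user", "content": "Review: def mean(xs): return sum(xs) / len(xs)"},
    ]

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)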
gnulinux | 2 days ago
This is true, but I still avoid using examples. Any example biases the output to an unacceptable degree, even in the best LLMs like Gemini Pro 2.5 or Claude Opus. If I write "try to do X, for example you can do A, B, or C", the LLM will do A, B, or C the great majority of the time (let's say 75% of the time). This severely reduces the creativity of the LLM. For programming, this is a big problem: if you write "use Python's native types like dict, list, or tuple, etc.", there will be an unreasonable bias towards these three types as opposed to, e.g., set, which will make some code objectively worse.
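The kind of bias described here can be checked empirically; a hypothetical sketch, again assuming the OpenAI Python SDK (the prompts, model name, sample size, and "set(" counting heuristic are all illustrative assumptions):

    # Compare how often "set" appears in completions when the prompt names
    # dict/list/tuple as examples vs. when it leaves the choice open.
    from openai import OpenAI

    client = OpenAI()

    TASK = "Write a Python function that returns the unique words shared by two documents."
    PROMPTS = {
        "with_examples": TASK + " Use Python's native types like dict, list, or tuple.",
        "without_examples": TASK + " Use whichever native Python types fit best.",
    }

    for name, prompt in PROMPTS.items():
        hits = 0
        for _ in range(20):  # small sample, just to see the tendency
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,
            )
            if "set(" in resp.choices[0].message.content:
                hits += 1
        print(f"{name}: set used in {hits}/20 completions")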
XenophileJKO | 3 days ago
I almost never use examples in my professional LLM prompting work. The reason is that they bias the outputs way too much. So for anything where you want a spectrum of outputs, like conversational responses or content generation, I avoid them entirely. I may give the model patterns, but not specific examples.
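One way to read the pattern-vs-example distinction, as a hypothetical pair of system prompts (the wording is illustrative, not from the comment):

    # Pattern: describes the shape of the output without pinning down its content.
    PATTERN_PROMPT = (
        "Reply in 2-3 sentences. Acknowledge the user's point first, "
        "then offer exactly one concrete next step."
    )

    # Specific example: shows a literal response the model will tend to imitate,
    # phrasing and all.
    EXAMPLE_PROMPT = (
        "Reply like this: 'Great question! I'd suggest checking the logs first "
        "and then restarting the service.'"
    )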
| ||||||||
lottin | 3 days ago
Seems like a lot of work, though. |