ryao | 3 days ago
It sounds like you are used to short conversations with few turns. In conversations with dozens, hundreds, or thousands of turns, prompting to keep bad output from entering the context in the first place generally works better than prompting to correct it after the fact. This follows from how in-context learning works: the LLM tends to regurgitate whatever is already in its context.

That said, every LLM has its quirks. For example, Gemini 1.5 Pro and related models have one where if you tolerate a single ellipsis in the output, the output will progressively gain ellipses until every few words is followed by one, and responses to prompts asking it to stop outputting ellipses include ellipses anyway. :/
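To make the "keep bad output out of the context" idea concrete, here is a minimal Python sketch. It assumes a generic chat-completions-style message history; call_model is a hypothetical stand-in for whatever API you actually use. The key point is that the assistant's reply is scrubbed before being appended to the history, so a quirk like the ellipses never gets a foothold for in-context learning to amplify:

    import re

    def scrub(text: str) -> str:
        # Collapse runs of "..." and the Unicode ellipsis into a period
        # so the quirk never enters the conversation history.
        return re.sub(r"(\.{3,}|\u2026)", ".", text)

    def call_model(history: list[dict]) -> str:
        # Hypothetical stand-in for a real chat API call; returns the
        # assistant's raw reply for the given message history.
        raise NotImplementedError

    history = [{"role": "system",
                "content": "Never use ellipses. Write complete sentences."}]

    def turn(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        # Sanitize BEFORE the reply enters the context, not after.
        reply = scrub(call_model(history))
        history.append({"role": "assistant", "content": reply})
        return reply

Correcting after the fact would mean appending the raw reply and then adding another user turn asking the model to stop, which leaves the bad pattern sitting in the context for every subsequent turn to imitate.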