TeMPOraL | 4 days ago
RE 2., I doubt it'll help - for at least two reasons, already mentioned by 'duskwuff and 'daveguy. RE 1., definitely worth trying, and there are more variants of such tricks specific to particular models. I'm out of date on OpenAI docs, but with Anthropic models, the docs suggest using XML notation to label and categorize the most important parts of the input. This kind of soft structure seems to improve the results coming from Claude models; I imagine they specifically trained the model to recognize it. See: https://docs.anthropic.com/en/docs/build-with-claude/prompt-... In the author's case, for Anthropic models, the final prompt could look like this:
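A minimal sketch of such an XML-tagged prompt, assembled in Python. The tag names (<instructions>, <document>, <question>) are my own illustrative choices in the style the Anthropic docs suggest, not a fixed schema:

```python
# Sketch of Anthropic-style XML-tag prompting: wrap each logically
# distinct part of the input in a labeled tag so the model can tell
# instructions apart from data. Tag names here are illustrative.
document_text = "...the input text the model should work on..."
user_question = "...the actual question about that text..."

prompt = f"""<instructions>
Answer the question using only the document below.
</instructions>

<document>
{document_text}
</document>

<question>
{user_question}
</question>"""

print(prompt)
```

The point is simply that the boundaries between instruction and data are made explicit, so a long or messy document is less likely to bleed into the instructions.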
This kind of prompting is supposed to provide a noticeable improvement for Anthropic models. Ironically, I only discovered it a few weeks ago, despite having used Claude 3.5 Sonnet extensively for months. Which goes to show, RTFM is still a useful skill. Maybe OpenAI models have similar affordances too, simple but somehow unnoticed? (I'll re-check the docs myself later.)