furyofantares 4 days ago

LLMs are fundamentally text-completion. The chat-based tuning that goes on top is impressive, but they remain fundamentally text-completion models; that's where most of the training energy goes. I keep this in mind with a lot of my prompting and get good results.

Regurgitating and Examples are both ways to lean into that and try to recover whatever has been lost in the chat-based tuning. A rough sketch of the idea below.
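For what it's worth, here's a minimal sketch of what "leaning into completion" can look like, assuming the OpenAI Python SDK's legacy completions endpoint (the model name, task, and review text are placeholders, not anything from this thread): rather than instructing a chat model, you lay out few-shot Examples so the answer you want is simply the most natural continuation of the text.

```python
# Hedged sketch: uses the OpenAI Python SDK's legacy completions endpoint.
# The model name and the example reviews are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Frame the task as text to be continued, not as a chat instruction:
# a few-shot pattern whose most likely continuation is the desired answer.
prompt = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: The battery died within a week.\n"
    "Sentiment: negative\n\n"
    "Review: Setup took two minutes and it just works.\n"
    "Sentiment: positive\n\n"
    "Review: The screen is gorgeous but the speakers are tinny.\n"
    "Sentiment:"
)

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # a completion-tuned model, not a chat model
    prompt=prompt,
    max_tokens=3,
    temperature=0,
)
print(response.choices[0].text.strip())
```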

zi_ 4 days ago | parent

What else do you think about when prompting that you've found useful?