cadamsdotcom 2 hours ago

This is a beautiful example of a little prompt engineering going a long way.

I asked Gemini and it got it wrong. Then, in a fresh chat, I asked again, but this time told it to use symbolic reasoning to decide.

And it got it!

The same applies to asking models to solve problems by scripting or writing code. Models won't use techniques they know about unprompted, even when doing so would produce far better outcomes. Current models don't realise when these methods are appropriate; you still have to guide them.
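A minimal sketch of the kind of nudge described above. The wrapper function, its wording, and the sample question are all illustrative (not any specific Gemini API or the commenter's actual prompt); the point is simply that naming the technique in the prompt is what changes the model's behaviour.

```python
def with_symbolic_hint(question: str) -> str:
    """Prepend an instruction that names the technique we want the model
    to use. The exact wording is a placeholder; the comment above says
    any phrasing that explicitly invokes symbolic reasoning (or
    "write and run code") has the same effect.
    """
    return (
        "Solve the following by setting the problem up symbolically: "
        "define variables, write the equations, and solve them before "
        "stating a final answer.\n\n" + question
    )

# Hypothetical example question, chosen only for illustration.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
prompt = with_symbolic_hint(question)
print(prompt)
```

The augmented `prompt` string would then be sent to the model in place of the bare question; the model itself is unchanged, only the instruction differs.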

felix089 2 hours ago | parent

Interesting, which Gemini model? And how did you ask for symbolic reasoning, just by adding it to the prompt?