Decabytes 2 hours ago
Gemini has always felt book smart to me. It knows a lot of things, but if you ask it to do anything off-script, it completely falls apart.
dwringer an hour ago
I strongly suspect a major component of this type of experience is that people develop a way of talking to a particular LLM that is efficient and works well for them, but is in many respects non-transferable to rival models. For instance, in my experience, OpenAI models are remarkably worse than Google models by basically any criterion I can imagine; however, I've spent most of my time using the Google ones, and it was only during that time that the differences became apparent and, gradually, much more pronounced. I would not be surprised at all to learn that people who primarily used Anthropic or OpenAI models during that time had an exactly analogous experience that convinced them their model was the best.
esafak 2 hours ago
I'd rather say it has a mind of its own; it does things its own way. But I have not tested this model, so they may have improved its instruction following.