▲ | kazinator 7 days ago |
Why doesn't Flash get it right, instead coming up with plausible-sounding nonsense? That means it was trained on some texts in the area. What would make 2.5 Pro (or anything else) categorically better is if it could say "I don't know". There will be things that Claude 3.7 or Gemini Pro don't know, and the interpolations they come up with will not make sense.
▲ | simianwords 7 days ago | parent |
Model accuracy goes up as you move to heavier models. Higher accuracy is always preferable, and the jump from Flash to Pro is considerable. You have to rely on the model in your own head to verify the answers it gives. On hallucination: it's a problem, but again, it decreases as you use heavier models.