awongh | 3 days ago
From an outsider's perspective, these kinds of insights make me wonder: is it just a coincidence that a lot of the recent innovations in the space look like common sense in hindsight?

- If we train the model to "think" through the answer, we get better results.
- If we train the model to say "I don't know" when it's not sure, we get fewer hallucinations.

Is it just confirmation bias, or do these common-sense approaches work on LLMs in other ways?