▲ | codeflo 5 days ago |
That's because the architecture isn't built for it to know what it knows. As someone put it, LLMs always hallucinate, but for in-distribution data they mostly hallucinate correctly.
▲ | bluefirebrand 5 days ago | parent | next [-] |
My vibe is that it mostly hallucinates incorrectly.

I really do wonder what the difference is. Am I using it wrong? Am I just unlucky? Do other people just have lower standards? I really don't know. I'm getting very frustrated, though, because I feel like I'm missing something.
| ||||||||