andy81 9 hours ago
We've all tried asking an LLM about something outside of its training data by now. In that situation, it gives the (wrong) answer that sounds the most plausible.
PacificSpecific 9 hours ago
That's definitely been my experience. I work with a lot of weird code bases that have never been public-facing, and AI gives horrible responses for that stuff. As soon as I tried to make a TodoMVC it started working great, but I wonder how much value that really brings to the table. It's great for me, though: I can finally make a TodoMVC tailored to my specific needs.
visarga 3 hours ago
> In that situation, it gives the (wrong) answer that sounds the most plausible.

Not if you use web search or deep research. You shouldn't use LLMs as knowledge bases. They are language models: they learn language, not information, and they are models, not replicas of the training set.
NoMoreNicksLeft 8 hours ago
Once or twice, for me, it has deflected rather than answer at all. On the other hand, they've also surfaced information (which I later confirmed independently) that I had not been able to find for years. I don't know what to make of it.