sefrost | 4 days ago
I like using LLMs and I have found them incredibly useful for writing and reviewing code at work. However, when I ask for sources, the links they give often don't fully (or at all) back up the claims made. Sometimes other websites do, but the sources the LLM cites frequently don't. They might be about the same topic I'm discussing, but they don't always validate the claims. If they could crack that problem it would be a major, major win for me.
joegibbs | 4 days ago | parent
It would be difficult to do with a raw model, but a two-step method in a chat interface would work: first the model suggests the URLs, then a tool call fetches them and returns the actual text of the pages, and the final response is based on that text rather than on the model's memory of the pages.
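A minimal sketch of the fetch-and-verify step, using only the Python standard library. The function names and the crude keyword-overlap check are my own assumptions; a real system would pass the fetched text back to the model and ask it to verify the claim rather than rely on lexical matching.

```python
import urllib.request
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping script/style."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)


def fetch_page_text(url, timeout=10):
    """Tool-call step: fetch a model-suggested URL, return its visible text."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(" ".join(parser.parts).split())


def claim_supported(claim, page_text, threshold=0.6):
    """Crude lexical check: does the page contain most of the claim's
    content words? A stand-in for asking the model to verify."""
    words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    if not words:
        return False
    lowered = page_text.lower()
    hits = sum(1 for w in words if w in lowered)
    return hits / len(words) >= threshold
```

The chat loop would call `fetch_page_text` on each URL the model proposes, drop any that fail to load or fail the check, and feed the surviving page text back into the context before the model writes its cited answer.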