Rygian 3 hours ago

How does the LLM know which sources can be trusted?

simianwords 3 hours ago | parent [-]

yeah it can avoid blogspam as a source and prioritise research from more prestigious journals or papers with more citations. it will be smart enough to use some such proxy.
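
to make "some proxy" concrete, here's a toy sketch of the kind of filter I mean: keep a search result only if it comes from an allowlisted domain or has enough citations. the domains and the threshold below are made up for illustration, not anything I'm claiming ChatGPT actually uses:

    # Toy source-quality proxy: accept a result if its domain is on an
    # allowlist or its citation count clears a bar. Values are illustrative.
    TRUSTED_DOMAINS = {"nejm.org", "thelancet.com", "nature.com", "cochranelibrary.com"}
    MIN_CITATIONS = 50

    def is_trusted(result: dict) -> bool:
        return (result["domain"] in TRUSTED_DOMAINS
                or result.get("citations", 0) >= MIN_CITATIONS)

    results = [
        {"domain": "nejm.org", "citations": 312},
        {"domain": "random-health-blog.net", "citations": 0},
    ]
    print([r["domain"] for r in results if is_trusted(r)])  # -> ['nejm.org']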

palmotea 2 hours ago | parent | next [-]

You can also tell it to just not hallucinate, right? Problem solved.

I think what you'll end up with is a response that still relies on whatever random sources it likes, but attributes them to the "trusted sources" you asked for.

simianwords 2 hours ago | parent [-]

you have an outdated view on how much it hallucinates.

UqWBcuFx6NV4r an hour ago | parent | next [-]

I am not anti-LLM by almost any stretch, but your lack of fundamental understanding, coupled with a willingness to assert BS, is at the point where it's impossible to discuss anything.

You started off by asking a question, and people are responding. Please, instead of assuming that everyone else is missing something, perhaps consider that you are.

simianwords an hour ago | parent [-]

You've misunderstood my position, and you're resorting to slander.

Here's what I mean: LLMs can absolutely be directed to search only for trustworthy sources. You can do this yourself: ask ChatGPT a question and tell it to use sources from trustworthy journals. Come up with your own rubric, maybe. It will comply.
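
The same steering works through the API, not just the chat UI. Here is a minimal sketch using the OpenAI Python SDK, where the "rubric" is just a system prompt; the model name and the rubric wording are illustrative assumptions, not a claim about how ChatGPT's search works internally:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The rubric: a system prompt constraining which sources the model
    # may cite. Wording here is an illustrative assumption.
    rubric = (
        "Cite only peer-reviewed journals or major medical bodies "
        "(e.g. NEJM, The Lancet, Cochrane, WHO). If no such source "
        "supports a claim, say so rather than citing a blog."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whatever you have access to
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": "What does the evidence say about statins for primary prevention?"},
        ],
    )
    print(resp.choices[0].message.content)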

Now, do you disagree that ChatGPT can do this much? If you do, it’s almost trivially disprovable.

One of the posters said that hallucination is a problem, but if you've used ChatGPT for search, you would know that it's not. Its answers are grounded in the search results anyway, and in the worst case the physician is going to read the sources themselves. So what's hallucination got to do with it?

The poster also asked whether you can just tell it to not hallucinate. Obviously that won't work! But that was never my implication. I simply said you can ask it to use higher-quality sources.

Since you've said I'm asserting BS, I'm asking you politely to show me exactly which part of what I said constitutes BS, given the context I have provided.

palmotea 2 hours ago | parent | prev [-]

The point was: will telling it to not hallucinate make it stop hallucinating?

simianwords an hour ago | parent [-]

No, but did I suggest this? I only suggested that you can ask ChatGPT to rely on higher-quality sources. ChatGPT has a trade-off to make when performing a search: it can rely on lower-quality sources to answer a question, at the risk of those sources being wrong.

Please read what I have written carefully instead of assuming the most absurd interpretation.
