rvnx 8 hours ago

LLMs have their issues too.

In everyday life, you cannot read 20 books on every topic you are curious about, but you can ask 5 subject-matter experts ("the LLMs") in 20 seconds,

some of which will go check a few news websites (most of which are also biased).

Then you can ask for summaries of pros and cons, and form your own opinion.

Are they hallucinating? Could be. Are they lying? Could be. Have they been trained to say what their masters told them to say? Could be.

But multiplying the number of LLMs reduces the risk.

For example, if you ask DeepSeek, Gemini, Grok, Claude, GLM-4.7, or some models that have no guardrails what they think about XXX, there may be interesting insights.
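The cross-checking idea above amounts to a majority vote across independent models. Here is a minimal sketch of that aggregation step; the model names and replies are invented for illustration, not real API output:

```python
from collections import Counter

def consensus(answers: dict[str, str]) -> tuple[str, float]:
    """Return the most common answer and the fraction of models agreeing."""
    counts = Counter(answers.values())
    top, n = counts.most_common(1)[0]
    return top, n / len(answers)

# Hypothetical replies from several models to the same factual question.
replies = {
    "model_a": "1969",
    "model_b": "1969",
    "model_c": "1968",
    "model_d": "1969",
}

answer, agreement = consensus(replies)
# answer == "1969", agreement == 0.75
```

The agreement fraction is only meaningful if the models are actually independent; if they were trained on the same data, they can all be confidently wrong together.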

jamespo 7 hours ago | parent [-]

This may shock you, but Wikipedia provides multiple sources; it even links to them. Where do you think the LLMs get their data from?

dfxm12 6 hours ago | parent [-]

To further this, articles also have an edit history and a talk page. Even if one disagrees with the consensus-building or suspects foul play and is really trying to get to the bottom of something, all the information is there on Wikipedia!

If one just wants a friendly black box to tell them something they want to hear, AI is known to do that.