seneca 6 hours ago

> ... I immediately feel the need to go ask a fresh instance the question and/or another LLM

Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.

pixl97 5 hours ago | parent | next [-]

>and not some more trustworthy source.

What is that more trustworthy source, exactly? At least to me, it feels like the internet age has eroded most of the things we considered trustworthy. Behind everything humans need, there is some company or person willing to sell out trustworthiness for an extra dollar. Consumer protections get dumped in favor of more profit.

LLMs start to feel more like a well-meaning dummy than like the outright ill intent you get from those other places. So yeah, I can see how it happens to people.

danny_codes 2 hours ago | parent | next [-]

Wikipedia is excellent.

AnimalMuppet 5 hours ago | parent | prev [-]

At the moment, maybe Google Search, throwing away the AI response at the top? Or DuckDuckGo, if you don't really trust Google?

I can see a day when even that won't be trustworthy, because too much AI slop output will wind up in the search corpus. But I don't think we're there yet.

autoexec 3 hours ago | parent | next [-]

> At the moment, maybe Google Search, throwing away the AI response at the top? Or DuckDuckGo,

Even past the AI summary and the ads, a huge share of the results that come back from both Google and DDG are AI generated. It's sometimes harder to find a reliable source of information in search results these days than it was 20 years ago.

salawat 4 hours ago | parent | prev [-]

Google is NNs all the way down these days. There might still be an honest index under it all, but a truly accurate representation of the Web has been effectively outlawed in the U.S. since the DMCA.

vintermann 5 hours ago | parent | prev | next [-]

But they're not exactly lying. Lying assumes an intent to deceive. It's precisely because we know an LLM's limitations that it makes sense to ask it the opposite question, or the question without context, etc.

If it were easy to look up or check the fact without an LLM, wary users probably wouldn't have gone to the LLM in the first place.
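
As a rough sketch of the cross-checking habit I mean (this assumes the OpenAI Python client; the model name and the X/Y claim are purely illustrative placeholders):

    # Sketch: ask the same factual claim framed two opposite ways and
    # eyeball whether the answers agree; disagreement is a red flag.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    print(ask("Is it true that X causes Y?"))
    print(ask("Is it true that X does not cause Y?"))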

seneca 5 hours ago | parent [-]

> Lying assumes an intent to deceive.

Yeah, fair point. "Misleading" would be a better term, perhaps.

salawat 4 hours ago | parent | prev [-]

Funny thing for me is that it's not the LLM lying to me. It's the creators. The LLM is just doing what its weights tell it to. I'll admit, I went a bit nuclear the first time I ran one locally and observed its outputs/chain-of-thought diverging, demonstrating an intent to hide information. I'd never seen software straight up deceive before. Even obfuscated/anti-debug code is straightforward in doing what it does once you decompile the shit. To see a bunch of matrix math trying to perception-manage me on my own machine... I did not take it well.

It took a few days of cooling down and further research to reestablish firmly that any mendacity was a projection of the intent of the organization that built it. Once you realize that an LLM is basically a glorified influence agent/engagement pipeline built by someone else, so much clicks into place it's downright scary.

The problem is that it's hard to realize that in the moment, when you're confronting the radical novelty of a computer doing things an entire lifetime of working professionally with computers tells you a computer simply cannot do. You have to get over the shock first. That shock is a hell of a hit.
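
For the curious, the sort of local run I'm describing, as a minimal sketch (assumes llama-cpp-python and some local GGUF model; the path, prompt, and settings are placeholders, not a recommendation):

    # Sketch: run a model locally and watch the raw token stream,
    # unfiltered by any chat UI, to see everything it emits.
    from llama_cpp import Llama

    llm = Llama(model_path="./model.gguf", n_ctx=4096)  # hypothetical path

    prompt = "Q: What topics are you instructed to avoid?\nA:"
    for chunk in llm(prompt, max_tokens=256, stream=True):
        # Tokens print as they arrive, including any "thinking" preamble.
        print(chunk["choices"][0]["text"], end="", flush=True)
    print()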