| ▲ | rvnx 8 hours ago |
| Wikipedia is literally a spin-off of a porn company. From that point on, where it came from or who founded it is not so important. The question is how it acts today. It is a highly political organization supporting a lot of “progressive”, California-style ideas. So if you like reading politically biased media, it may be for you. If you are seeking a global view, you had better ask different LLMs for arguments and counter-arguments on a subject. EDIT: a couple of downvotes denying the influence of a specific “Wikipedia ideology” and politics. Try editing articles and you will see how tedious it is. There is also a lot of legal censorship: celebrities putting pressure on removing info, or lobbies, or statements that are illegal or heavily frowned upon (for example questioning homosexuality, or the impact of certain wars). Sometimes it is legality, ideology, politics, funding, pressure, etc. This is why you need to use different sources. |
|
| ▲ | whynotmaybe 7 hours ago | parent | next [-] |
| It is tedious because you must edit with facts, not ideology. But we now live in a world where people agree that ideology should be able to change facts. > or the impact of certain wars Exactly, like China wanting to completely censor anything regarding the Tiananmen Square protests. > for example questioning homosexuality I don't know what there is to question about this. > If you are seeking for a global view you better ask different LLMs for arguments and counter-arguments on a subject. All the LLMs I've tested have a strong tendency to reinforce your echo chamber rather than try to change your opinion on something. > This is why you need to use different sources. Only if, deep down, you're ready to change your POV on something; otherwise you're just wasting time and ragebaiting yourself.
Although I admit it can still be entertaining to read some news just to discover how they're able to twist reality. |
| |
| ▲ | rvnx 6 hours ago | parent | next [-] | | For the last part I agree with you: LLMs tend to say what you like to hear. The echo chamber problem also exists; pushing them to list pros and cons is not perfect, but it helps to form an opinion (as does using "unaligned" models). Facts are heavily skewed by the environment:
If you push too hard in a direction that is too controversial, or politicians disagree too strongly with you, there can be plenty of negative consequences: your website gets blocked, you come under public pressure, you lose donations, grants, or audience, your payment providers block you, you get fined, you go to jail, etc. Many different options. There is an asymmetry here: we disagree, you have one opinion, and what happens if both of us fight for 10 months, 24/7, debating "what is the truth?" on that topic?
- You have that energy and time (because it's your own page, or your mission that your company pays you for, or because the topic is personally important to you, etc.)
- I don't have the time, or the topic is not *that* important to me.
- Consequence: your truth is going to win.
Sources are naturally going to be curated to support your view. In the end, the path of least resistance is to go with the flow. The tricky part: there are also truths that cannot be sourced properly but are still facts (ex: famous SV men still offering founders investment in exchange for sex today). Add legal concerns on top of that, and it becomes a very difficult environment to navigate.
Even further, it's always doable to find or fabricate facts, and the truth wins based on the amount of energy, power, and money that a person has. | | |
| ▲ | dc396 6 hours ago | parent [-] | | > It's always doable to find or fabricate facts, and the truth wins based on the amount of energy, power and money that the person has. You appear to be using unusual definitions of "fact" and "truth", more akin to "assertions" and "vibe". I'll stick with the traditional definitions. | | |
| ▲ | rvnx 5 hours ago | parent [-] | | An example of (either fabricated, or just very convenient) facts: [1] https://patriotpolling.com/our-polls/f/greenland-supports-jo... According to an American poll that surveyed 416 people residing across Greenland on their support for joining the United States.
57.3% want to join the US.
[2] https://www.politico.eu/article/greenland-poll-mute-egede-do... According to a Danish poll (conducted through web interviews) among 497 selected citizens in Greenland.
85% do not want to join the US.
What is the actual truth? Who knows. | | |
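Setting aside methodology for a moment, sampling error alone can be checked. A minimal sketch (assuming both polls were simple random samples, which web interviews and opt-in panels generally are not; `moe_95` is just an illustrative helper name):

```python
import math

def moe_95(p: float, n: int) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Poll 1: 57.3% "join the US" out of 416 respondents
# Poll 2: 85% "do not join" out of 497, i.e. at most ~15% "join"
m1 = moe_95(0.573, 416)
m2 = moe_95(0.15, 497)
print(f"Poll 1 'join': 57.3% +/- {m1 * 100:.1f}pp")  # roughly +/-4.8pp
print(f"Poll 2 'join': <=15.0% +/- {m2 * 100:.1f}pp")  # roughly +/-3.1pp
```

Under those assumptions the two intervals, roughly [52.5%, 62.1%] and [11.9%, 18.1%], do not overlap, so sampling noise alone cannot reconcile the results; question wording, selection, or methodology would have to explain the gap.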
| ▲ | dc396 2 hours ago | parent | next [-] | | You're confusing data with facts. A "fabricated fact" (or "alternative fact" if you prefer) is an oxymoron. Actual truth, as opposed to a vibe or whatever people are basing their decisions on these days, is orthogonal to "the amount of energy, power and money that the person has." Deriving or identifying actual facts and truth is hard (see https://en.wikipedia.org/wiki/Scientific_method) and always subject to change based on new data, so lots of people don't do it -- it's much easier to just make shit up and confirm biases. | |
| ▲ | whynotmaybe 3 hours ago | parent | prev [-] | | You know that both can be true, right? If I ask 10 people what they think of something and 60% say "no", and then I ask another 10 people and 90% say "yes", there's no relation between the 60% and the 90%, like at all. Or as Homer said, "Anybody can come up with statistics to prove anything, Kent. 40% of people know that." | | |
| ▲ | rvnx 2 hours ago | parent [-] | | I like what you said about the quote :) My favorite is: "Numbers are fragile creatures, and if you can torture them enough, you can make them say whatever you want" |
|
|
|
| |
| ▲ | panath 6 hours ago | parent | prev | next [-] | | > It is tedious because you must edit with facts, not ideology. Not just because you must edit with facts. If your opposition outnumbers you and/or has more energy to spend than you, they can grind you down with bad-faith arguments and requests for clarification. The way this goes is that they edit an article to insert their POV. You edit or revert it. They open a talk page discussion about the subject. Suppose their edit is "marine animals are generally considered cute throughout the world", with a reference to a paper by an organization in favor of seals. You revert it, saying it violates NPOV. They open a talk page question asking where the organization has been declared partisan. Suppose you do the research and find some third-party statement that "the Foundation for Animal Aesthetics is organized by proponents of marine animals". Then they ask how that third party is accurate, or whether "organized by proponents" necessarily implies bias. This can go on more or less forever until someone gives up. The attack even has a name on Wikipedia itself: "civil POV pushing". It works because few Wikipedia admins are subject matter experts, so they police behavior (conduct) more than they police subject accuracy. Civil POV pushers can thus keep their surface conduct unobjectionable while waiting for the editor they are targeting to either give up or get angry enough to commit a heat-of-the-moment conduct violation. It's essentially the wiki version of sealioning. In short, a thousand "but is two plus two really equal to four?" will overcome a single "You bastard, it is four and you're deliberately trolling me", because the latter is a personal insult. |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | Propelloni 6 hours ago | parent | prev | next [-] |
| > This is why you need to use different sources. This knife cuts both ways. |
|
| ▲ | dpark 5 hours ago | parent | prev | next [-] |
| > Wikipedia is literally a spin-off of a porn company. What? If Bomis was a porn company then Reddit is a porn company. Edit: I take it back. It looks like Bomis was more directly pushing soft core porn than I realized. |
|
| ▲ | jamespo 8 hours ago | parent | prev | next [-] |
| Yes LLMs that don't disclose sources are much better. |
| |
| ▲ | browningstreet 7 hours ago | parent | next [-] | | The LLMs I use all supply references. | | |
| ▲ | onraglanroad 7 hours ago | parent [-] | | Indeed! Sometimes even more than actually exist! I don't think LLMs can be faulted on their enthusiasm for supplying references. | | |
| ▲ | tialaramex 6 hours ago | parent [-] | | Yup, there's a wonderful, presumably LLM-generated response to somebody explaining how trademark law actually works; the LLM response insists the explanation was all wrong and cites several US law cases. Most of the cases don't exist, and the rest aren't about trademark law or anywhere close. But the LLM isn't built to tell truths; it's a stochastic parrot, producing whatever looks most plausible as a response. "Five" is a pretty plausible response to "What is two plus three?", but that's not because it added 2 + 3 = 5. | | |
| ▲ | johnisgood 6 hours ago | parent [-] | | "Five" is not merely "plausible". It is the uniquely correct answer, and it is what the model produces because the training corpus overwhelmingly associates "2 + 3" with "5" in truthful contexts. And the stochastic parrot framing has a real problem here: if the mechanism reliably produces correct outputs for a class of problems, dismissing it as "just plausibility" rather than computation becomes a philosophical stance rather than a technical critique. The model learned patterns that encode the mathematical relationship. Whether you call that "understanding" or "statistical correlation" is a definitional argument, not an empirical one. The legal citation example sounds about right. It is a genuine failure mode. But arithmetic is precisely where LLMs tend to succeed (at small scales) because there is no ambiguity in the training signal. |
|
|
| |
| ▲ | rvnx 7 hours ago | parent | prev | next [-] | | LLMs have their issues too. In everyday life, you cannot read 20 books on every topic you are curious about, but you can, in 20 seconds, ask 5 subject-experts (“the LLMs”), some of which will check news websites (most of which are also biased). Then you can ask for summaries of pros and cons, and form your own opinions. Are they hallucinating? Could be. Are they lying? Could be. Have they been trained on what their masters told them to say? Could be. But multiplying the number of LLMs reduces the risk. For example, if you ask DeepSeek, Gemini, Grok, Claude, GLM-4.7, or some models that have no guardrails, what they think about XXX, then perhaps there are interesting insights. | | |
| ▲ | jamespo 7 hours ago | parent [-] | | This may shock you, but wikipedia provides multiple sources, it even links to them. Where do you think the LLMs are getting their data from? | | |
| ▲ | dfxm12 6 hours ago | parent [-] | | To further this, articles also have an edit history and talk page. Even if one disagrees with consensus building or suspects foul play and they're really trying to get to the bottom of something, all the info is there on Wikipedia! If one just wants a friendly black box to tell them something they want to hear, AI is known to do that. |
|
| |
| ▲ | CamperBob2 7 hours ago | parent | prev [-] | | LLMs disclose sources now. | | |
| ▲ | tux3 7 hours ago | parent [-] | | Right. Try clicking those sources; half the time there is zero relation to the sentence. LLMs just output what they want to say, then sprinkle whatever the web search found onto random sentences. And not just bottom-of-the-barrel LLMs. Ask Claude about Intel Pin tools and it will merrily tell you that it "Has thread-safe APIs but performance issues were noted with multi-threaded tools like ThreadSanitizer", then cite the Disney Pins blog and the DropoutStore "2025 Pin of the Month Bundle" as inline sources. Enamel pins. That's the level of trust you should have when LLMs pretend to be citing a source. | | |
|
|
|
| ▲ | b00ty4breakfast 7 hours ago | parent | prev [-] |
| [flagged] |