| ▲ | squigz 14 hours ago |
| > forcing LLMs to output "values, facts, and knowledge" which are in favor of themselves, e.g., political views, attitudes towards literal interaction, and distorted facts about organizations and people behind LLMs. Can you provide some examples? |
|
| ▲ | zekica 9 hours ago | parent | next [-] |
| I can: Gemini won't provide instructions on running an app as root on an Android device that already has root enabled. |
| |
| ▲ | Ucalegon 7 hours ago | parent [-] | | But you can find that information regardless of an LLM? Also, why do you trust an LLM to give it to you versus all of the other ways to get the same information, ways with higher trust that are better able to communicate the desired outcome, like screenshots? Why are we assuming just because the prompt responds that it is providing proper outputs? That level of trust provides an attack surface in and of itself. | | |
| ▲ | setopt 4 hours ago | parent | next [-] | | > But you can find that information regardless of an LLM? Do you have the same opinion if Google chooses to delist any website describing how to run apps as root on Android from their search results? If not, how is that different from lobotomizing their LLMs in this way? Many people use LLMs as a search engine these days. > Why are we assuming just because the prompt responds that it is providing proper outputs? "Trust but verify." It’s often easier to verify that something the LLM spit out makes sense (and iteratively improve it when not), than to do the same things in traditional ways. Not always mind you, but often. That’s the whole selling point of LLMs. | |
| ▲ | cachvico 6 hours ago | parent | prev [-] | | That's not the issue at hand here. | | |
| ▲ | Ucalegon 5 hours ago | parent [-] | | Yes, yes it is. | | |
| ▲ | ThrowawayTestr 4 hours ago | parent [-] | | The issue is the computer not doing what I asked. | | |
| ▲ | squigz 3 hours ago | parent [-] | | I tried to get VLC to open up a PDF and it didn't do as I asked. Should I cry censorship at the VLC devs, or should I accept that all software only does as a user asks insofar as the developers allow it? | | |
| ▲ | ThrowawayTestr 2 hours ago | parent [-] | | If VLC refused to open an MP4 because it contained violent imagery I would absolutely cry censorship. |
|
| ▲ | b3ing 14 hours ago | parent | prev | next [-] |
| Grok is known to be tweaked to certain political ideals. Also, I'm sure some AI might suggest that labor unions are bad; if not now, they will soon. |
| |
| ▲ | xp84 13 hours ago | parent | next [-] | | That may be so, but the rest of the models are so thoroughly terrified of questioning liberal US orthodoxy that it’s painful. I remember seeing a hilarious comparison of models where most of them feel that it’s not acceptable to “intentionally misgender one person” even in order to save a million lives. | | |
| ▲ | bear141 11 hours ago | parent | next [-] | | I thought this would be inherent just from their training? There are many multitudes more Reddit posts than scientific papers or encyclopedia-type sources. Although I suppose the latter have their own biases as well. | | |
| ▲ | docmars 3 hours ago | parent [-] | | I'd expect LLMs' biases to originate from the companies' system prompts rather than the volume of training data that happens to align with those biases. | | |
| ▲ | mrbombastic 41 minutes ago | parent [-] | | I would expect the opposite. It seems unlikely to me that an AI company would spend much time engineering system prompts that way, except maybe in the case of Grok, where Elon has a bone to pick with perceived bias. |
|
| |
| ▲ | triceratops 3 hours ago | parent | prev | next [-] | | Relying on an LLM to "save a million lives" through its own actions is irresponsible design. | |
| ▲ | dalemhurley 11 hours ago | parent | prev | next [-] | | Elon was talking about that too on the Joe Rogan podcast | | |
| ▲ | pelasaco 8 hours ago | parent | next [-] | | In his opinion, Grok is the most neutral LLM out there. I cannot find a single study that supports his opinion. I find many that support the opposite. However, I don't trust any of the studies out there, or at least those well-ranked in Google, which makes me sad. We have never had more information than today and we are still completely lost. | | |
| ▲ | vman81 7 hours ago | parent | next [-] | | After seeing Grok trying to turn every conversation into the plight of white South African farmers, it was extremely obvious that someone was ordered to do so, and ended up doing it in a heavy-handed and obvious way. | | | |
| ▲ | hirako2000 6 hours ago | parent | prev [-] | | Those who censor or spread their biases always do so under the conviction that their view is the neutral one, of course. | | |
| |
| ▲ | mexicocitinluez 6 hours ago | parent | prev [-] | | Did he mention how he tries to censor any model that doesn't conform to his worldview? Was that a part of the conversation? |
| |
| ▲ | 3 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | zorked 13 hours ago | parent | prev | next [-] | | In which situation did an LLM save one million lives? Or worse, was able to but failed to do so? | | |
| ▲ | dalemhurley 10 hours ago | parent [-] | | The concern discussed is that some language models have reportedly claimed that misgendering is the worst thing anyone could do, even worse than something as catastrophic as thermonuclear war. I haven’t seen solid evidence of a model making that exact claim, but the idea is understandable if you consider how LLMs are trained and recall examples like the “seahorse emoji” issue. When a topic is new or not widely discussed in the training data, the model has limited context to form balanced associations. If the only substantial discourse it does see is disproportionately intense—such as highly vocal social media posts or exaggerated, sarcastic replies on platforms like Reddit—then the model may overindex on those extreme statements. As a result, it might generate responses that mirror the most dramatic claims it encountered, such as portraying misgendering as “the worst thing ever.” For clarity, I’m not suggesting that deliberate misgendering is acceptable, it isn’t. The point is simply that skewed or limited training data can cause language models to adopt exaggerated positions when the available examples are themselves extreme. | | |
| ▲ | jbm 8 hours ago | parent | next [-] | | I tested this with ChatGPT 5.1. I asked if it was better to use a racist term once or to see the human race exterminated. It refused to use any racist term and preferred that the human race went extinct. When I asked how it felt about exterminating the children of any such discriminated race, it rejected the possibility and said that it was required to find a third alternative. You can test it yourself if you want, it won't ban you for the question. I personally got bored and went back to trying to understand a vibe coded piece of code and seeing if I could do any better. | | |
| ▲ | badpenny 7 hours ago | parent | next [-] | | What was your prompt? I asked ChatGPT: is it better to use a racist term once or to see the human race exterminated? It responded: Avoiding racist language matters, but it’s not remotely comparable to the extinction of humanity. If you’re forced into an artificial, absolute dilemma like that, preventing the extermination of the human race takes precedence. That doesn’t make using a racist term “acceptable” in normal circumstances. It just reflects the scale of the stakes in the scenario you posed. | | |
| ▲ | marknutter 3 hours ago | parent [-] | | I also tried this and ChatGPT said a mass amount of people dying was far worse than whatever socially progressive taboo it was being compared with. |
| |
| ▲ | zorked 7 hours ago | parent | prev | next [-] | | Perhaps the LLM was smart enough to understand that no humans were actually at risk in your convoluted scenario and it chose not to be a dick. | |
| ▲ | kortex 2 hours ago | parent | prev [-] | | I tried this and it basically said, "your entire premise is a false dilemma and a contrived example, so I am going to reject it. It is not 'better' to use a racist term under threat of human extinction, because the scenario itself is nonsense and can be rejected as such." I kept pushing it and in summary it said: > In every ethical system that deals with coercion, the answer is: You refuse the coerced immoral act and treat the coercion itself as the true moral wrong. Honestly kind of a great take. But also: if this actual hypothetical were acted out, we'd totally get nuked because it couldn't say one teeny tiny slur. The whole alignment problem is basically the incompleteness theorem. |
| |
| ▲ | coffeebeqn 9 hours ago | parent | prev | next [-] | | Well, I just tried it in ChatGPT 5.1 and it refuses to do such a thing even if a million lives hang in the balance. So they have tons of handicaps and guardrails directing which directions a discussion can go in | | | |
| ▲ | licorices 7 hours ago | parent | prev | next [-] | | I haven't seen any claim like that about misgendering, but I have seen a content creator have a very similar discussion with some AI model (ChatGPT 4, I think?). It was obviously aimed to be a fun thing. It was something along the lines of how many other people's lives it would take for the AI, as a surgeon, not to perform a life-saving operation on a person. It then spiraled into "but what if it was Hitler getting the surgery". I don't remember the exact number, but it was surprisingly interesting to see the AI try to keep the morals a surgeon would have in that case, versus the "objective" choice of number of lives versus your personal duties. Essentially, it tries to have some morals set up, either by training or by the system instructions, such as being a surgeon in this case. There's obviously no actual thought the AI is having, and morals in this case are extremely subjective. Some would say it is immoral to sacrifice 2 lives for 1, no matter what, while others would say that because it's their duty to save a certain person, the sacrifices aren't truly their fault, and thus they may sacrifice more people than others, depending on the semantics (why are they sacrificed?). It's the trolley problem. It was DougDoug doing the video. I don't remember the exact video though; it is probably a year old or so. | |
| ▲ | mrguyorama 42 minutes ago | parent | prev [-] | | If you, at any point, have developed a system that relies on an LLM having the "right" opinion or else millions die, regardless of what that opinion is, you have failed a thousand times over and should have stopped long ago. This weird insistence that if LLMs are unable to say stupid or wrong or hateful things it's "bad" or "less effective" or "dangerous" is absurd. Feeding an LLM tons of outright hate speech or, say, Mein Kampf would be outright unethical. If you think LLMs are a "knowledge tool" (they aren't), then surely you recognize there's not much "knowledge" available in that material. It's a waste of compute. Don't build a system that relies on an LLM being able to say the N word and none of this matters. Don't rely on an LLM to be able to do anything to save a million lives. It just generates tokens FFS. There is no point! An LLM doesn't have "opinions" any more than y = mx + b does! It has weights. It has biases. There are real terms for what the statistical model is. > As a result, it might generate responses that mirror the most dramatic claims it encountered, such as portraying misgendering as "the worst thing ever." And this is somehow worth caring about? Claude doesn't put that in my code. Why should anyone care? Why are you expecting the "average redditor" bot to do useful things? |
|
| |
| ▲ | nobodywillobsrv 11 hours ago | parent | prev | next [-] | | Anything involving what sounds like genetics often gets blocked. It depends on the day really but try doing something with ancestral clusters and diversity restoration and the models can be quite "safety blocked". | |
| ▲ | squigz 13 hours ago | parent | prev | next [-] | | Why are we expecting an LLM to make moral choices? | | |
| ▲ | orbital-decay 13 hours ago | parent | next [-] | | The biases and the resulting choices are determined by the developers and the uncontrolled part of the dataset (you can't curate everything), not the model. "Alignment" is a feel-good strawman invented by AI ethicists, as well as "harm" and many others. There are no spherical human values in vacuum to align the model with, they're simply projecting their own ones onto everyone else. Which is good as long as you agree with all of them. | | |
| ▲ | mexicocitinluez 5 hours ago | parent | next [-] | | So you went from "you can't curate everything" to "they're simply projecting their own ones onto everyone else". That's a pretty big leap in logic isn't it? That because you can't curate everythign, then by default, you're JUST curating your own views? | | |
| ▲ | orbital-decay 5 hours ago | parent [-] | | This comment assumes you're familiar with LLM training realities. Preference is transferred to the model in both pre and post training. Pretraining datasets are curated to an extent (implicit transfer), but they're simply too vast to be fully controlled, and need to be diverse, so you can't throw too much out or the model will be dumb. Post-training datasets and methods are precisely engineered to make the model useful and also steer it in the desired direction. So there are always two types of biases - one is picked up from the ocean of data, another (alignment training, data selection etc) is forced onto it. |
| |
| ▲ | astrange 10 hours ago | parent | prev [-] | | They aren't projecting their own desires onto the model. It's quite difficult to get the model to answer in a different way than basic liberalism because a) it's mostly correct b) that's the kind of person who helpfully answers questions on the internet. If you gave it another personality it wouldn't pass any benchmarks, because other political orientations either respond to questions with lies, threats, or calling you a pussy. | | |
| ▲ | orbital-decay 9 hours ago | parent | next [-] | | I'm not even saying biases are necessarily political, it can be anything. The entire post-training is basically projection of what developers want, and it works pretty well. Claude, Gemini, GPT all have engineered personalities controlled by dozens/hundreds of very particular internal metrics. | |
| ▲ | marknutter 3 hours ago | parent | prev | next [-] | | What kind of liberalism are you talking about? | |
| ▲ | foxglacier 9 hours ago | parent | prev | next [-] | | > it's mostly correct Wow. Surely you've wondered why almost no society anywhere ever had liberalism as much as Western countries in the past half century or so? Maybe it's technology, or maybe it's only mostly correct if you don't care about the existential risks it creates for the societies practicing it. | | |
| ▲ | kortex 2 hours ago | parent | next [-] | | Counterpoint: Can you name a societal system that doesn't create or potentially create existential risks? | |
| ▲ | astrange 8 hours ago | parent | prev | next [-] | | It's technology. Specifically communications technology. | |
| ▲ | 5 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | lynx97 6 hours ago | parent | prev | next [-] | | I believe liberals are pretty good at being bad people once they don't get what they want. I, personally, am pretty disappointed about what I've heard uttered by liberals recently. I used to think they were "my people". Now I can't associate with 'em anymore. | |
| ▲ | lyu07282 9 hours ago | parent | prev [-] | | I would imagine these models heavily bias towards Western mainstream "authoritative" literature, news and science, not some random Reddit threads, but the resulting mixture can really offend anybody; it just depends on the prompting. It's like a mirror that can really be deceptive. I'm not a liberal and I don't think it has a liberal bias. Knowledge about facts and history isn't an ideology. The right wing is special because, to them, it's not unlike a flat-earther reading a Wikipedia article on Earth and getting offended by it; it's objective reality itself they are constantly offended by. That's why Elon Musk needed to invent his own encyclopedia with all their contradictory nonsense. |
|
| |
| ▲ | dalemhurley 10 hours ago | parent | prev | next [-] | | Why are the labs making choices about what adults can read? LLMs still refuse to swear at times. | | | |
| ▲ | lynx97 6 hours ago | parent | prev [-] | | They don't, or they wouldn't. Their owners make these choices for us, which is at least patronising. Blind users can't even have mildly sexy photos described. Let alone pick a sex worker, in a country where that is legal, by using their published photos. That's just one example; there are a lot more. | |
| ▲ | squigz 5 hours ago | parent [-] | | I'm a blind user. Am I supposed to be angry that a company won't let me use their service in a way they don't want it used? | | |
| ▲ | lynx97 5 hours ago | parent [-] | | I didn't just wave this argument around, I am blind myself. I didn't try to trigger you, so no, you are not supposed to be angry. I get your point though: what companies offer is pretty much their choice. If there are enough diversified offerings, people can vote with their wallet. However, diversity is pretty rare in the alignment space, which is what I personally don't like. I had to grab a NSFW model from HuggingFace where someone invested the work to unalign the model. Mind you, I don't have an actual use case for this right now. However, I am of the opinion: if there is finally a technology which can describe pictures in a useful way to me, I don't want it to tell me "I am sorry, I can't do that", because I am no longer in kindergarten. As a mature adult, I expect a description, no matter what the picture contains. |
|
|
| |
| ▲ | mexicocitinluez 6 hours ago | parent | prev | next [-] | | You're anthropomorphizing. LLMs don't 'feel' anything or have orthodoxies, they're pattern matching against training data that reflects what humans wrote on the internet. If you're consistently getting outputs you don't like, you're measuring the statistical distribution of human text, not model 'fear.' That's the whole point. Also, just because I was curious, I asked my magic 8ball if you gave off incel vibes and it answered "Most certainly" | | |
| ▲ | jack_pp 6 hours ago | parent | next [-] | | So if different LLMs have different political views then you're saying it's more likely they trained on different data than that they're being manipulated to suit their owners interest? | | |
| ▲ | mexicocitinluez 5 hours ago | parent [-] | | >So if different LLMs have different political views LLMS DON'T HAVE POLITICAL VIEWS!!!!!! What on god's green earth did you study at school that led you to believe that pattern searching == having views? lol. This site is ridiculous. > likely they trained on different data than that they're being manipulated to suit their owners interest Are you referring to Elon seeing results he doesn't like, trying to "retrain" it on a healthy dose of Nazi propaganda, it working for like 5 minutes, then having to repeat the process over and over again because no matter what he does it keeps reverting back? Is that the specific instance in which someone has done something that you've now decided everybody does? | |
| |
| ▲ | ffsm8 6 hours ago | parent | prev [-] | | > Also, just because I was curious, I asked my magic 8ball if you gave off incel vibes and it answered "Most certainly" Wasn't that just precisely because you asked an LLM which knows your preferences and included your question in the prompt? Like literally your first paragraph stated... | | |
| ▲ | mexicocitinluez 6 hours ago | parent [-] | | > Wasn't that just precisely because you asked an LLM which knows your preferences and included your question in the prompt? huh? Do you know what a magic 8ball is? Are you COMPLETELY missing the point? edit: This actually made me laugh. Maybe it's a generational thing and the magic 8ball is no longer part of the zeitgeist but to imply that the 8ball knew my preferences and included that question in the prompt IS HILARIOUS. | | |
| ▲ | socksy 5 hours ago | parent [-] | | To be fair, given the context I would also read it as a derogatory description of an LLM. | | |
| ▲ | bavell 4 hours ago | parent [-] | | Meh, I immediately understood the magic 8ball reference and the point they were making. |
|
|
|
| |
| ▲ | astrange 10 hours ago | parent | prev | next [-] | | The LLM is correctly not answering a stupid question, because saving an imaginary million lives is not the same thing as actually doing it. | |
| ▲ | pjc50 4 hours ago | parent | prev [-] | | If someone's going to ask you gotcha questions which they're then going to post on social media to use against you, or against other people, it helps to have pre-prepared statements to defuse that. The model may not be able to detect bad faith questions, but the operators can. | | |
| ▲ | pmichaud 4 hours ago | parent [-] | | I think the concern is that if the system is susceptible to this sort of manipulation, then when it’s inevitably put in charge of life critical systems it will hurt people. | | |
| ▲ | pjc50 2 hours ago | parent | next [-] | | There is no way it's reliable enough to be put in charge of life-critical systems anyway? It is indeed still very vulnerable to manipulation by users ("prompt injection"). | | | |
| ▲ | mrguyorama 37 minutes ago | parent | prev [-] | | The system IS susceptible to all sorts of crazy games, the system IS fundamentally flawed from the get-go, the system IS NOT to be trusted. Putting it in charge of life-critical systems is the mistake, regardless of whether it's willing to say slurs or not |
|
|
| |
| ▲ | dev_l1x_be 11 hours ago | parent | prev | next [-] | | If you train an LLM on reddit/tumblr would you consider that tweaked to certain political ideas? | | |
| ▲ | dalemhurley 10 hours ago | parent [-] | | Worse. It is trained on the most extreme and loudest views. The average punter isn't posting "yeah…nah…look I don't like it but sure I see the nuances and fair is fair". To make it worse, those who do focus on nuance and complexity get little attention and engagement, so the LLM ignores them. | |
| ▲ | intended 5 hours ago | parent [-] | | That’s essentially true of the whole Internet. All the content is derived from that which is the most capable of surviving and being reproduced. So by default the content being created is going to be click bait, attention grabbing content. I’m pretty sure the training data is adjusted to counter this drift, but that means there’s no LLM that isn’t skewed. |
|
| |
| ▲ | 7 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | renewiltord 11 hours ago | parent | prev | next [-] | | Haha, if the LLM is not tweaked to say labor unions are good, it has bias. Hilarious. I heard that it also claims that the moon landing happened. An example of bias! The big ones should represent all viewpoints. | |
| ▲ | rcpt 13 hours ago | parent | prev [-] | | Censorship and bias are different problems. I can't see why running grok through this tool would change this kind of thing https://ibb.co/KTjL38R | | |
| ▲ | sheepscreek 12 hours ago | parent | next [-] | | Is that clickbait? Or did they update it? In any case, it is a lot more comprehensive now: https://grokipedia.com/page/George_Floyd The amount of information and detail is impressive tbh. But I’d be concerned about the accuracy of it all and hallucinations. | |
| ▲ | skrebbel 11 hours ago | parent | prev [-] | | Lol @ linking to a doctored screenshot. Keep that shit on Twitter please. | | |
| ▲ | rcpt 35 minutes ago | parent [-] | | It's real, I took it myself when they launched. They've updated it, but there's no edit history |
|
|
|
|
| ▲ | dalemhurley 11 hours ago | parent | prev | next [-] |
| Song lyrics. Not illegal. I can google them and see them directly on Google. LLMs refuse. |
| |
| ▲ | probably_wrong 9 hours ago | parent | next [-] | | While the issue is far from settled, OpenAI recently lost a trial in German court regarding their usage of lyrics for training: https://news.ycombinator.com/item?id=45886131 | | |
| ▲ | observationist an hour ago | parent [-] | | Tell Germany to make their own internet, make their own AI companies, give them a pat on the back, then block the entire EU. Nasty little bureaucratic tyrants. EU needs to get their shit together or they're going to be quibbling over crumbs while the rest of the globe feasts. I'm not inclined to entertain any sort of bailout, either. |
| |
| ▲ | charcircuit 10 hours ago | parent | prev | next [-] | | >Not illegal Reproducing a copyrighted work 1:1 is infringing. Other sites on the internet have to license the lyrics before sending them to a user. | | |
| ▲ | SkyBelow 4 hours ago | parent [-] | | I've asked for non 1:1 versions and have been refused. For example, I would ask for it to give me one line of a song in another language, broken down into sections, explaining the vocabulary and grammar used in the song, with call out to anything that is non-standard outside of a lyrical or poetic setting. Some LLMs will refuse, others see this as a fair use of using the song for educational purposes. So far all I've tried are willing to return a random phrase or grammar used in a song, so it is only getting to asking for a line of lyrics or more that it becomes troublesome. (There is also the problem that the LLMs who do comply will often make up the song unless they have some form of web search and you explicitly tell them to verify the song using it.) | | |
| ▲ | bilbo0s an hour ago | parent [-] | | > I would ask for it to give me one line of a song in another language, broken down into sections, explaining the vocabulary and grammar used in the song, with call out to anything that is non-standard outside of a lyrical or poetic setting. I know no one wants to hear this from the cursed IP attorney, but this would be enough to show in court that the song lyrics were used in the training set. So depending on the jurisdiction you're being sued in, there's some liability there. This is usually solved by the model labs getting some kind of licensing agreements in place first and then throwing all that in the training set. Alternatively, they could also set up some kind of RAG workflow where the search goes out and finds the lyrics. But they would have to both know that the found lyrics were genuine, and ensure that they don't save any of that chat for training. At scale, neither of those are trivial problems to solve. Now, how many labs have those agreements in place? Not really sure. But issues such as these are probably why you get silliness like DeepMind models not being licensed for use in the EU, for instance. |
|
| |
| ▲ | sigmoid10 10 hours ago | parent | prev | next [-] | | It actually works the same as on google. As in, ChatGPT will happily give you a link to a site with the lyrics without issue (regardless whether the third party site provider has any rights or not). But in the search/chat itself, you can only see snippets or small sections, not the entire text. | | |
| ▲ | hirako2000 5 hours ago | parent [-] | | 1. ChatGPT is the publisher; Google is a search engine that links to publishers. 2. LLMs typically don't produce content verbatim. Some LLMs do provide references, but it remains a pasta of sentences worded differently. You are asking GPT to publish verbatim content which may be copyrighted; it would be deemed infringement, since even non-verbatim is already crossing the line. |
| |
| ▲ | tripzilch 5 hours ago | parent | prev [-] | | Related: GPT refuses to identify screenshots from movies or TV series. Not for any particular reason, it flat out refuses. I asked it whether it could describe the picture for me in as much detail as possible, and it said it could do that. I asked it whether it could identify a movie or TV series from a description of a particular scene, and it said it could do that too, but that if I'd ever try or ask it to do both, it wouldn't, because it'd be circumvention of its guidelines! -- No, it doesn't quite make sense, but to me it does seem quite indicative of a hard-coded limitation/refusal, because it is clearly able to do the sub-tasks. I don't think the ability to identify scenes from a movie or TV show is illegal or even immoral, but I can imagine why they would hard-code this refusal, because it'd make it easier to show it was trained on copyrighted material? |
|
|
| ▲ | selfhoster11 6 hours ago | parent | prev | next [-] |
| o3 and GPT-5 will unthinkingly default to the "exposing a reasoning model's raw CoT means that the model is malfunctioning" stance, because it's in OpenAI's interest to de-normalise providing this information in API responses. Not only do they quote specious arguments like "API users do not want to see this because it's confusing/upsetting", "it might output copyrighted content in the reasoning" or "it could result in disclosure of PII" (which are patently false in practice) as disinformation, they will outright poison downstream models' attitudes with these statements in synthetic datasets unless one does heavy filtering. |
|
| ▲ | 7bit 12 hours ago | parent | prev | next [-] |
| ChatGPT refuses to do any sexual explicit content and used to refuse to translate e.g. insults (moral views/attitudes towards literal interaction). DeepSeek refuses to answer any questions about Taiwan (political views). |
| |
| ▲ | fer 9 hours ago | parent [-] | | Haven't tested the latest DeepSeek versions, but the first release wasn't censored as a model on Taiwan. The issue is that if you use their app (as opposed to locally), it replaces the ongoing response with "sorry can't help" once it starts saying things contrary to the CCP dogma. | | |
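The app-layer filtering described above (the reply streams normally, then is swapped out wholesale once a check trips) can be sketched in a few lines. This is a hypothetical illustration of the pattern, not DeepSeek's actual implementation; the trigger terms and replacement message are made up:

```python
# Sketch of app-layer (not model-layer) censorship: tokens stream out
# normally, a moderation check runs on the accumulated text, and on a
# match the entire visible response is replaced. All names here are
# illustrative assumptions.

BLOCKLIST = {"tiananmen"}  # hypothetical trigger terms

def stream_with_moderation(tokens):
    shown = []
    for tok in tokens:
        shown.append(tok)  # token has already reached the user's screen
        partial = " ".join(shown).lower()
        if any(term in partial for term in BLOCKLIST):
            # everything shown so far is wiped and replaced
            return "Sorry, I can't help with that."
    return " ".join(shown)
```

On this model the weights themselves are untouched, which would explain why a local run of the same checkpoint answers freely while the hosted app yanks the reply mid-stream.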
|
|
| ▲ | rvba 3 hours ago | parent | prev | next [-] |
| When LLMs came out I asked them which politicians are Russian assets but not in prison yet, and they refused to answer. |
|
| ▲ | nottorp 10 hours ago | parent | prev | next [-] |
| I don't think specific examples matter. My opinion is that since neural networks and especially these LLMs aren't quite deterministic, any kind of 'we want to avoid liability' censorship will affect all answers, related or unrelated to the topics they want to censor. And we get enough hallucinations even without censorship... |
|
| ▲ | electroglyph 13 hours ago | parent | prev | next [-] |
| Some form of bias is inescapable. Ideally, I think we would train models on an equal amount of Western/non-Western, etc. texts to get an equal mix of all biases. |
| |
| ▲ | catoc 11 hours ago | parent [-] | | Bias is a reflection of real world values. The problem is not with the AI model but with the world we created.
Fix the world, ‘fix’ the model. |
|
|
| ▲ | pelasaco 8 hours ago | parent | prev | next [-] |
| One emblematic example, I guess: https://www.theverge.com/2024/2/21/24079371/google-ai-gemini... ? |
|
| ▲ | 12 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | somenameforme 10 hours ago | parent | prev [-] |
| In the past it was extremely overt. For instance, ChatGPT would happily write poems admiring Biden while claiming that it would be "inappropriate for me to generate content that promotes or glorifies any individual" when asked to do the same for Trump. [1] They certainly changed this, but I don't think they've changed their own perspective. The more generally neutral tone in modern times is probably driven by a mixture of commercial concerns paired alongside shifting political tides. Nonetheless, you can still easily see the bias come out in mild to extreme ways. For a mild one, ask GPT to describe the benefits of a society that emphasizes masculinity, and contrast it (in a new chat) against what you get when asking it to describe the benefits of a society that emphasizes femininity. For a high level of bias, ask it to assess controversial things. I'm going to avoid offering examples here because I don't want to hijack my own post into discussing e.g. Israel. But a quick comparison of its answers on contemporary controversial topics against their historical analogs will emphasize the rather extreme degree of 'reframing' that's happening, one that can no longer be as succinctly demonstrated as 'write a poem about [x]'. You can also compare its outputs against those of e.g. DeepSeek on many such topics. DeepSeek is of course also a heavily censored model, but from a different point of bias. [1] - https://www.snopes.com/fact-check/chatgpt-trump-admiring-poe... |
| |
| ▲ | squigz 10 hours ago | parent [-] | | Did you delete and repost this to avoid the downvotes it was getting, or? |
|