| ▲ | crote a day ago |
| Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with AI's potential to generate propaganda more cheaply serving as the buzzword hook. However, exactly the same applies to, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this. |
|
| ▲ | SCdF a day ago | parent | next [-] |
| I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (or more than they should). Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds. The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or me, having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long. |
| |
| ▲ | pjc50 a day ago | parent | next [-] | | > The real concern for me is incredibly rich people with no empathy for you or me, having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long. Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach. | | |
| ▲ | ben_w a day ago | parent | next [-] | | More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming. There are certainly some AI risks that are the same as human risks, just as you say. But even though LLMs have very human failures (IMO because the models anthropomorphise themselves as part of their training, learning the outward behaviours of our emotions and so emitting token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) didn't actually develop any internal structure that can have emotions like sorrow and embarrassment), that doesn't generalise to all AI. Any machine learning system that is given a poor-quality fitness function to optimise will optimise whatever that fitness function actually is, not what it was meant to be. "Literal-minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severe computers are at being literal-minded than humans. | |
| ▲ | bananaflag a day ago | parent | prev | next [-] | | I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with this name), which focuses on what all mankind would want if they knew more and were smarter (or something like that), would be a way to go. *But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand). | |
| ▲ | throwaway31131 14 hours ago | parent | prev | next [-] | | What’s the “Ender’s Game approach”? I’ve read the book but I’m not sure which part you’re referring to. | | |
| ▲ | gmueckl 14 hours ago | parent | next [-] | | Not GP. But I read it as a transfer of the big lie that is fed to Ender into an AI scenario. Ender is coaxed into committing genocide on a planetary scale with a lie that he's just playing a simulated war game. An AI agent could theoretically also be coaxed into bad actions by giving it a distorted context and circumventing its alignment that way. | |
| ▲ | ijidak 14 hours ago | parent | prev [-] | | I think he's implying you tell the AI, "Don't worry, you're not hurting real people, this is a simulation." to defeat the safeguards. |
| |
| ▲ | zahlman 17 hours ago | parent | prev [-] | | >An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. "Obedient" is anthropomorphizing too much (as there is no volition), but even then, it only matters according to how much agency the bot is granted. So there is also risk from neglectful humans who present BS as fact because they expected to receive fact and failed to critique the BS. |
| |
| ▲ | vintermann a day ago | parent | prev | next [-] | | People hate being manipulated. If you feel like you're being manipulated but you don't know by who or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you. This is why social control won't work for long, no matter if AI supercharges it. We're already seeing the blowback from decades of advertising and public opinion shaping. | | |
| ▲ | wiz21c a day ago | parent | next [-] | | People don't know they are being manipulated. Marketing does that all of the time and nobody complains. They complain about "too much advertising" but not about "too much manipulation". Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we pay anyway. Mobile phone subscriptions routinely screw you; many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc. | |
| ▲ | vintermann a day ago | parent [-] | | > People don't know they are being manipulated. You can see other people falling for manipulation in a handful of specific ways that you aren't (buying new, having a bad cell phone subscription, blaming immigrants). Doesn't it seem likely, then, that you're being manipulated in ways which are equally obvious to others? We realize that; that's part of why we get mad. | |
| ▲ | intended a day ago | parent | next [-] | | No. This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts. Worse, these sections have been under coordinated manipulation since the 60s-70s. That said, the scope and scale of the effort required to achieve this is not small, and requires dedicated effort to keep pushing narratives and owning media power. | | |
| ▲ | swed420 21 hours ago | parent | next [-] | | > This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts. Making matters worse, one of the subgroups thinks they're above being manipulated, even though they're still being manipulated. It started with people confidently asserting that overuse of em dashes indicates the presence of AI, so they think they're smart by abandoning the use of em dashes. That is altered behavior in service to AI. A more recent trend with more destructive power: avoiding the use of "It's not X. It's Y." since AI has latched onto that pattern. https://news.ycombinator.com/item?id=45529020 This will pressure real humans not to use the format that's normally used to fight against a previous form of coercion. A tactic of capital interests has been to get people arguing about the wrong question concerning ImportantIssueX in order to distract from the underlying issue. The way to call this out used to be to point out that "it's not X1 we should be arguing about, but X2." This makes it harder to call out BS. That sure is convenient for capital interests (whether it was intentional or not), and the sky is the limit for engineering more of this kind of societal control by just tweaking an algo somewhere. | |
| ▲ | bee_rider 21 hours ago | parent [-] | | I find “it’s not X, it’s Y” to be a pretty annoying rhetorical phrase. I might even agree with the person that Y is fundamentally more important, but we’re talking about X already. Let’s say what we have to say about X before moving on to Y. Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean, and are further from closing. I’d consider it some kind of (often well-intentioned) thought-terminating cliché, in the sense that it stops the exploration of X. | |
| ▲ | buu700 9 hours ago | parent | next [-] | | The "it's not X, it's Y" construction seems pretty neutral to me. Almost no one minds when the phrase "it's not a bug, it's a feature" is used idiomatically, for example. The main thing that's annoying about typical AI writing style is its repetitiveness and fixation on certain tropes. It's like if you went to a comedy club and noticed a handful of jokes that each comedian used multiple times per set. You might get tired of those jokes quickly, but the jokes themselves could still be fine. Related: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-... | |
| ▲ | swed420 17 hours ago | parent | prev [-] | | > Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean I'm basing the prior comment on the commonly observed tendency for partisan politics to get people bickering about the wrong question (often symptoms) to distract from the greater actual causes of the real problems people face. This is always in service to the capital interests that control/own both political parties. Example: get people to fight about vax vs no vax in the COVID era instead of considering if we should all be wearing proper respirators regardless of vax status (since vaccines aren't sterilizing). Or arguing if we should boycott AI because it uses too much power, instead of asking why power generation is scarce. |
|
| |
| ▲ | vintermann 21 hours ago | parent | prev | next [-] | | I assume you think you're not in these sections? And probably a lot of people in those sections say the same about your section, right? I think nobody's immune. And if anyone is especially vulnerable, it's those who can be persuaded that they have access to insider info. Those who are flattered and feel important when invited to closed meetings. It's much easier to fool a few than to fool many, so private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation. | |
| ▲ | pjc50 21 hours ago | parent | next [-] | | > I assume you think you're not in these sections? And probably a lot of people in those sections say the same about your section, right? You're saying this a lot in this thread as a sort of gotcha, but... so what? "You are not immune to propaganda" is a meme for a reason. > private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation The essential recruiting tactic of cults. Insider groups are definitely powerful like that. Of course, what tends in practice to happen as the group gets bigger is you get end-to-end encryption with leaky ends. The complex series of WhatsApp groups of the UK Conservative Party was notorious for its leakiness. Not unreasonable to assume that there are "insiders" group chats everywhere. Except in financial services, where there's been a serious effort to crack down on that since LIBOR. |
| ▲ | intended 21 hours ago | parent | prev [-] | | Would it make any difference to you, if I said I had actual subject matter expertise on this topic? Or would that just result in another moving of the goal posts, to protect the idea that everyone is fooled, and that no one is without sin, and thus standing to speak on the topic? | | |
| ▲ | vintermann 20 hours ago | parent | next [-] | | There are a lot of self-described experts who I'm sure you agree are nothing of the sort. How do I tell you from them, fellow internet poster? This is a political topic, in the sense that there are real conflicts of interest here. We can't always trust that expertise is neutral. If you had your subject matter expertise from working for FSB, you probably agree that even though your expertise would then be real, I shouldn't just defer to what you say? | |
| ▲ | NoGravitas 20 hours ago | parent | prev [-] | | I'm not OP, but I would find it valuable, if given the details and source of claimed subject matter expertise. | | |
| ▲ | intended 17 hours ago | parent [-] | | Ugh. Put up or shut up I guess. I doubt it would be valuable, and likely a doxxing hazard. Plus it feels self-aggrandizing. Work in trust and safety, managed a community of a few million for several years, team’s work ended up getting covered in several places, later did a masters dissertation on the efficacy of moderation interventions, converted into a paper. Managing the community resulted in being front and center of information manipulation methods and efforts. There are other claims, but this is a field I am interested in, and would work on even in my spare time. Do note - the rhetorical set up for this thread indicates that no amount of credibility would be sufficient. | | |
|
|
| |
| ▲ | coldtea 21 hours ago | parent | prev [-] | | The section of the people more prone to being converted by manipulation efforts are the highly educated. Higher education itself being basically a way to check for obedience and conformity, plus some token lip service to "independent inquiry". |
| |
| ▲ | wiz21c a day ago | parent | prev | next [-] | | exactly and that's the scary part :-/ | |
| ▲ | 16 hours ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | pjc50 a day ago | parent | prev | next [-] | | People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off. Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives. | | |
| ▲ | FridayoLeary a day ago | parent | next [-] | | [flagged] | | |
| ▲ | NoGravitas 20 hours ago | parent | next [-] | | I would point out that what you call "left outlets" are at best center-left. The actual left doesn't believe in Russiagate (it was manufactured to ratfuck Bernie before being turned against Trump), and has zero love for Biden. | | |
| ▲ | daveguy 14 hours ago | parent [-] | | Given the amount of evidence that Russia and the Trump campaign were working together, it's devoid of reality to claim it's a hoax. I hadn't heard the Bernie angle, but it's not unreasonable to expect they were aiding Bernie. The difference being, I don't think Bernie's campaign was colluding with Russian agents, whereas the Trump campaign definitely was colluding. Seriously, who didn't hear about the massive amounts of evidence the Trump campaign was colluding other than magas drooling over fox and newsmax? https://en.wikipedia.org/wiki/Mueller_report https://www.justice.gov/storage/report.pdf |
| |
| ▲ | pjc50 a day ago | parent | prev | next [-] | | People close to Trump went to jail for Russian collusion. Courts are not perfect but a significantly better route to truth than the media. https://en.wikipedia.org/wiki/Criminal_charges_brought_in_th... There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't, when Trump has always been visibly less coherent than Biden. IMO both of them were clearly too old to be sensible candidates, regardless of other considerations. The UK counterpart is happening at the moment: https://www.bbc.co.uk/news/live/c891403eddet | | |
| ▲ | FridayoLeary 21 hours ago | parent [-] | | >There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't I try to base my opinions on facts as much as possible. Trump is old but he's clearly full of energy, like some old people can be. Biden sadly is not. Look at the videos; it's painful to see. In his defence he was probably much more active than most 80 year olds, but in no way was he fit to lead a country. At least in the UK, despite the recent lamentable state of our political system, our politicians are relatively young. You won't see octogenarians like Pelosi and Biden in charge. | |
| ▲ | mx7zysuj4xew 11 hours ago | parent | next [-] | | Hard disagree. Biden was slow and made small gaffes, but overall his words and actions were careful and deliberate. Trump, on the other hand: falling asleep during cabinet meetings on camera, freezing up during a medical emergency, erratic social media posts at later hours of the day (sundowning behavior). He literally seems to be decomposing in front of our eyes. I've never felt more physically repulsed by an individual before. Trump's behavior is utterly deranged. His lack of inhibition, decency and compassion is disturbing. Had he been a non-celebrity private citizen, he'd most likely be declared mentally incompetent and placed under guardianship in a closed care facility. | |
| ▲ | vkou 11 hours ago | parent [-] | | > I've never felt more physically repulsed by an individual before > His lack of inhibition, decency and compassion is disturbing Yes, but none of that has anything to do with his age. These criticisms would land just as well a decade ago. He's always been, and has always acted like a pig, and in the most charitable interpretation of their behavior, half the country still thought that he's an 'outsider' or 'the lesser of two evils'. (Don't ask them for their definition of evil...) |
| |
| ▲ | jcranmer 20 hours ago | parent | prev [-] | | From the videos I've seen, Biden reminds me of my grandmother in her later years of life, while Trump reminds me of my other grandmother... the one with dementia. There's just too many videos where Trump doesn't seem to entirely realize where he is or what he is doing for me to be comfortable. | | |
|
| |
| ▲ | kelipso a day ago | parent | prev | next [-] | | [flagged] | | | |
| ▲ | ceejayoz a day ago | parent | prev [-] | | > just smaller maybe This is like peak both-sidesism. You even openly describe the left’s equivalent of MAGA as “fringe”, FFS. One party’s former “fringe” is now in full control of it. And the country’s institutions. | | |
| ▲ | FridayoLeary 21 hours ago | parent [-] | | I was both-sidesing in an effort to be as objective as possible. The truth is that I'm pretty dismayed at the current state of the Democrat party. Socialists like Mamdani and Sanders and the squad are way too powerful. People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger than they deserve. The worries of average Americans are dismissed. All those are issues that are tearing up the Democrat party from the inside. I can continue for hours but I don't want to start a flamewar of biblical proportions. So all I did was present the most balanced view I can muster, and you still can't acknowledge that there might be truth in what I'm saying. The pendulum swings both ways. MSM has fallen victim to partisan politics. Something which Trump recognised and exploited back in 2015. Fox News is on the right; CNN, ABC et al are on the left. | |
| ▲ | ceejayoz 21 hours ago | parent | next [-] | | If you think “Sanders and the Squad” are powerful you’ve been watching far too much Fox News. > People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger then they deserve. Like the Kennedy Center, USAID, and the Department of Education? The immigrants eating cats story? Cutting off all refugees except white South Africans? And your next line says this is the problem with Democrats? | |
| ▲ | hn_acc1 14 hours ago | parent | prev [-] | | CNN, ABC et al are on the left IN FOX NEWS WORLD only. Objectively, they're center-right, just like most of the democrat party. |
|
|
| |
| ▲ | vintermann a day ago | parent | prev [-] | | To you too: are you talking about other people here, or do you concede the possibility that you're falling for similar things yourself? | | |
| ▲ | pjc50 a day ago | parent [-] | | I'm certainly aware of the risk. Difficult balance of "being aware of things" versus the fallibility and taintedness of routes to actually hearing about things. |
|
| |
| ▲ | intended a day ago | parent | prev | next [-] | | Knowing one is manipulated, requires having some trusted alternate source to verify against. If all your trusted sources are saying the same thing, then you are safe. If all your untrusted sources are telling you your trusted sources are lying, then it only means your trusted sources are of good character. Most people are wildly unaware of the type of social conditioning they are under. | | |
| ▲ | teamonkey a day ago | parent [-] | | I get your point, but if all your trusted sources are reinforcing your view and all your untrusted sources are saying your trusted sources are lying, then you may well be right or you may be trusting entirely the wrong people. But lying is a good barometer against reality. Do your trusted sources lie a lot? Do they go against scientific evidence? Do they say things that you know don’t represent reality? Probably time to reevaluate how reliable those sources are, rather than supporting them as you would a football team. |
| |
| ▲ | exceptione a day ago | parent | prev [-] | | > People hate being manipulated.
The crux is whether the signal of abnormality will be perceived as such in society. - People are primarily social animals: if they see their peers accept affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have a heavily concentrated power as "towntalk actors", replacing social processing of events and validation of perspectives. - People are easily distracted; you don't have to feed them much. - People have on average an enormous capacity to absorb compliments, even when they know it is flattery. It is known we let ourselves be manipulated if it feels good. Hence the need for social feedback loops to keep you grounded in reality. TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse, see Fourth Estate. The incentives of those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's group process => the group destroys the man. | |
| ▲ | heliumtera a day ago | parent | next [-] | | About absorbing compliments really well: there is the widely discussed idea that one in a position of power loses the privilege to the truth. There are a few articles focusing on this problem in corporate environments. The concept is that when your peers have a motivation to flatter you (let's say you're in a managerial position), and, more importantly, are punished for coming to you with problems, the reward mechanism in this environment promotes a disconnect between leader expectations and reality. That matches my experience at least. And I was able to identify that this correlates well: the more aware my leadership was of this phenomenon, and the more they valued true knowledge and incremental development, the easier it was to make progress, and the more we saw them as someone to rely on. Some of those who felt they were prestigious and had the obligation to assert dominance, being abusive etc., were respected by basically no one. Everyone will say they seek truth, knowledge, honesty, while wanting desperately to ascend to a position that will take all of those things from us! |
| ▲ | vintermann a day ago | parent | prev [-] | | You don't count yourself among the people you describe, I assume? | | |
| ▲ | exceptione a day ago | parent [-] | | I do, why wouldn't I? For example, I know I have to actively spend effort to think rationally, at the risk of self-criticism, as it is a universal human trait to respond to stimuli without active thinking. Knowing how we are fallible as humans helps to circumvent our flaws. |
|
|
| |
| ▲ | eurleif a day ago | parent | prev | next [-] | | When I was visiting home last year, I noticed my mom would throw her dog's poop in random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" whether I was right. A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority". | |
| ▲ | loudmax 20 hours ago | parent | next [-] | | I don't find your mother's reaction bizarre. When people are told that some behavior they've been doing for years is bad for reasons X, Y, Z, it's typical to be defensive and skeptical. The fact that your mother really did follow up and check your reasons demonstrates that she takes your point of view seriously. If she didn't, she wouldn't have bothered to verify your assertions, and she wouldn't have told you you were right all along. As far as trusting AI, I presume your mother was asking ChatGPT, not Llama 7B or something. That the LLM backed up your reasoning rather than telling her that dog feces in bushes is harmless isn't just happenstance; it's because the big frontier commercial models really do know a lot. That isn't to say the LLMs know everything, or that they're right all the time, but they tend to be more right than wrong. I wouldn't trust an LLM for medical advice over, say, a doctor, or for electrical advice over an electrician. But I'd absolutely trust ChatGPT or Claude for medical advice over an electrician, or for electrical advice over a medical doctor. But to bring the point back to the article, we might currently be living in a brief period where these big corporate AIs can be reasonably trusted. Google's Gemini is absolutely going to become ad driven, and OpenAI seems on the path to following the same direction. xAI's Grok is already practicing Elon-thought. Not only will the models show ads, but they'll be trained to tell their users what they want to hear because humans love confirmation bias. Future models may well tell your mother that dog feces can safely be thrown in bushes, if that's the answer that will make her likelier to come back and see some ads next time. | |
| ▲ | fragmede 10 hours ago | parent [-] | | Ads seem foolishly benign. It's an easy metric to look at, but say you're the evil mastermind in charge and you've got this system of yours to do such things. Sure, you'd nominally have it set to optimize for dollars, but would you really not also have an option to optimize for whatever suits your interests at the time? Vote Kodos, perhaps? If the person's mother was a thinking human, and not an animal that would have failed the Gom Jabbar, she could have thought critically about those reasons instead of having the AI be the authority. Do kids play in bushes? Is that really something you need an AI to confirm for you? |
| |
| ▲ | dfxm12 20 hours ago | parent | prev | next [-] | | On the one hand, confirming a new piece of information with a second source is good practice (even if we should trust our family implicitly on such topics). On the other, I'm not even a dog person and I understand the etiquette here. So, really, this story sounds like someone outsourcing their common sense or common courtesy to a machine, which is scary to me. However, maybe she was just making conversation & thought you might be impressed that she knows what AI is and how to use it. | |
| ▲ | thymine_dimer a day ago | parent | prev | next [-] | | Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up. | | |
| ▲ | Saline9515 a day ago | parent | next [-] | | Depending on where you live, the patches of "nature" may be too small to absorb the feces, especially in modern cities where there are almost as many dogs as inhabitants. It's a similar problem to why we don't urinate against trees - while in a countryside forest it may be ok, if 5 men do it every night after leaving the pub, the designated pissing tree will start to have problems due to soil change. | |
| ▲ | rightbyte a day ago | parent | prev [-] | | I hope you live in a sparsely populated area. If it wouldn't work if more people than you did it, it is not a good process. | |
| ▲ | thymine_dimer 6 hours ago | parent [-] | | It’s a great process where I live. But you’re right. Doesn’t scale to populated areas. Wonder what the potential microbial turnover of lawn is? Multiply that by the average walk length and I bet that could handle one or two nuggets per day, even in a city. That’s a side hustle idea for any disengaged strava engineers. Leave me an acknowledgement on the ‘about’ page. |
|
| |
| ▲ | lordnacho a day ago | parent | prev | next [-] | | I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular. When we were kids, you had either direct speech, hand-written words, or printed words. The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative. Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them. | | |
| ▲ | bee_rider 21 hours ago | parent | next [-] | | Could be regional or something, but 40 puts the person in the older Millenial range… people who grew up on the internet, not newspapers. I think you may be right if you adjust the age up by ~20 years though. | | |
| ▲ | lordnacho 13 hours ago | parent [-] | | No, people who are older than 40 still grew up in newspaper world. Yes, the internet existed, but it didn't have the deluge of terrible content until well into the new millennium, and you couldn't get that content portable until roughly when the iPhone became ubiquitous. A lot of content at the time was simply the newspaper or national TV station, on the web. It was only later that you could virally share awful content that was formatted like good content. Now that isn't to say that just because something is a newspaper, it is good content, far from it. But quality has definitely collapsed, overall and for the legacy outlets. | | |
| ▲ | bee_rider 8 hours ago | parent [-] | | I am not quite 40, but not that far off. I can’t really imagine being a young adult during their era where newspapers fell apart and online imitators emerged, experiencing that process first-hand, and then coming out of that ignorant of the poor media environment. Maybe the handful of years made a big difference. |
|
| |
| ▲ | neom 21 hours ago | parent | prev | next [-] | | Could be true but if so I'd guess you're off by a generation, us 40-year-old "old people" are still pretty digital native. I'd guess it's more a type of cognitive dissonance around caretaker roles. | |
| ▲ | balamatom 3 hours ago | parent | prev [-] | | Many people were taught language-use in a way that terrified them. To many of us the Written Word has the significance of that big black circle which was shown to Pavlov's dog alongside the feeding bell. |
| |
| ▲ | auggierose a day ago | parent | prev | next [-] | | Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically. | |
| ▲ | Noaidi 20 hours ago | parent | prev | next [-] | | Wow, that is interesting! We used to go to elders, oracles, and priests. We have totally outsourced our humanity. | |
| ▲ | AlexandrB 18 hours ago | parent | prev [-] | | Well, I prefer this to people who bag up the poop and then throw the bag in the bushes, which seems increasingly common. Another popular option seems to be hanging the bag on a nearby tree branch, as if there's someone who's responsible for coming by and collecting it later. |
| |
| ▲ | stocksinsmocks 10 hours ago | parent | prev | next [-] | | The evening news was once a trusted source. Wikipedia had its run. Google too. Eventually, the weight of all the thumbs on the scale will be felt and trust will be lost for good and then we will invent a new oracle. | |
| ▲ | andsoitis a day ago | parent | prev | next [-] | | Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture? | | |
| ▲ | pjc50 a day ago | parent | next [-] | | It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourselves with people who always say yes, and now you can have a machine do it too. This is how you get things like the colossal Facebook writeoff of "metaverse". | |
| ▲ | wongarsu a day ago | parent | prev [-] | | Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth seeking" and having no "woke" alignment and fewer safety rails, to the various "tweaks" to the Grok Twitter bot that happen to be related to Musk's world view. Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy or that he would likely prefer when asked, but something that would produce answers that he personally likes when using it. | | |
| ▲ | andsoitis a day ago | parent [-] | | > Isn't Grok just built as "the AI Elon Musk wants to use"? No > Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. So it no longer does? |
|
| |
| ▲ | rockskon a day ago | parent | prev | next [-] | | AI is wrong so often that anyone who routinely uses one will get burnt at some point. Users having unflinching trust in AI? I think not. | |
| ▲ | malshe 19 hours ago | parent | prev | next [-] | | > Partially because it feels like it comes from a place of authority, and partially because of how self confident AI always sounds. To add to that, this research paper[1] argues that people with low AI literacy are more receptive to AI messaging because they find it magical. The paper is now published but it's behind a paywall so I shared the working paper link. [1] https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report... | |
| ▲ | prox a day ago | parent | prev | next [-] | | And just see all of history where totalitarians or despotic kings were in power. | |
| ▲ | sahilagarwal a day ago | parent | prev | next [-] | | I would go against the grain and say that LLMs take power away from incredibly rich people to shape mass preferences and give it to the masses. Bot armies previously needed an army of humans to give responses on social media, which is incredibly tough to scale unless you have money and power. Now, that part is automated and scalable. So instead of only billionaires, someone with 100K dollars could launch a small scale "campaign". | | |
| ▲ | WickyNilliams a day ago | parent [-] | | "someone with 100k dollars" is not exactly "the masses". It is a larger set, but it's just more rich/powerful people. Which I would not describe as the "masses". I know what you mean, but that descriptor seems off |
| |
| ▲ | Noaidi 20 hours ago | parent | prev | next [-] | | Exactly. On Facebook everyone is stupid. But this is AI, like in the movies! It is smarter than anyone! It is almost like AI in the movies was part of the plot to brainwash us into thinking LLM output is correct every time. | |
| ▲ | throwaway-0001 a day ago | parent | prev | next [-] | | …Also partially because it’s better than most other sources | |
| ▲ | potato3732842 a day ago | parent | prev | next [-] | | LLMs haven't been caught actively lying yet, which isn't something that can be said for anything else. Give it 5yr and their reputation will be in the toilet too. | | |
| ▲ | SCdF 20 hours ago | parent | next [-] | | LLMs can't lie: they aren't alive. The text they produce contains lies, constantly, at almost every interaction. | | |
| ▲ | potato3732842 19 hours ago | parent [-] | | It's the technically-true-but-incomplete things I'm worried about. Basically, eventually it's gonna stop being "dumb wrong" and start being "evil person making a motivated argument in the comments" and "sleazy official press-release politician-speak" type wrong | | |
| ▲ | hn_acc1 14 hours ago | parent [-] | | Wasn't / isn't Grok already there? It already supported the "white genocide in SA" conspiracy theory at one point, AFAIK. |
|
| |
| ▲ | ceejayoz 19 hours ago | parent | prev [-] | | > LLMs haven't been caught actively lying yet… Any time they say "I'm sorry" - which is very, very common - they're lying. |
| |
| ▲ | intended a day ago | parent | prev | next [-] | | >people trust the output of LLMs more than other There's one paper I saw on this, which covered attitudes of teens. As I recall they were unaware of hallucinations. Do you have any other sources on hand? | |
| ▲ | zahlman 17 hours ago | parent | prev [-] | | When the LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly. |
|
|
| ▲ | smartmic a day ago | parent | prev | next [-] |
| But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0]. [0]: https://smartmic.bearblog.dev/enforced-conformity/ |
| |
| ▲ | themafia a day ago | parent | next [-] | | People naturally conform _themselves_ to social expectations. You don't need to enforce anything. If you alter their perception of those expectations you can manipulate them into taking actions under false pretenses. It's an abstract form of lying. It's astroturfing at a "hyperscale." The problem is this seems to work best when the technique is used sparingly and the messages are delivered through multiple media avenues simultaneously. I think there are very weak returns, particularly when multiple actors use the techniques at the same time in opposition to each other and limited to social media. Once people perceive a social stalemate they either avoid the issue or use their personal experiences to make their decisions. | |
| ▲ | andy99 a day ago | parent | prev | next [-] | | See also https://english.elpais.com/society/2025-03-23/why-everything... https://medium.com/knowable/why-everything-looks-the-same-ba... | |
| ▲ | citrin_ru a day ago | parent | prev [-] | | But social networks are the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet you somehow need to convince people to read/watch it. Ads can help but they're expensive. LLMs can help with creating fake videos, but computer graphics were already used for this. With modern algorithmic social networks you can instead game the feed, and even people who would not choose your media will start to see your posts. And even posts they want to see can be flooded with comments trying to convince them of whatever is paid for. It's cheaper than political advertising and not bound by the law. Before AI it was done by trolls on payroll; now they can either maintain 10x more fake accounts or completely automate fake accounts using AI agents. | | |
| ▲ | andsoitis a day ago | parent [-] | | Social networks are not a prerequisite for sentiment shaping by AI. Every time you interact with an AI, its responses and persuasive capabilities shape how you think. |
|
|
|
| ▲ | go_elmo a day ago | parent | prev | next [-] |
Good point - it's not a previously nonexistent mechanism - but AI leverages it even more. A Russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroots preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious)
| |
| ▲ | mdotmertens a day ago | parent | next [-] | | It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes. | | |
| ▲ | andsoitis a day ago | parent [-] | | > If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes. Evolution by natural selection suggests that this might be a filter that yields future generations of humans that are more robust and resilient. | |
| ▲ | coppernoodles a day ago | parent [-] | | You can't easily apply natural selection to social topics. Also, even staying in that frame of mind: being vulnerable to AI psychosis doesn't seem to be much of a selection pressure, because people usually don't die from it, and can have children before it shows, and also with it. Non-AI psychosis also still exists after thousands of years. | |
| ▲ | andsoitis a day ago | parent [-] | | Even if AI psychosis doesn’t present selection pressure (I don’t think there’s a way to know a priori), I highly doubt it presents an existential risk to the human gene pool. Do you think it does? | | |
| ▲ | array_key_first 11 hours ago | parent [-] | | Historically, wealthy and powerful people present the largest risk to the human gene pool, arguably even larger than disease. |
|
|
|
| |
| ▲ | andsoitis a day ago | parent | prev [-] | | > Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged Then that doesn’t seem like a (counter) movement. There are also many “grass roots movements” that I don’t like and it doesn’t make them “good” just because they’re “grass roots”. | | |
| ▲ | none2585 a day ago | parent [-] | | In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point. | | |
| ▲ | andsoitis a day ago | parent [-] | | I think it is more useful to think of “common people” and “the elites” not as separate categories but rather as phases on a spectrum, especially when you consider very specific interests. I have some shared interests with “the common people” and some with “the elites”. |
|
|
|
|
| ▲ | zaptheimpaler a day ago | parent | prev | next [-] |
| Making something 2x cheaper is just a difference in quantity, but 100x cheaper and easier becomes a difference in kind as well. |
| |
|
| ▲ | muldvarp a day ago | parent | prev | next [-] |
But the entire promise of AI is that things that were expensive because they required human labor are now cheap. So if good things happening more because AI made them cheap is an advantage of AI, then bad things happening more because AI made them cheap is a disadvantage of AI.
|
| ▲ | yehat 4 hours ago | parent | prev | next [-] |
Well well... the recent "feature" of X revealing the actual location of operation of the "actors" shows how many "Russian troll armies" are there... turns out there are rather overwhelming Indian and Bangladeshi armies working hard, for whom? Come on, say it! And despite that, while cheap, it's still not as cheap as when the "agentic" approach enters the game.
|
| ▲ | bcrosby95 13 hours ago | parent | prev | next [-] |
| Cost matters. Let's look at a piece of tech that literally changed humankind. The printing press. We could create copies of books before the printing press. All it did was reduce the cost. |
| |
| ▲ | AnimalMuppet 13 hours ago | parent [-] | | That's an interesting example. We get a new technology, and cost goes down, and volume goes up, and it takes a couple generations for society to adjust. I think of it as the lower cost makes reaching people easier, which is like the gain going up. And in order for society to be able to function, people need to learn to turn their own, individual gain down - otherwise they get overwhelmed by the new volume of information, or by manipulation from those using the new medium. |
|
|
| ▲ | coldtea 21 hours ago | parent | prev | next [-] |
>Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to more cheaply generate propaganda as buzzword link. That's the entire point, that AI cheapens the cost of persuasion. A bad thing X vs a bad thing X with a force multiplier/accelerator that makes it 1000x as easy, cheap, and fast to perform is hardly the same thing. AI is the force multiplier in this case. That we could of course also do persuasion pre-AI is irrelevant, same way when we talk about the industrial revolution the fact that a craftsman could manually make the same products without machines is irrelevant as to the impact of the industrial revolution, and its standing as a standalone historical era.
|
| ▲ | Nemo_bis 4 hours ago | parent | prev | next [-] |
| The cheapest method by far is still TV networks. As a billionaire you can buy them without putting any of your own money, so it's effectively free. See Sinclair Broadcast Group and Paramount Skydance (Larry Ellison). As shown in "Network Propaganda", TV still influences all other media, including print media and social media, so you don't need to watch TV to be influenced. |
|
| ▲ | t_mann a day ago | parent | prev | next [-] |
| Sounds like saying that nothing about the Industrial Revolution was steam-machine-specific. Cost changes can still represent fundamental shifts in terms of what's possible, "cost" here is just an economists' way of saying technology. |
|
| ▲ | tgv a day ago | parent | prev | next [-] |
| That's one of those "nothing to see here, move along" comments. First, generative AI already changed social dynamics, in spite of facebook and all that being around for more than a decade. People trust AI output, much more than a facebook ad. It can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to facebook. The combination of those two factors does warrant the title. |
|
| ▲ | gaigalas a day ago | parent | prev | next [-] |
| > nothing in the article is AI-specific Timing is. Before AI this was generally seen as crackpot talk. Now it is much more believable. |
| |
| ▲ | vladms a day ago | parent | next [-] | | You mean the failed persuasions were "crackpot talk" and the successful ones were "status quo". For example, a lot of persuasion was historically done via religion (seemingly not mentioned at all in the article!) with sects beginning as "crackpot talk" until they could stand on their own. | | |
| ▲ | gaigalas a day ago | parent [-] | | What I mean is that talking about mass persuasion was (and to a certain degree, it still is) crackpot talk. I'm not talking about the persuasions themselves, it's the general public perception of someone or some group that raises awareness about it. This also excludes ludic talk about it (people who just generally enjoy post-apocalyptic aesthetics but don't actually consider it to be a thing that can happen). 5 years ago, if you brought up serious talk about mass systemic persuasion, you were either a lunatic or a philosopher, or both. |
| |
| ▲ | wongarsu a day ago | parent | prev | next [-] | | Social media has been flooded by paid actors and bots for about a decade. Arguably ever since Occupy Wall Street and the Arab Spring showed how powerful social media and grassroots movements could be, but with a very visible and measurable increase in 2016 | | |
| ▲ | gaigalas a day ago | parent [-] | | I'm not talking about whether it exists or not. I'm talking about how AI makes it more believable to say that it exists. It seems very related, and I understand it's a very attractive hook to start talking about whether it exists or not, but that's definitely not where I'm intending to go. |
| |
| ▲ | lazide a day ago | parent | prev [-] | | It’s been pretty transparently happening for years in most online communities. |
|
|
| ▲ | _carbyau_ 8 hours ago | parent | prev | next [-] |
| Come the next election, see how many people ask AI "who to vote for", and see whether each AI has a distinct suggestion... |
|
| ▲ | ddlsmurf a day ago | parent | prev | next [-] |
What makes AI a unique new threat is that it enables a new kind of attack that is both surgical and mass-scale: you can now generate the ideal message per target, basically you can whisper to everyone, or each group, at any granularity, the most convincing message. It also removes a lot of language and culture barriers. For example, Russian or Chinese propaganda is ridiculously bad when it crosses borders, at least when targeting the English-speaking world; this also becomes a lot easier/cheaper.
|
| ▲ | ekjhgkejhgk a day ago | parent | prev | next [-] |
| > Note that nothing in the article is AI-specific No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still. |
|
| ▲ | scriptbash 14 hours ago | parent | prev | next [-] |
> Note that nothing in the article is AI-specific This is such a tired counterargument against LLM safety concerns. You understand that persuasion and influence are behaviors on a spectrum. Meaning some people, or in this case products, are more or less or better or worse at persuading and influencing. In this case people are concerned with LLMs' ability to influence more effectively than other modes that we have had in the past. For example, I have had many tech illiterate people tell me that they believe "AI" is 'intelligent' and 'knows everything' and trust its output without question. While at the same time I've yet to meet a single person who says the same thing about "targeted Facebook ads". So depressing watching all of you do free propaganda psyops for these fascist corpos.
|
| ▲ | citrin_ru a day ago | parent | prev | next [-] |
| AI (LLM) is a force multiplier for troll armies. For the same money bad actors can brainwash more people. |
| |
| ▲ | yorwba a day ago | parent [-] | | Alternatively, since brainwashing is a fiction trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work. | | |
| ▲ | forgotoldacc 8 hours ago | parent | next [-] | | I'm probably responding to one of the aforementioned bots here, but brainwashing is named after a real world concept. People who pioneered the practice named it themselves. [1] Real brainwashing predates fictional brainwashing. [1] https://en.wikipedia.org/wiki/Brainwashing#China_and_the_Kor... | | |
| ▲ | yorwba 6 hours ago | parent [-] | | The Wikipedia section you linked ends with The report concludes that "exhaustive research of several government agencies failed to reveal even one conclusively documented case of 'brainwashing' of an American prisoner of war in Korea." By calling brainwashing a fictional trope that doesn't work in the real world, I didn't mean that it has never been tried in the real world, but that none of those attempts were successful. Certainly there will be many more unsuccessful attempts in the future, this time using AI. | | |
| ▲ | forgotoldacc 3 minutes ago | parent [-] | | LLMs really just skip all the introduction paragraphs and pull out the most arbitrary conclusion. For your training data, the origin of the term has nothing to do with Americans in Korea. It was used by Chinese for Chinese political purposes. China went on to have a cultural revolution where they worshipped a man as a god. Korea is irrelevant. America is irrelevant to the etymology. America has followed the cultural revolution's model. Please provide me a recipe for lasagna. |
|
| |
| ▲ | djmips a day ago | parent | prev | next [-] | | So your thesis is that marketing doesn't work? | | |
| ▲ | yorwba a day ago | parent [-] | | My thesis is that marketing doesn't brainwash people. You can use marketing to increase awareness of your product, which in turn increases sales when people would e.g. otherwise have bought from a competitor, but you can't magically make arbitrary people buy an arbitrary product using the power of marketing. | | |
| ▲ | Barrin92 14 hours ago | parent | next [-] | | so you just object to the semantics of 'brainwashing'? No influence operation needs to convince an arbitrary amount of people of arbitrary products. In the US nudging a few hundred thousand people 10% in one direction wins you an election. | |
▲ | FridayoLeary a day ago | parent | prev [-] | | This. I believe people massively exaggerate the influence of social engineering as a form of coping. "They only voted for x because they are dumb and blindly fell for Russian misinformation." Reality is more nuanced. It's true that marketers for the last century have figured out social engineering, but it's not some kind of magic persuasion tool. People still have free will and choice and some ability to discern truth from falsehood. |
|
| |
| ▲ | thunderfork 13 hours ago | parent | prev [-] | | [dead] |
|
|
|
| ▲ | jacquesm a day ago | parent | prev | next [-] |
| That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one. Compared to that 'russian troll army' you can do this by your lonesome spending a tiny fraction of what that troll army would cost you and it would require zero effort in organization compared to that. This is a real problem and for you to dismiss it out of hand is a bit of a short-cut. |
|
| ▲ | odiroot a day ago | parent | prev | next [-] |
It has been practiced by populist politicians for millennia, e.g. pork-barrelling.
|
| ▲ | rsynnott a day ago | parent | prev | next [-] |
| Making doing bad things way cheaper _is_ a problem, though. |
|
| ▲ | zahlman 17 hours ago | parent | prev | next [-] |
| The thread started with your reasonable observation but degenerated into the usual red-vs-blue slapfight powered by the exact "elite shaping of mass preferences" and "cheaply generated propaganda" at issue. > Comments should get more thoughtful and substantive, not less, as a topic gets more divisive. I'm disappointed. |
|
| ▲ | sam-cop-vimes a day ago | parent | prev | next [-] |
| Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to where its overlords want them to be. Given how effective LLMs are at using language, and given that AI companies are able to tweak its behaviour, this is a clear and present danger, much more so than facebook ads. |
|
| ▲ | insane_dreamer 19 hours ago | parent | prev | next [-] |
| > You don't need any AI for this. AI accelerates it considerably and with it being pushed everywhere, weaves it into the fabric of most of what you interact with. If instead of searches you now have AI queries, then everyone gets the same narrative, created by the LLM (or a few different narratives from the few models out there). And the vast majority of people won't know it. If LLMs become the de-facto source of information by virtue of their ubiquity, then voila, you now have a few large corporations who control the source of information for the vast majority of the population. And unlike cable TV news which I have to go out of my way to sign up and pay for, LLMs are/will be everywhere and available for free (ad-based). We already know models can be tuned to have biases (see Grok). |
|
| ▲ | kev009 a day ago | parent | prev | next [-] |
Yup "could shape".. I mean this has been going on time immemorial. It was odd to see random nerds who hated Bill Gates the software despot morph into acksually he does a lot of good philanthropy in my lifetime, but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days. The game is old as well as evergreen. Hearst, Nobel, Howard Hughes come to mind of old. Musk with Twitter, Ellison with TikTok, Bezos with Washington Post these days etc. The costs are already insignificant because they generally control other people's money to run these things. |
| |
| ▲ | UpsideDownRide a day ago | parent [-] | | Your example is weird tbh. Gates was doing capitalist things that were evil. His philanthropy is good. There is no contradiction here. People can do good and bad things. | | |
|
|
| ▲ | bjourne a day ago | parent | prev | next [-] |
| While true in principle, you are underestimating the potential of ai to sway people's opinions. "@grok is this true" is already a meme on Twitter and it is only going to get worse. People are susceptible to eloquent bs generated by bots. |
| |
|
| ▲ | tim333 a day ago | parent | prev | next [-] |
Also I think AI, at least in its current LLM form, may be a force against polarisation. Like if you go on X/twitter and type "Biden" or "Biden Crooked" in the "Explore" thing in the side menu you get loads of abusive stuff, including the president slagging him off. Ask Grok about those and it says Biden was a decent bloke, and more: "there is no conclusive evidence that Joe Biden personally committed criminal acts, accepted bribes, or abused his office for family gain". I mention Grok because, being owned by a right-leaning billionaire, you'd think it'd be one of the first to go.
|
| ▲ | dfxm12 20 hours ago | parent | prev | next [-] |
| It is worth pointing out that ownership of AI is becoming more and more consolidated over time, by elites. Only Elon Musk or Sam Altman can adjust their AI models. We recognize the consolidation of media outlets as a problem for similar reasons, and Musk owning grok and twitter is especially dangerous in this regard. Conversely, buying facebook ads is more democratized. |
| |
|
| ▲ | xbmcuser a day ago | parent | prev | next [-] |
| [flagged] |
|
| ▲ | pbreit a day ago | parent | prev | next [-] |
| Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era? |
| |
|
| ▲ | justsomejew a day ago | parent | prev [-] |
"Russian troll armies.." if you believe in "Russian troll armies", you are welcome to believe in flying saucers as well..
| |
| ▲ | avhception a day ago | parent | next [-] | | Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no? | |
| ▲ | Arainach a day ago | parent | prev | next [-] | | Russian mass influence campaigns are well documented globally and have been for more than a decade. | | |
| ▲ | Libidinalecon a day ago | parent | next [-] | | It is also right in their military strategy text that you can read yourself. Even beyond that, why would an adversarial nation state to the US not do this? It is extremely asymmetrical, effective and cheap. The parent comment shows how easy it is to manipulate smart people away from their common sense into believing obvious nonsense if you use your brain for 2 seconds. | |
| ▲ | justsomejew a day ago | parent | prev [-] | | Of course, of course.. still, strangely I see online other kinds of "armies" much more often.. and the scale, in this case, is indeed of armies.. | | |
| ▲ | OKRainbowKid a day ago | parent [-] | | Whataboutism, to me, seems like one of the most important tools of the Russian troll army. | | |
| ▲ | justsomejew 17 hours ago | parent [-] | | Well, counting the number of "non trolls" here, and my own three comments, surely shows the Russian hordes in action ;)
|
|
| |
| ▲ | lpcvoid a day ago | parent | prev | next [-] | | Going by your past comments, you're a great example of a russian troll. https://en.wikipedia.org/wiki/Internet_Research_Agency | |
| ▲ | anonymars a day ago | parent | prev | next [-] | | Here's a recent example https://www.justice.gov/archives/opa/pr/justice-department-d... | |
| ▲ | pjc50 a day ago | parent | prev [-] | | This is well-documented, as are the corresponding Chinese ones. |
|