▲ | btown 10 hours ago
One of the biggest pieces of "writing on the wall" for this, IMO, was when, in the April 15, 2025 Preparedness Framework update, they dropped persuasion/manipulation from their Tracked Categories.

https://openai.com/index/updating-our-preparedness-framework...

https://fortune.com/2025/04/16/openai-safety-framework-manip...

> OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.

> The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

To see persuasion/manipulation as simply a multiplier on other invention capabilities, and as something that can be patched onto a model already in use, is a very specific statement on what AI safety means. Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.
|
▲ | imiric 3 hours ago | parent | next [-]
> Certainly, an AI that can design weapons of mass destruction could be an existential threat to humanity. But so, too, is a system that subtly manipulates an entire world to lose its ability to perceive reality.

So, like, social media and adtech?

Judging by how little humanity is preoccupied with the global manipulation campaigns conducted via technology we've been using for decades now, there's little chance that this new tech will change that. It can only enable manipulation to grow in scale and effectiveness. The hype and momentum have never been greater, and many people have a lot to gain from it. The people who seized power using earlier tech are now in a good position to expand their reach and wealth, which they will undoubtedly do.

FWIW, I don't think the threats are existential to humanity, although that is certainly possible. It's far more likely that a few people will get very, very rich, many people will be much worse off, and most people will endure and fight their way to the top. The world will just be a much shittier place for 99.99% of humanity.
|
▲ | webdoodle 8 hours ago | parent | prev | next [-]
Right on point. That is the true purpose of this 'new' push into A.I.

Human moderators sometimes realize the censorship they are doing is wrong, and will slow-walk or blatantly ignore censorship orders. A.I. will diligently delete anything it's told to.

But the real risk is that they can use it to scale up the Cambridge Analytica personality profiles to cover everyone, and create custom agents for every target that feed them whatever content is needed to manipulate their thinking and, ultimately, their behavior. AKA MKUltra mind control.
▲ | komali2 7 hours ago | parent [-]

What's frustrating is that our society hasn't grappled with how to deal with that kind of psychological attack. People or corporations will find an "edge" that gives them an unbelievable amount of control over someone, to the point that it almost seems magic, like a spell has been cast. See any suicidal cult, or one that causes people to drain their bank accounts, or one that leads to the largest breach of American intelligence security in history, or one that convinces people to break into the Capitol to try to lynch the VP.

Yet even if we prosecute the cult leader, we still hold people entirely responsible for their own actions, and as a society accept none of the responsibility for failing to protect people from these sorts of psychological attacks.

I don't have a solution; I just wish this was studied more from a perspective of justice and sociology. How can we protect people from this? Is it possible to do so in a way that maintains some of the values of free speech and personal freedom that Americans value? After all, all Cambridge Analytica did was "say" very specifically convincing things on a massive, yet targeted, scale.
▲ | Razengan 7 hours ago | parent | prev [-]
> manipulates an entire world to lose its ability to perceive reality.

> ability to perceive reality.

I mean, come on... that's on you.

Not to "victim blame"; the fault's with the people who deceive. But if you get deceived repeatedly, and there are people calling out the deception, so you're aware you're being deceived, yet you still choose to be lazy and not learn shit on your own (i.e. do your own research) and just want everything to be "told" to you... that's on you.
▲ | estearum 5 hours ago | parent [-]

Everything you think you "know" is information that was just put in front of you (most of it indirect, much of it several dozen or even thousands of layers of indirection deep).

To the extent you have a grasp on reality, the credit goes primarily to the information environment you found yourself in, not to your being an extra special intellectual powerhouse. This is not an insult, but an observation of how brains obviously have to work.
▲ | helloplanets 4 hours ago | parent | next [-]

> much of it several dozen or thousands of layers of indirection deep

Assuming we're just talking about information on the internet: what are you reading if the original source is several dozen layers deep? In my experience, it's usually one or two layers deep. If it's more, that's a huge red flag.
▲ | anonymous908213 5 hours ago | parent | prev [-]

Your ability to check your information environment against reality is frequently within your control, and it can be used to establish trustworthiness for the things you cannot personally verify. And it is a choice to trust things that you cannot verify, one that you do not have to make, even though it is unfortunately commonly made.

For example, take the Uyghur situation in China. I have no ability to check reality there, as I do not live in and have no intention of ever visiting China. My information environment is what the Chinese government reports and what various media outlets and NGOs report. As it turns out, both the Chinese government and the media and NGOs report on other things that I can check against reality, e.g. events that happen in my own country, and I know that both routinely report falsehoods that do not accord with my observed reality. As a result, I have zero trust in either the Chinese government or the media and NGOs on things I cannot personally verify, especially when I know both parties have self-interested incentives to report things that are not true. The conclusion is obvious: I do not know and cannot know what is happening with the Uyghurs in China, and I do not have a strong opinion on the subject, despite the attempts of various parties to put information in front of me with the intention of getting me to champion their viewpoint. This really does not make me an extra special intellectual powerhouse, one would hope; I'd think this is the bare minimum. The fact that many people do not meet this bare minimum reflects poorly on them rather than highly on me.

On the other hand, I trust what, for instance, the Encyclopedia Britannica has to say about hard science, because in the course of my education I was taught to conduct experiments and confirm reality for myself. I have never once found what is written about hard science in Britannica to be out of accord with my observed reality, and on top of that there is little incentive for Britannica to print scientific falsehoods that could be easily disproven. So it has earned my trust, and I will believe the things written in it even if I have not personally conducted experiments to verify all of them.

Anyone can check their information sources against reality, regardless of their intelligence. It is a choice to believe information that is put in front of you without checking it: sometimes a warranted choice once trust has been earned, but all too often a highly unwarranted one.
▲ | imiric 2 hours ago | parent [-]

I don't necessarily disagree with what you said, but you're not taking a few things into account.

First of all, most people don't think critically, and may not even know how. They consume the information provided to them, instinctively trust people they have a social, emotional, or political bond with, are easily persuaded, and rarely question the world around them. This is not surprising, nor a character flaw; it has been deeply ingrained in our psyche since birth. Some people learn the skill of critical thinking over time and are able to do what you describe, but this is not common. The ability can even be detrimental if taken too far in the other direction, which is how you get cynicism, misanthropy, conspiracy theories, etc. So it needs to be balanced well to be healthy.

Secondly, psychological manipulation is very effective. We've known this for millennia, but we really came to understand it in the past century through its military and industrial use. Propaganda and its cousin advertising work very well at large scales precisely because most people are easily persuaded. They don't need to influence everyone, just enough people to buy their product, or to change their thoughts and behavior to align with a particular agenda. Now that we have invented technology that most people can't function without, and made it incredibly addictive, it has become the perfect medium for psyops.

All of these things combined make it extremely difficult for anyone, including skeptics, to get a clear sense of reality. If most of your information sources are corrupt, you need to become an expert information sleuth, and possibly sacrifice modern conveniences and technology to do it. Most people, even if capable, are unwilling to make that effort and sacrifice.