| ▲ | mossTechnician a day ago | parent | next [-] |
| To me, the more interesting divergence in discussion is on its capabilities. AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China. Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it. |
|
| ▲ | scratchyone a day ago | parent | prev | next [-] |
| 100% agreed. That's part of the issue imo: these companies pretend their new models are "too dangerous" in order to seem like they care about the world, yet they have no qualms about deploying existing models in warfare or bragging about impending mass unemployment. |
|
| ▲ | palmotea a day ago | parent | prev | next [-] |
| > That's true but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing. Automatic target detection and deployment of drones, or even how it might simply make their role at work redundant etc
I think the last one should be first on the list: regular people are afraid AI will negatively affect their economic security (i.e. knowledge and service workers will get the rust-belt factory worker treatment). And that potential is exactly what makes Wall Street excited about AI and has AI company leaders salivating over the profit they can make. Warfare, policing, and bio-engineered viruses are theoretical and far down the list. |
| ▲ | wongarsu a day ago | parent | next [-] |
| Not to mention that "automatic target detection" was primarily enabled by the ~2016-2020 AI hype/boom around image recognition, not the 2022-current hype/boom around LLMs. |
| ▲ | detectivestory a day ago | parent | prev [-] |
| It's already being used in warfare though. |
| ▲ | palmotea a day ago | parent [-] |
| > It's already being used in warfare though.
What I mean is that it's theoretical to the common person. They don't have killbot drones hunting them down, and are unlikely to have that experience anytime soon. But most people have jobs, most people would be hard-hit if they lost theirs, lots of people do lose theirs, and our elites are just itching to make that happen. That's certainly what worries me most about AI: my employer started an ongoing silent layoff campaign around the same time they started enforcing AI usage. I don't think those are unconnected. |
|
|
|
| ▲ | notrealyme123 a day ago | parent | prev | next [-] |
| To be honest, I'm not sure which scares me more: AI shaping warfare, or AI being used to justify outrageous warfare. |
| ▲ | MSFT_Edging a day ago | parent | next [-] |
| We sadly don't need AI to justify outrageous warfare. Just remember when the US invaded Iraq over WMDs: a full investigation never found any, and we invaded anyway, to the detriment of everyone except defense contractors. |
| ▲ | scratchyone a day ago | parent | prev | next [-] |
| Don't worry, these companies will make sure we get to experience both nightmare futures. |
| ▲ | yieldcrv a day ago | parent | prev [-] |
| that's not a war crime, that's boundary setting, and honestly, that's rare. Would you like me to list the applicable sections of the Geneva Convention? |
|
|
| ▲ | chasd00 a day ago | parent | prev | next [-] |
| AI has been used in defense for a while now; a modern Tomahawk cruise missile and its associated targeting systems are a good example. I think most people fear AI taking their job and only source of income. |
|
| ▲ | sublinear a day ago | parent | prev [-] |
| These were all very valid concerns long before this era of "AI" or computational power. The broader public is only now beginning to understand, because all they have to do is ask a chatbot. AI does not enable new capabilities, but it can aggregate an idea into a rough sketch, quickly and on demand. None of this really means it will play out that way; the devil is in the details. What it does mean is much more nuanced attention to the politics and money, because that's where the power always was. |
| ▲ | detectivestory a day ago | parent [-] |
| AI does enable new capabilities when it comes to constant mass surveillance and automated weaponry. |
| ▲ | sublinear a day ago | parent [-] |
| No it doesn't. We have all of that right now and have had it for decades.

The big investment in Project Stargate is all about managing risk. The government contractor and security clearance situation is out of control, and every human mistake is costly and time-consuming to address. If you instead blame it on AI, you can skip the court proceedings and postmortems.

The other part of this is likely an attempt to surface information with summaries and shorten the chain of command. This is just a power grab and a dangerous dismissal of necessary implementation detail. It's a tantrum being thrown by ignorant people at the top being displaced. We live in an ever more complicated world that demands more experienced leadership than we have available. AI is their Hail Mary pass.

LLMs are being abused as a political battering ram. They are not the technological breakthrough advertised. The "AI" label is borderline absurd, and "AGI" even more so. NLP is an accessibility tool at best. |
|
|