| ▲ | claytonaalves 9 hours ago |
| I'm impressed with how we moved from "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", and "don't let AI escape" to "Hey AI, here's the internet, do whatever you want". |
|
| ▲ | deepsquirrelnet 8 hours ago | parent | next [-] |
| The DoD's recent beef with Anthropic over its right to restrict how Claude can be used is revealing. > Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance Autonomous AI weapons are one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that's apparently where we are. 1. https://www.nbcnews.com/tech/security/anthropic-ai-defense-w... |
| |
| ▲ | chasd00 6 hours ago | parent | next [-] | | Hasn't Ukraine already proved out autonomous weapons on the battlefield? There was a NYT podcast a couple of years ago where they interviewed a higher-up in the Ukrainian military, and they said it's already in place with FPV drones: loitering, target identification, attack, the whole 9 yards. You don't need an LLM to do autonomous weapons; a modern Tomahawk cruise missile is pretty autonomous. The only change to a modern Tomahawk would be adding parameters for what the target looks like and tasking the missile with identifying a target. The missile pretty much does everything else already (flying, routing, etc.). | | |
| ▲ | slibhb 6 hours ago | parent | next [-] | | Yes. They published a great article about it: https://www.nytimes.com/2025/12/31/magazine/ukraine-ai-drone... As I remember it, the basic idea is that the new generation of drones is piloted close enough to targets and then the AI takes over for "the last mile". This gets around jamming, which otherwise would make it hard for drones to connect with their targets. | |
| ▲ | testdelacc1 6 hours ago | parent | prev [-] | | A drone told to target a tank needs to identify the shape it's looking at within milliseconds. That's certainly not happening with an LLM. |
| |
| ▲ | nradov 6 hours ago | parent | prev | next [-] | | The DoD was pursuing autonomous AI weapons decades ago, and succeeded as of 1979 with the Mk 60 Captor Mine. https://www.vp4association.com/aircraft-information-2/32-2/m... The worries over Skynet and other sci-fi apocalypse scenarios are so silly. | | |
| ▲ | deepsquirrelnet 6 hours ago | parent [-] | | Self-awareness is a silly worry, but the capacity for a powerful minority to oppress a sizeable population without recruiting human soldiers might not be that far off. |
| |
| ▲ | nightski 8 hours ago | parent | prev | next [-] | | If you ever doubted it you were fooling yourself. It is inevitable. | | |
| ▲ | samiv 7 hours ago | parent | next [-] | | It's ok we'll just send a robot back in time to help destroy the chip that starts it. | | | |
| ▲ | tartoran 7 hours ago | parent | prev [-] | | If we all sit back and lament that it's inevitable, it surely could happen. | | |
| |
| ▲ | georgemcbay 6 hours ago | parent | prev | next [-] | | > Autonomous AI weapons are one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that's apparently where we are. This situation legitimately worries me, but it isn't even really the Skynet scenario that worries me. To self-quote a reply I made recently in another thread (https://news.ycombinator.com/item?id=47083145#47083641): When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over. I think we have less to worry about from a future Skynet-like AGI system than from a modern or near-future LLM, with all of its limitations, making a very bad oopsie with significant real-world consequences because it was allowed to control a system capable of real-world damage. I would probably have worried about this less in times past, when I believed there were adults making these decisions and the "Secretary of War" of the US wasn't someone known primarily as an ego-driven TV host with a drinking problem. | |
| ▲ | breppp 5 hours ago | parent [-] | | Statistically, it's more probable that this kind of blunder shows up as a small disaster before a large one, and then gets regulated: e.g. 50 people die from a water-poisoning incident rather than 10 billion dying in a Claude Code powered nuclear apocalypse. |
| |
| ▲ | bigyabai 5 hours ago | parent | prev | next [-] | | It turned out that the Pentagon just ignored Anthropic's demands anyway: https://www.wsj.com/politics/national-security/pentagon-used... I really doubt that Anthropic is in any position to make those decisions, regardless of how they feel. | |
| ▲ | deepsquirrelnet an hour ago | parent [-] | | I don't disagree, but they should be. Last I knew, the government doesn't control the means of production… and the current US regime loves to boast about it. Confusing, right? |
| |
| ▲ | zer00eyz 7 hours ago | parent | prev [-] | | > Autonomous AI weapons In theory, you can do this today, in your garage. Buy a quad as a kit (cheap). Figure out how to arm it (the trivial part). Grab YOLO, tuned for people detection. Grab any of the off-the-shelf facial recognition libraries. You can mostly run this on phone hardware, and if you're stripping out the radios, then possibly for days. The shim you have to write: software to fly the drone into the person... and that's probably out there somewhere as well. The tech to build "Screamers" (see: https://en.wikipedia.org/wiki/Screamers_(1995_film) ) already exists, is open source, and can be very low power (see: https://www.youtube.com/shorts/O_lz0b792ew ). | |
| ▲ | chasd00 6 hours ago | parent | next [-] | | > software to fly the drone into the person... and that's probably out there somewhere as well ArduPilot + waypoint nav would do it for fixed locations. The camera identifies a target, gets the GPS coordinates, and sets a waypoint. I would be shocked if there weren't extensions available (maybe not officially) for flying to a "moving location". I'm in the high-power rocketry hobby, and the knowledge needed to add control surfaces and processing to autonomously fly a rocket to a location is readily available. No one does it because it's a bad look for a hobby that already raises eyebrows. | |
| ▲ | tim333 6 hours ago | parent | next [-] | | The Ukrainian drones that took out Russia's long range bombers used ArduPilot and AI. (https://en.wikipedia.org/wiki/Operation_Spiderweb) | |
| ▲ | phba 6 hours ago | parent | prev [-] | | > a hobby that already raises eyebrows Sounds very interesting, but may I ask how this actually works as a hobby? Is it purely theoretical, like analyzing and modeling, or do you build real rockets? | |
| ▲ | chasd00 an hour ago | parent | next [-] | | Build and fly. It's interesting because it attracts a lot of engineers. So you have groups who are experts in propulsion and make their own solid (and now liquid bi-prop) motors. You also have groups that focus on electronics and make flight controllers, GPS trackers, etc. Then you have software people who make build/fly simulators and things like OpenRocket. There are regional and national events that are sort of like festivals. Some have FAA waivers to fly to around 50k ft. There's one at Black Rock, Nevada, where you can fly to space if you want. A handful of amateurs have made it to the Kármán line too. | |
| ▲ | capncleaver 5 hours ago | parent | prev [-] | | Not whom you are replying to, nor a rocket hobbyist myself, but yes, they do build and launch rockets for fun, e.g. VC Steve Jurvetson out at Black Rock: https://www.flickr.com/photos/jurvetson/54815036982/ | |
|
| |
| ▲ | wordpad 7 hours ago | parent | prev [-] | | Didn't the screamers evolve sophisticated intelligence? Is that what happens if we use claw and let it write its own skills and update its own objectives? | |
| ▲ | gs17 5 hours ago | parent [-] | | Scarier, in the original story, the robots were called "claws". |
|
|
|
|
| ▲ | sph 8 hours ago | parent | prev | next [-] |
| This is exactly why artificial super-intelligences are scary. Not necessarily because of their potential actions, but because humans are stupid and would readily sell their souls and release one into the wild just for an ounce of greed or popularity. And people who don't see this as an existential problem either don't know how deep human stupidity can run, or are exactly those who would greedily seek a quick profit before the earth is turned into a paperclip factory. |
| |
| ▲ | xrd 8 hours ago | parent | next [-] | | I love this. Another way of saying it: the problem we should be focused on is not how smart the AI is getting. The problem we should be focused on is how dumb people are getting (or have been for all of eternity) and how they will facilitate or block their own chance of survival. That seems uniquely human, but I'm not an ethnobiologist. A corollary is that the only real chance for survival is that a plurality of humans have a baseline understanding of these threats, or else the dumb majority will enable the entire eradication of humans. Seems like a variation of Darwin's law, but I always thought that applied to single examples; this applies to the entirety of humanity. | |
| ▲ | andsoitis 6 hours ago | parent | next [-] | | > The problem we should be focused on is how dumb people are getting (or have been for all of eternity) Over the arc of time, I'm not sure it's accurate to say that humans have been getting dumber and dumber. If that were true, we must have been super-geniuses 3000 years ago! I think what is true is that the human condition and the age-old questions are still with us, and we're still on the path to figuring out ourselves and the cosmos. | |
| ▲ | xrd 5 hours ago | parent | next [-] | | Totally anecdotal, but I think phones have made us less present, or, said another way, less capable of using our brains effectively. It isn't exactly dumbness, but it feels very close. I definitely think we are smarter if you go by IQ, but are we less reactive and less tribal? I'm not so sure. | |
| ▲ | qup 5 hours ago | parent | prev [-] | | Modern dumb people have more ability to affect things. Modern technology, equal rights, and voting rights give them access to more control than they've ever had. That's my theory, anyway. |
| |
| ▲ | bwfan123 7 hours ago | parent | prev | next [-] | | The majority of us are meme-copying automatons who are easily pwned by LLMs. Few of us have learned to exercise critical thinking and to reason from first assumptions - the kind of thing we are expected to learn in school, and also the kind of thing that still separates us from machines. A charitable view is that there is a spectrum in there. Now, with AI and social media, there will be an acceleration of this movement toward the stupid end of the spectrum. | |
| ▲ | GTP 6 hours ago | parent | prev | next [-] | | > That seems uniquely human, but I'm not an ethnobiologist. In my opinion, this is a uniquely human thing because we're smart enough to develop technologies with planet-level impact, but not smart enough to use them well. Other animals are less intelligent, but for that very reason they lack the ability to harm themselves on the scale we can. | |
| ▲ | phi-go 7 hours ago | parent | prev [-] | | Isn't defining what no one should be allowed to do exactly the problem that laws (as in legislation) are for? Not that I expect those laws to come in time, though. |
| |
| ▲ | bckr 7 hours ago | parent | prev | next [-] | | Look, we've had nukes for almost 100 years now. Do you really think our ancient alien zookeepers are gonna let us wipe ourselves out with AI? Semi /j |
| ▲ | GistNoesis 7 hours ago | parent | prev [-] | | It's even worse than that. The positive outcomes are structurally being closed off. The race to the bottom means that you can't even profit from it. Even if you release something that has plenty of positive aspects, it can be, and is, immediately corrupted and turned against you. At the same time, you have created desperate people/companies, given them huge capabilities for very low cost, and given them the necessity to stir things up. So for every good door that someone opens, ten other companies/people are pushed to either open random, potentially bad doors or die. Regulating is also out of the question, because otherwise either the people who don't respect regulations get ahead, or the regulators win and we are under their control. If I still saw some positive doors, I don't think sharing them would lead to good outcomes. But at the same time, the bad doors are being shared and therefore enjoy network effects. There is some silent threshold, probably already crossed, that drastically changes the sign of the expected return of the technology. |
|
|
| ▲ | arbuge 8 hours ago | parent | prev | next [-] |
| Humans are inherently curious creatures. The excitement of discovery is a strong driving force that overrides many others, and it can be found across the IQ spectrum. Perhaps not in equal measure across that spectrum, but omnipresent nonetheless. |
| |
| ▲ | wolvesechoes 8 hours ago | parent [-] | | > Humans are inherently curious creatures. You misspelled greedy. | | |
| ▲ | falcor84 8 hours ago | parent [-] | | While the two are closely related, I see a clear distinction between the two drives in their projection onto the explore-exploit axis. |
|
|
|
| ▲ | theptip an hour ago | parent | prev | next [-] |
| > we moved from "AI is dangerous" There was never consensus on this. IME the vast majority of people never bought into this view. Those of us who were making that prediction early on called it exactly like it is: people will hand over their credentials to completely untrustworthy agents and set them loose, people will prompt them to act maximally agentic, and some will even prompt them to roleplay evil murderbots, just for lulz. Most of the dangerous scenarios are orthogonal to the talking points around "are they conscious", "do they have desires/goals", etc. - we are making them simulate personas who do, and that's enough. |
|
| ▲ | bko 9 hours ago | parent | prev | next [-] |
| There was a small group of doomers and sci-fi-obsessed, terminally online people who said all these things. Everyone else said it's a better Google that can help them write silly haikus. Coders thought it could write a lot of boilerplate code. |
|
| ▲ | GuB-42 6 hours ago | parent | prev | next [-] |
| We didn't "move from"; both points of view exist. Depending on the news, attention may shift from one to the other. Anyway, I don't expect Skynet to happen. AI-augmented stupidity may be a problem, though. |
|
| ▲ | alansaber 9 hours ago | parent | prev | next [-] |
| Because even really bad autonomous automation is pretty cool. The marketing has always been aimed at the general public, who know nothing. |
| |
| ▲ | sho_hn 9 hours ago | parent [-] | | It's not the general public who know nothing that develops and releases software. I am not talking specifically about this issue, but do remember that very little bad happens in the world without the active or even willing participation of engineers. We make the tools and structures. | |
|
|
| ▲ | wiseowise 9 hours ago | parent | prev | next [-] |
| > “we” A bunch of Twitter lunatics and schizos are not “we”. |
| |
| ▲ | snigsnog 13 minutes ago | parent | next [-] | | X* | |
| ▲ | squidbeak 8 hours ago | parent | prev | next [-] | | People excited by a new tech's possibilities aren't lunatics and psychos. | | |
| ▲ | trehalose 8 hours ago | parent | next [-] | | The ones who give it free rein to run any code it finds on the internet, on their own personal computers, with no security precautions, are maybe getting a little too excited about it. | |
| ▲ | simonw 8 hours ago | parent [-] | | That's one of the main reasons there's a small run on Mac Minis. |
| |
| ▲ | raincole 8 hours ago | parent | prev [-] | | They mean the > "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", "don't let AI escape" group. Not the other one. |
| |
| ▲ | UqWBcuFx6NV4r 9 hours ago | parent | prev [-] | | I am equally if not more grateful that HN is just as unrepresentative. |
|
|
| ▲ | mrtksn 8 hours ago | parent | prev | next [-] |
| I would have said doomers never win, but in this case it was probably just a PR strategy to give the impression that AI can do more than it actually can. The doomers were the makers of the AI; that's enough to tell you what BS the doomerism is :) |
|
| ▲ | 7 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | singpolyma3 9 hours ago | parent | prev | next [-] |
| I mean, the assumption that we would obviously choose to do this is what led to all that sci-fi to begin with. No one ever doubted someone would make this choice. |
|
| ▲ | api 6 hours ago | parent | prev | next [-] |
| Other than some very askew bizarro rationalists, I don't think many people take AI hard-takeoff doomerism seriously at face value. Much of the cheerleading for doomerism was large AI companies trying to get regulatory moats erected to shut down open-weights AI and other competitors. It was an effort to scare politicians into allowing massive regulatory capture. It turns out AI models do not have strong moats. Making models is more akin to the silicon fab business, where your margin is an extreme power-law function of how bleeding-edge you are. Get a little behind and you are now a commodity. General, wide-breadth frontier models are at least partly interchangeable, and if you have issues, you can just adjust their prompts to make them behave as needed. The better the model is, the more it can assist in its own commodification. |
|
| ▲ | sixtyj 9 hours ago | parent | prev | next [-] |
| And be nice and careful, please. :) Claw to user: "Give me your card credentials and bank account. I will be very careful, because I have read my skills.md." Mac Minis should be offered with a warning, like the one on a pack of cigarettes :) Not everybody installs some claw that runs in a sandbox/container. |
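| For what it's worth, here is a minimal sketch of what "runs in a sandbox/container" can mean in practice, using the Docker SDK for Python. The "claw:latest" image name and the scratch-directory path are hypothetical placeholders for illustration, not any particular project's documented setup: |

    import docker  # pip install docker

    client = docker.from_env()

    # Run a hypothetical agent image with the sharp edges filed off.
    client.containers.run(
        "claw:latest",                 # hypothetical image name
        network_mode="none",           # no network access at all
        read_only=True,                # root filesystem is read-only
        volumes={"/tmp/claw-scratch": {"bind": "/work", "mode": "rw"}},
        working_dir="/work",           # the only writable location
        remove=True,                   # discard the container afterwards
    )

| Even that only limits what the agent can touch; as noted downthread, the isolation stops helping the moment you hand it your real accounts. |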
| |
| ▲ | qup 8 hours ago | parent [-] | | Isn't the Mac mini the container? | | |
| ▲ | simonw 8 hours ago | parent [-] | | It is... but then many people hook it up to their personal iCloud account and give it access to their email, at which point the container isn't really helping! |
|
|
|
| ▲ | AndrewKemendo 7 hours ago | parent | prev | next [-] |
| Even if hordes of humanoids in "ICE" vests start walking through the streets shooting people, the average American is still not going to wake up and do anything. |
|
| ▲ | jryan49 9 hours ago | parent | prev [-] |
| I mean, we know at this point it's not superintelligent AGI yet, so I guess we don't care. |
| |
| ▲ | nradov 6 hours ago | parent [-] | | There is no scientific basis to expect that the current approach to AI, involving LLMs, could ever scale up to superintelligent AGI. Another major breakthrough will be needed first, possibly an entirely new hardware architecture. No one can predict when that will come or what it will look like. |
|