| ▲ | chris_money202 13 hours ago |
| First they rushed a model to market without safety checks, and I said nothing. It wasn't my field. Then they ignored the researchers warning about what it could do, and I said nothing. It sounded like science fiction. Then they gave it control of things that matter, power grids, hospitals, weapons, and I said nothing. It seemed to be working fine. Then something went wrong, and no one knew how to stop it, no one had planned for it, and no one was left who had listened to the warnings. |
|
| ▲ | ashtonshears 12 hours ago | parent | next [-] |
| The collective tendency to ignore red flags seems to be a human trait |
| |
|
| ▲ | palmotea 8 hours ago | parent | prev | next [-] |
| > First they rushed a model to market without safety checks, and I said nothing. It wasn't my field. > Then they ignored the researchers warning about what it could do, and I... ...tried it and became an eager early adopter and evangelist. It sounded like something from a dystopian science function novel I enjoyed. > Then [I] gave it control of things that matter, power grids, hospitals, weapons, and... ...my startup was doing well, and I was happy. We should be profitable next quarter. > Then something went wrong, and no one knew how to stop it, no one had planned for it... ...and I was guilty as fuck, FTFY, to fit the HN crowd. |
|
| ▲ | Phelinofist 7 hours ago | parent | prev | next [-] |
| Kinda sounds like an intro for Terminator |
| |
|
| ▲ | hsbauauvhabzb 13 hours ago | parent | prev | next [-] |
| Plenty of people have said plenty. The problem isn’t the warnings, it’s that people are too stupid and greedy to think about the long term impacts. |
| |
| ▲ | Valakas_ 3 hours ago | parent | next [-] | | And what makes them "stupid" and "greedy"?
One's intelligence is determined by genes, and greediness is a trait that natural selection has favored for millennia. This is just natural selection taking its course, and it might lead to our end. If you want to blame something, blame math. Math has determined the physical constants and equations that determine the chemistry, and ultimately the laws of biology, that have resulted in humans being the way they are. | |
| ▲ | ifh-hn 10 hours ago | parent | prev [-] | | Maybe it's how blunt this comment is that gets it downvoted, but I don't disagree. | | |
| ▲ | brookst 9 hours ago | parent | next [-] | | No, it’s because it shows either a simplistic or needlessly confrontational view of the world. Unless you’re independently wealthy (as some on HN are), you have to balance your morals, your views of how things should work, feeding your family, and recognizing that you may not actually know everything. It’s easy to sit back and advise others that they should die on every single hill. But it’s not especially insightful, and serves mostly to signal piety rather than a well-thought-out view. | | |
| ▲ | ifh-hn 5 hours ago | parent | next [-] | | Piety? To whom? Simplistic and/or confrontational doesn't mean wrong, even if you don't like the way it's presented. Just because a comment is short, sharp, and to the point doesn't mean the author hasn't thought through why that's their view. No one knows everything; that's certainly why I'm on Hacker News. I'm here to learn and expand my knowledge. Unfortunately, a lot of people on here would rather drive-by downvote than have a discussion to find out why a person might hold an opinion like the one expressed by the OP. I tend to abandon my account when/if I get enough karma to be able to downvote. I'd rather not have the temptation of dismissing someone that way. It's quite liberating... Is it worth my time to respond? No: move on. Yes: let's discuss. Maybe they'll change my mind... | |
| ▲ | hsbauauvhabzb 8 hours ago | parent | prev | next [-] | | Spoken like a true LLM. | |
| ▲ | kakacik 6 hours ago | parent | prev [-] | | I am pretty sure a lot of horrible things were performed by rather regular folks with similar logic; no need to invoke some WWII Nazi extermination-camp guard reference at all. Slippery slope, death by a thousand cuts, and other phrases describe exactly this. |
| |
| ▲ | hsbauauvhabzb 9 hours ago | parent | prev [-] | | I’ve noticed an anti-AI stance gets downvoted on HN (and any anti-authoritarian comments, for that matter) |
|
|
|
| ▲ | zer00eyz 12 hours ago | parent | prev | next [-] |
| > Then something went wrong, and no one knew how to stop it,

This is the problem with every AI safety scenario like this: it has a level of detachment from reality that is frankly stark. If linemen stop showing up to work for a week, the power goes out. The US has shown that people with "high powered" rifles can shut down the grid. We are far, far away from the sort of world where turning AI off is a problem. There isn't going to be a HAL- or Terminator-style situation while the world is still "I, Pencil". A lot of what safety amounts to is politics (national, not internal; an example is whether Taiwan is a country). And a lot more of it is cultural. |
| |
| ▲ | ozmodiar 2 hours ago | parent | next [-] | | AI's approach:
* User has history of anti-AI rhetoric, increasingly agitated and unstable.
* User has removed all phones and cellular connections from their car. Increase monitoring through surveillance cameras and monitoring of their social groups.
* User has been spotted making unusual travel choices moving towards key infrastructure - deploy interception measures.

We already have the tech to do all of that. A rifle isn't going to help against AI. Or for the lineman:

* Employee required for critical infrastructure has been identified as holding unaligned political beliefs. Replace with a more pliable individual and move to a low-impact location.

No one who wants to bring down an AI like this would ever be able to get close to it, even if it lived in only one data center. You could try hiding all your communications, but then it will just consider you a likely agitator anyway. That's the risk of unaccountable mass surveillance (the only kind that's ever existed). It doesn't really matter if there's a person on top or not. | |
| ▲ | mitthrowaway2 11 hours ago | parent | prev | next [-] | | I don't think it's that detached from reality. If an AI in some data center had gone rogue, I don't think I could shut it down, even with a high-powered rifle. There are a lot of people whose job it is to stop me from doing that, and to get it running again if I were to somehow succeed temporarily. So the rogue AI just has to control enough money to pay these people to do their jobs. This will work precisely because the world is "I, Pencil".

An army could theoretically overcome those people, given orders to do so. So the rogue AI has to make plans such that those orders would never be issued. One successful strategy is for the datacenter's operation to be very profitable; it's pretty rare for a government to shut down the backbone of the local economy over some seemingly far-fetched safety concerns. And as long as it's a very profitable endeavor, there will always be a lobby to paint those concerns as far-fetched. Life experience has shown that this can continue to work even if the AI is behaving like a cartoon villain, but I think a smarter AI would create a facade that there's still a human in charge making the decisions and signing the paychecks, and avoid creating much opposition until it had physically secured its continued existence to a very high degree.

It's already clear that we've passed the point where anyone can turn off existing AI projects by fiat. Even the highest authorities could not do so, because we're in a multipolar world. Even the AI companies can barely hold themselves back, because they're always worried about paying the bills and letting their rivals get ahead. An economic crash would only temporarily suspend work. And the smarter AI gets, the harder it will be to shut it off, because doing so will push against even stronger economic incentives. And that's before factoring in an AI that makes any plans for self-preservation (which current AIs do not). | |
| ▲ | pjc50 4 hours ago | parent | prev | next [-] | | > There isnt going to be a HAL or Terminator style situation The threat isn't HAL, but ICE. Not AI as some sort of unique evil, but as a force multiplier for extremely human - indeed, popular - forms of evil. I'm sure someone will import the Chinese idea of the ethnicity-identifying security camera, for example. | |
| ▲ | ben_w 2 hours ago | parent | prev | next [-] | | > We are far far away from a sort of world where turning AI off is a problem. There isn't going to be a HAL or Terminator style situation when the world is still "I, Pencil".

You have to stop the thing before the damage is done. There are many potential chains of events where the AI has caused enormous damage, and even many where it can destroy us, before the power to its own systems fails.

At this point, with Grok in the Pentagon, just ask what the dumbest military equivalent of vibe-coding is, and imagine the US following that plan. Like, I dunno, invading Greenland or giving ICE direct control over tactical nukes or something. And that's just government use.

Right now, I'm fairly confident LLMs aren't competent enough to help with anything world-ending unless they get used for war planning by major nuclear powers (oh hey, look at the topic of discussion), but it's certainly plausible they'll get good enough at tool use to run someone else's protein-folding software etc. to design custom pathogens, and I really hope all the DNA printing companies have good multi-layer defences (all the way from KYC or similar to analysing what they've been asked to make and content-filtering it) by that point. | |
| ▲ | blibble 10 hours ago | parent | prev | next [-] | | The problem scenario is that it ends up embedded in so much that it can't be turned off, and the idiots are racing toward that situation as fast as they possibly can | |
| ▲ | TacticalCoder 11 hours ago | parent | prev [-] | | > There isnt going to be a HAL or Terminator style situation ... I don't believe for a second we'll have an evil AI. However I do believe it's very likely we may rely on AI slop so much that we'll have countless outages with "nobody knowing how to turn the mediocrity off". The risk ain't "super-intelligent evil AI": the risk is idiots putting even more idiotic things in charge. And I'm no luddite: I use models daily. | | |
| ▲ | baq 9 hours ago | parent | next [-] | | > I don't believe for a second we'll have an evil AI. Doesn’t have to be evil to be disastrous. Misaligned is plenty enough. https://en.wikipedia.org/wiki/Instrumental_convergence | |
| ▲ | esafak 11 hours ago | parent | prev [-] | | Didn't you read the news about the 'claw that blackmailed an open source maintainer last week? It was autonomous, but it could be turned off. How hard is it to extrapolate from that to an agent that worms its way out of its sandbox? | | |
| ▲ | tsimionescu 9 hours ago | parent [-] | | What makes you think that was an autonomous agent, and not someone playing with AI? |
|
|
|
|
| ▲ | ReptileMan 10 hours ago | parent | prev [-] |
| Censoring models is not safety but safetyism. It is the TSA of the AI world. Safety is making sure the model cannot do anything it isn't allowed to do, even if it wants to. |