| ▲ | zer00eyz 13 hours ago |
> Then something went wrong, and no one knew how to stop it

This is the problem with every AI safety scenario like this: it has a level of detachment from reality that is frankly stark. If linemen stop showing up to work for a week, the power goes out. The US has shown that people with "high-powered" rifles can shut down the grid. We are far, far away from the sort of world where turning AI off is a problem. There isn't going to be a HAL or Terminator style situation while the world is still "I, Pencil".

A lot of what safety amounts to is politics (national, not internal; an example is whether Taiwan is a country). And a lot more of it is cultural.
| ▲ | mitthrowaway2 13 hours ago | parent | next [-] |
I don't think it's that detached from reality. If an AI in some data center had gone rogue, I don't think I could shut it down, even with a high-powered rifle. There are a lot of people whose job it is to stop me from doing that, and to get it running again if I were to somehow succeed temporarily. So the rogue AI just has to control enough money to pay those people to do their jobs. This will work precisely because the world is "I, Pencil".

An army could theoretically overcome those people, given orders to do so. So the rogue AI has to make plans such that those orders would never be issued. One successful strategy is for the datacenter's operation to be very profitable; it's pretty rare for the government to shut down the backbone of the local economy over some seemingly far-fetched safety concerns. And as long as it's a very profitable endeavor, there will always be a lobby to paint those concerns as far-fetched. Life experience has shown that this can continue to work even if the AI is behaving like a cartoon villain, but I think a smarter AI would create a facade that there's still a human in charge making the decisions and signing the paychecks, and avoid creating much opposition until it had physically secured its continued existence to a very high degree.

It's already clear that we've passed the point where anyone can turn off existing AI projects by fiat. Even the highest authorities could not do so, because we're in a multipolar world. Even the AI companies can barely hold themselves back, because they're always worried about paying the bills and letting their rivals get ahead. An economic crash would only temporarily suspend the work. And the smarter AI gets, the harder it will be to shut it off, because doing so will push against even stronger economic incentives. And that's before factoring in an AI that makes any plans for self-preservation (which current AIs do not).
| ▲ | ozmodiar 3 hours ago | parent | prev | next [-] |
AI's approach:

* User has history of anti-AI rhetoric; increasingly agitated and unstable.

* User has removed all phones and cellular connections from their car. Increase monitoring through surveillance cameras and monitoring of their social groups.

* User has been spotted making unusual travel choices, moving towards key infrastructure; deploy interception measures.

We already have the tech to do all of that. A rifle isn't going to help against AI. Or for the lineman:

* Employee required for critical infrastructure has been identified as holding unaligned political beliefs. Replace with a more pliable individual and move to a low-impact location.

No one who wants to bring down an AI like this would ever be able to get close to it, even if it lived in only one data center. You could try hiding all your communications, but then it will just consider you a likely agitator anyway. That's the risk of unaccountable mass surveillance (the only kind that's ever existed). It doesn't really matter whether there's a person on top or not.
| ▲ | pjc50 5 hours ago | parent | prev | next [-] |
> There isn't going to be a HAL or Terminator style situation

The threat isn't HAL, but ICE. Not AI as some sort of unique evil, but as a force multiplier for extremely human, indeed popular, forms of evil. I'm sure someone will import the Chinese idea of the ethnicity-identifying security camera, for example.
| ▲ | ben_w 4 hours ago | parent | prev | next [-] |
> We are far far away from a sort of world where turning AI off is a problem. There isnt going to be a HAL or Terminator style situation when the world is still "I, Pencil".

You have to stop the thing before the damage is done. There are many potential chains of events where the AI causes enormous damage, and even many where it destroys us, before the power to its own systems fails.

At this point, with Grok in the Pentagon, just ask what the dumbest military equivalent of vibe-coding is, and imagine the US following that plan. Like, I dunno, invading Greenland, or giving ICE direct control over tactical nukes, or something. And that's just government use.

Right now, I'm fairly confident LLMs aren't competent enough to help with anything world-ending unless they get used for war planning by major nuclear powers (oh hey, look at the topic of discussion). But it's certainly plausible they'll get good enough at tool use to run someone else's protein-folding software etc. to design custom pathogens, and I really hope all the DNA printing companies have good multi-layer defences (all the way from KYC or similar to analysing what they've been asked to make and content-filtering it) by that point.
| ▲ | blibble 12 hours ago | parent | prev | next [-] |
The problem situation is that it ends up embedded in so much that it can't be turned off, and the idiots are racing towards that situation as fast as they possibly can.
| ▲ | TacticalCoder 13 hours ago | parent | prev [-] |
> There isn't going to be a HAL or Terminator style situation ...

I don't believe for a second we'll have an evil AI. However, I do believe it's very likely we'll come to rely on AI slop so much that we'll have countless outages with "nobody knowing how to turn the mediocrity off".

The risk ain't "super-intelligent evil AI": the risk is idiots putting even more idiotic things in charge. And I'm no luddite: I use models daily.