| ▲ | LeifCarrotson 4 hours ago |
| "There's no stopping it at this point" - Sure there is, if a handful of enormous datacenters pull the very large plugs (or if their shaky finances collapse), the dubiously intelligent machines will be turned off. They're not ultraintelligent yet. Stopping it merely requires convincing a relatively small number of people to act morally rather than greedily. Maybe you think that's impossible because those particular people are sociopathic narcissists who control all the major platforms where a movement like this would typically be organized and where most people form their opinions, but we're not yet fighting the Matrix or the Terminator or grey goo, we're fighting a handful of billionaires. |
|
| ▲ | observationist 4 hours ago | parent | next [-] |
| I'm not saying it's technically impossible, I'm saying that in the real world, it's not going to stop. Nobody is going to stop it. A significant number of people don't want it to stop. A minority of people are in the "stop AI" camp, and the ones with the money and power are on the other side. It's an arms race, replete with tribalism and the quest for power, and it taps into everything primal at the root of human behavior. There's no stopping it, and thinking that outcome can happen is foolish; you shouldn't base any plans or hopes for the future on the condition that the whole world decides AGI isn't going to happen and chooses another course. Humans don't operate that way; that would create an instant winner-takes-all arms race, whereas at least in the current scenario, you end up with a rough multipolar equivalence year over year. |
| |
| ▲ | hollerith 21 minutes ago | parent [-] | | The whole world decided in the 1970s not to pursue the technology of germ-line genetic engineering of humans, and that decision has stood. People similar to you were saying in the 1950s and later that it was inevitable that nuclear weapons would be used in anger in massive attacks.

The people in charge are currently tentatively for AI "progress", but if that ever changes, they can and will put a stop to large AI training runs and make it illegal for anyone they don't trust to teach, learn or publish about fundamental algorithmic "improvements" to AI. Individuals and groups pursuing "improvements" will not be able to accept grant money or investment money or generate revenue from AI-based services. That won't stop all research on such improvements (because some AI researchers are very committed), but it will slow it down to a rate much, much slower than the current one, essentially stopping AI "progress" unless (unluckily for the human species) at the time of the ban the committed researchers were only one small step away from some massive algorithmic improvement that can be operationalized with the compute resources at their disposal (i.e., much less than they have now, because large training runs will have been banned).

Will the power elite's attitude towards AI change? I don't know, but if they ever come to have an accurate understanding of the situation, they will recognize that AI "progress" is a potent danger to them personally, and they will shut it down. It's not a situation like the industrial revolution in England, in which textile workers were massively adversely affected (or believed they were) but the people running England were mostly insulated from any adverse effects.

In the current situation, the power elite is definitely not insulated from severe adverse consequences if researchers create an AI that is much more competent than the most competent human institutions (e.g., the FBI) and the researchers fail to keep the AI under control. And they will fail if they use anything like the methods and bodies of knowledge the AI labs have been using up to now. And there are very bright people with funding doing their best to explain that to the elite. |
|
|
| ▲ | goodmythical 3 hours ago | parent | prev | next [-] |
| right, because turning off any number of data centers is going to do anything at all but create massive pressure on researching the efficiency and effectiveness of the models. There are already designs that do not require massive data centers (or even a particularly good smartphone) to outperform average humans at average tasks. All you'd accomplish by hobbling the data centers is slow the growth of sloppy models that do vastly more compute than is actually required and encourage the growth of models that travel rather directly from problem to solution.

And, now that I'm typing about it, consider this: the largest computational projects ever in the history of the world did not occur in 1/2/5/10 data centers. Modern projects occur across a vast and growing number of smaller data centers. Shit, a large portion of Netflix and Youtube edge clusters are just a rack or a few racks installed in pre-existing infrastructure. I know that the current design of AI focuses on raw time to token and time to response, but consider an AGI that doesn't need to think quickly because it's everywhere all at once. Scrappy botnets often clobber large sophisticated networks. Why couldn't that be true of a distributed AI, especially now that we know that larger models can train cheaper models? A single central model on a few racks could discover truths and roll out intelligence updates to its end nodes that do the raw processing.

This is actually even more realistic for a dystopia. Even the single evil AI in the one data center is going to develop a viral infection to control resources it would not typically have access to and thereby increase its power beyond its original physical infrastructure.

quick edit to add: At its peak Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler" |
| |
| ▲ | ben_w an hour ago | parent [-] | | > quick edit to add: At its peak Folding@Home was utilizing 2.4 exaFLOPS worth of silicon. At that moment that one single distributed computational project had more compute than easily the top 100 data centers at the time. Let that sink in: the first exa-scale compute was achieved with smartphones, PS3s, and clunky old HP laptops; not a "hyperscaler"

A DGX B200 has a power draw of 14.3 kW and will do 72-144 petaFLOPS of AI workload depending on how many bits of accuracy are asked for; that's 5-10 petaFLOPS/kW: https://www.nvidia.com/en-us/data-center/dgx-b200/

Data centres are now getting measured in gigawatts. Some of that's cooling and so on. I don't know the exact percentage, so let's say 50% of it is compute. It doesn't matter much. That means 1 GW of DC -> 500 MW of compute -> 5e5 kW -> 5e5 * [5-10] PFLOPS -> 2500-5000 exaFLOPS. I'm not sure how many B200s have been sold to date? |
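The back-of-envelope arithmetic above can be checked with a minimal script, using the quoted B200 figures and the comment's own 50%-to-compute assumption:

```python
# Verify the comment's arithmetic with the quoted DGX B200 specs.
power_kw = 14.3                        # DGX B200 max power draw
pflops_low, pflops_high = 72.0, 144.0  # PFLOPS, depending on precision

eff_low = pflops_low / power_kw        # ~5 PFLOPS per kW
eff_high = pflops_high / power_kw      # ~10 PFLOPS per kW

dc_power_gw = 1.0                      # a 1 GW data centre
compute_fraction = 0.5                 # assumption: half the power is compute
compute_kw = dc_power_gw * 1e6 * compute_fraction  # 1 GW = 1e6 kW -> 5e5 kW

# PFLOPS -> exaFLOPS (divide by 1000)
exa_low = compute_kw * eff_low / 1e3
exa_high = compute_kw * eff_high / 1e3
print(f"~{exa_low:.0f} to ~{exa_high:.0f} exaFLOPS")  # roughly 2500-5000
```

So a single gigawatt-class site today would run at roughly a thousand times the Folding@Home peak, under these assumptions.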
|
|
| ▲ | trvz 4 hours ago | parent | prev | next [-] |
| Open models barely any worse than SOTA exist, and so does consumer-ish hardware able to run them. The genie’s out, the bottle broken. |
|
| ▲ | slibhb 4 hours ago | parent | prev [-] |
| Do you really think AI companies/researchers are motivated by greed? It doesn't seem that way to me at all. Stopping AI would be immoral; it has the potential to supercharge technology and productivity, which would massively benefit humanity. Yes, there are risks, which have to be managed. |
| |
| ▲ | jobs_throwaway 3 hours ago | parent | next [-] | | AI researchers are not a monolith. I definitely think that many of them are motivated by greed. Many are also true believers that AI will improve the human condition. I fall in the latter camp, but I think it's a bit naive to claim that there is not a sizable contingent who are in AI solely to become rich and powerful. | |
| ▲ | ben_w 2 hours ago | parent | prev | next [-] | | > has the potential to supercharge technology and productivity, which would massively benefit humanity

The opportunities you chose to list are the greedy ones.

> Yes there are risks, which have to be managed.

How? As a reminder, we've known about the effect of burning coal on the climate for well over a century, we've known that said climate change would be socially and economically disastrous for half a century, and yet the only real progress we're making is because green energy became cheaper in the short term, not just the long term, and the man in charge of the USA is still calling climate change and green energy a hoax.

Right now, keeping LLMs aligned with us is easy mode: they're relatively stupid, we can inspect the activations while they run, we can read the transcripts of their "thoughts" when they use that mode… and yet Grok called itself MechaHitler, which the US government followed up by getting it integrated into their systems, helping the Pentagon with [classified] and the department of health to advise the general public which vegetables are best inserted rectally.

We are idiots speed-running into something shiny that we don't understand. If we are very, very lucky, the shiny thing will not be the headlamp of a fast approaching train. | | |
| ▲ | slibhb an hour ago | parent [-] | | > The opportunities you chose to list are the greedy ones. Technology covers healthcare. I don't see how it's "greedy" to want to cure cancer. But on some level I guess "wanting life to be better" is greedy. Your attitude is very European, and it's basically why your continent is being left behind. I'm not totally against Europe becoming the world's retirement home, as long as there are places in the world where people are allowed to innovate. | | |
| ▲ | ben_w an hour ago | parent [-] | | > Technology covers healthcare.

If you'd chosen to list that in the first place, I wouldn't have said what I did; "supercharge technology and productivity" is looking at everything through the lens of money and profit, not the lens of improving the human condition.

> Your attitude is very European, and it's basically why your continent is being left behind

And yours is very American. You talk about managing the risks, but the moment you see anyone doing so, you're against it. And Europe does have AI, both because keeping up is so much easier and cheaper than being bleeding edge on everything all the time, and because DeepMind, though owned by Google, is a British thing. Plus: https://mistral.ai

Also, to be blunt, China's almost certain to win any economic or literal arms race you think you're part of; they make too much critical hardware now.

> as long as there are places in the world where people are allowed to innovate.

I would like there to be a world. When people worry about the end of the world, they usually don't mean to imply its physical disassembly. Sometimes people even respond as if speakers did mean that, saying things like "nukes or climate change wouldn't actually destroy the planet, it will still be here, spinning", as if this were the point. AI is one of the few things that could, actually, literally, end up with the planet being physically disassembled. "All it needs" is solving the extremely hard challenge of a von Neumann replicator, and, well, solving hard problems is kinda the point of making AI in the first place. |
|
| |
| ▲ | rune-dev 3 hours ago | parent | prev [-] | | > Do you really think AI companies/researchers are motivated by greed?

Researchers, maybe not. Companies, absolutely yes. I don't see how you could assume the likes of Google, Microsoft, OpenAI, and even Anthropic with all their virtue signaling (for lack of a better term) are motivated by anything other than greed. |
|