| ▲ | keiferski 18 hours ago |
| One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw; I think I’ve recommended it here on HN at least half a dozen times.) As a result, they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is and how it will be in the future - and so fail to adjust their positions in light of that awareness. For example: this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society. |
|
| ▲ | evantbyrne 11 hours ago | parent | next [-] |
| I'm pretty bearish on the idea that AGI is going to take off anytime soon, but I read a significant amount of theology growing up and I would not describe the popular essays from, e.g., LessWrong as religious in nature. I also would not describe them as appearing poorly read. The whole "look, they just have a new god!" line is a common trope in religious apologetics that is usually just meant to distract from the author's own poorly constructed beliefs. Perhaps such a comparison is apt for some people in the inevitable-AGI camp, but their worst arguments are not where we should be focusing. |
| |
| ▲ | gspencley 10 hours ago | parent | next [-] | | Philosophy and religion are not mutually inclusive, though one can certainly describe a religious belief as being a philosophical belief. Even a scientifically inclined atheist has philosophical ideas grounding their world view. The idea that the universe exists as an objective absolute with immutable laws of nature is a metaphysical idea. The idea that nature can be observed and that reason is a valid tool for acquiring knowledge about nature is an epistemological idea. Ethics is another field of philosophy, and it would be a mistake to assume a universal system of ethics that has been constant throughout all cultures across all of human history. So while I certainly agree that there is a very common hand-wave of "look, the atheists have just replaced God with a new 'god' by a different name", you don't have to focus on religion, theology and faith-based belief systems to identify different categories of philosophical ideas and how they have shaped different cultures, their beliefs and behaviours throughout history. A student of philosophy would identify the concept of "my truth" as being an idea put forward by Immanuel Kant, for example, even though the person saying it doesn't know that that's the root of the idea that reality is subjective. Similarly, the empirically grounded scientist would be recognized as following in the footsteps of Aristotle, and the pious bible thumper as parroting ideas published by Plato. The point is that philosophy is not the same thing as religion, and philosophy directly shapes how people think, what they believe and therefore how they act and behave. And it's kind of uncanny how an understanding of philosophy can place historical events in context, and what kinds of predictive capabilities it has when it comes to human behaviour in the aggregate. | | |
| ▲ | staunton 2 hours ago | parent [-] | | This sounds very educated but I don't really see what it has to do with the comment you're responding to (or with AI). |
| |
| ▲ | miningape 10 hours ago | parent | prev | next [-] | | While it's a fair criticism, just because someone doesn't believe in a god doesn't mean the religious hardware in their brain has been turned off. It's still there and operational - I don't think it's a surprise that this hardware's attention would then be automatically tuned to a different topic. I think you can also see this in the intensification of political discussion, which has a similar intensity to religious discussions 100-200+ years ago (i.e. Protestant reformation). Indicating that this "religious hardware" has shifted domains to the realm of politics. I believe this shift can also be seen through the intense actions and rhetoric we saw in the mid-20th century. You can also look at all of these new age "religions" (spiritualism, horoscopes, etc.) as that religious hardware searching for something to operate on in the absence of traditional religion. | | |
| ▲ | buu700 8 hours ago | parent | next [-] | | I agree that modern hyper-online moralist progressivism and QAnonism are just fresh coats of paint on religion, but that isn't similar to AI. AI isn't a worldview; it's an extremely powerful tool which some people happen to be stronger at using than others, like computers or fighter jets. For people who empirically observe that they've been successful at extracting massive amounts of value from the tool, it's easy to predict a future in which aggregate economic output in their field by those who are similarly successful will dwarf that of those who aren't. For others, it's understandable that their mismatched experience would lead to skepticism of the former group, if not outright comfort in the idea that such productivity claims are dishonest or delusional. And then of course there are certainly those who are actually lying or deluded about fitting in the former group. Every major technology or other popular thing has some subset of its fandom which goes too far in promotion of the thing to a degree that borders on evangelical (operating systems, text editors, video game consoles, TV shows, diets, companies, etc.), but that really has nothing to do with the thing itself. Speaking for myself, anecdotally, I've recently been able to deliver a product end-to-end on a timeline and level of quality/completeness/maturity that would have been totally impossible just a few years ago. The fact that something has been brought into existence in substantially less time and at orders of magnitude lower cost than would have been required a few years ago is an undeniable observation of the reality in front of me, not theological dogma. It is, however, a much more cognitively intense way to build a product — with AI performing all the menial labor parts of development, you're boxed into focusing on the complex parts in a far more concentrated time period than would otherwise be required. 
In other words, you no longer get the "break" of manually coding out all the things you've decided need to be done and making every single granular decision involved. You're working at a higher level of abstraction, and your written output for prompting is far more information-dense than code. The skills required are also a superset of those required for manual development; you could be the strongest pre-LLM programmer in the world, but if you're lacking in areas like human language/communication, project/product management, the ability to build an intuition for "AI psychology", or thinking outside the box in how you use your tools, adapting to AI is going to be a struggle. It's like an industry full of mechanics building artisan vehicles by hand suddenly being handed budgets to design and implement assembly lines; they still need to know how to build cars, but the nature of the job has now fundamentally changed, so it's unsurprising that many or even most who'd signed up for the original job would fail to excel in the new job and rationalize that by deciding the old ways are the best. It's not fair, and it's not anyone's fault, but it's important for us all to be honest and clear-eyed about what's really happening here. Society as a whole will ultimately enjoy some degree of greater abundance of resources, but in the process a lot of people are going to lose income and find hard-won skills devalued. The next generation's version of coal miners being told to "learn to code" will be coders being told to "learn to pilot AI". | | |
| ▲ | tsimionescu 5 hours ago | parent [-] | | > It's not fair, and it's not anyone's fault, but it's important for us all to be honest and clear-eyed about what's really happening here. Or we can just refuse this future and act as a society to prevent it from happening. We absolutely have that power, if we choose to organize and use it. | | |
| ▲ | buu700 5 hours ago | parent [-] | | Sure, but how so? If I'm understanding your argument correctly, it sounds like you may be implying that we should escalate the war on general-purpose computing and outlaw generative AI. If we were to consider that, then to what end? If you accept my framing of the long-term implications of LLMs on the industry, then what you're suggesting is effectively that we should deprive society of greater prosperity for the benefit of a small minority. Personally, I'd rather improve democratization of entrepreneurship (among other things) than artificially prop up software engineering salaries. And let's say the US did all that. What then? We neuter our economy and expect our adversaries to just follow suit? More likely it hobbles our ability to compete and ultimately ushers in an era of global hegemony under the CCP. |
|
| |
| ▲ | svieira 10 hours ago | parent | prev [-] | | Which then leads you to the question "who installed the hardware"? | | |
| ▲ | cootsnuck 9 hours ago | parent [-] | | No, that lead you to that question. It leads me to the question, "Is it really 'religious hardware' or the same ol' 'make meaning out of patterns' hardware we've had for millenia that has allowed us to make shared language, make social constructs, mutually believe legal fictions that hold together massive societies, etc.?" | | |
| ▲ | ryandv 6 hours ago | parent | next [-] | | > It leads me to the question, "Is it really 'religious hardware' or the same ol' 'make meaning out of patterns' hardware They are the same thing. Call it "religion" or "meaning making," both activities can be subsumed by the more encompassing concept and less-loaded term of "psycho-technology," [0] or non-physical tools for the mind. Language is such a psycho-technology, as are social constructs such as law; legal fictions are given memorable names and personified into "religious" figures, such as Libra from astrology or Themis/Lady Justice from Greek mythology. Ancient shamans and priests were proto-wetware engineers, designing software for your brain and providing tools for making meaning out of the world. In modern day we now have psychologists, "social commentators" (for lack of a better term and interpreted as broadly as possible), and, yes, software engineers, amongst other disciplines, playing a similar role. [0] https://www.meaningcrisis.co/episode-1-introduction/ | |
| ▲ | jffhn 8 hours ago | parent | prev | next [-] | | Or: the hardware that generates beliefs about how things should be - whether based on religious or ideological dogma - as opposed to science, which is not prescriptive and can only describe how things are. | |
| ▲ | yubblegum 8 hours ago | parent | prev [-] | | Your entire outlook is based on an assumption: the assumption that the 'emergence of meaning' is a 2nd order epiphenomenon of an organic structure. The 1st order epiphenomenon in your view is of course consciousness itself. None of these assumptions can be proven, yet like the ancients looking at the sky and seeing a moving sun but missing a larger bit of the big picture, you now have a 'theory of mind' that satisfies your rational impulses given a poor diet of facts and knowledge. But hey, once you manage to 'get into orbit' you get access to more facts, and then that old 'installed hardware' theory of yours starts breaking down. The rational position regarding these matters is to admit "we do not have sufficient information and knowledge to make conclusive determinations based on reason alone". Who knows, one day Humanity may make it to orbit and realize the 'simple and self apparent idea' of "everything revolves around the Earth" is false. | | |
| ▲ | dmbche 6 hours ago | parent [-] | | I've enjoyed reading the books of Peter Watts on seemingly this subject (e.g. the sci-fi novel Blindsight, available free on the author's website) | | |
|
|
|
| |
| ▲ | madrox 2 hours ago | parent | prev | next [-] | | I've read LessWrong very differently from you. The entire thrust of that society is that humanity is going to create the AI god. | |
| ▲ | keiferski 5 hours ago | parent | prev | next [-] | | I didn’t say that “it’s just a new god,” I said: The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. This is a more nuanced sentence. | | |
| ▲ | evantbyrne 5 hours ago | parent [-] | | Before that quoted sentence you drew a line from the Reformation to people believing that AI is inevitable, then went on to imply these people may even believe such a thing will happen without the involvement of people. These are generalizations which don't fit a lot of the literature and make their best ideas look a bit sillier than they are. It is situations like these that make me think that analogies are better suited as a debate tactic than as a method of study. |
| |
| ▲ | andai 11 hours ago | parent | prev | next [-] | | Maybe not a god, but we're intentionally designing artificial minds greater than ours, and we intend to give them control of the entire planet. While also expecting them to somehow remain subservient to us (or is that part just lip service)? | | |
| ▲ | yladiz 10 hours ago | parent [-] | | I’m sorry, but are you arguing that an LLM is anywhere near a human mind? Or are you arguing about some other AI? | | |
| ▲ | tsunamifury 8 hours ago | parent [-] | | If you understand the cultural concepts of Adam Curtis’s All Watched Over by Machines of Loving Grace, then yes, we do keep trying to make gods out of inanimate things. And it’s the atheists who continuously do it, claiming they don’t believe in God - just markets, or AI, etc. It’s an irony of ironies. |
|
| |
| ▲ | authorfly 8 hours ago | parent | prev | next [-] | | Would you say LessWrong posts are dogmatic? | |
| ▲ | tsunamifury 8 hours ago | parent | prev [-] | | I just want to comment here that this is the classic arrogant, underread “I reject half of humanity's thoughts” foolishness that OP is referring to. I mean the lack of self-awareness you have here is amazing. | |
| ▲ | evantbyrne 6 hours ago | parent [-] | | To the contrary. I sped through my compsci capstone coursework my first year of college and spent most of the rest of my time in philosophy, psychology, and sociology classrooms. The "hey, if you squint at this thing it looks like religion for the non-religious" perspective is just one I've heard countless times. It is perfectly valid to have a fact-based discussion on whether there is a biological desire for religiosity, but drawing a long line from that to broadly critique someone's well-articulated ideas is pretty sloppy. | |
| ▲ | tsunamifury 3 hours ago | parent [-] | | Quoting your college classes is the first sign of inexperience, but I’ll share some modern concepts. In Adam Curtis’s All Watched Over by Machines of Loving Grace, he makes a pretty long and complete argument that humanity has a rich history of turning over its decision-making to inanimate objects in a desire to discover ideologies we can’t form ourselves amid the growing complexity of our interconnectivity. He tells a history of them constantly failing because the core ideology of “cybernetics” underlies them all and fails to be adaptive enough to match our DNA/body/mind combined cognitive system, especially when scaled to large groups. He makes the second point that humanity and many thinkers constantly also resort to the false notion of “naturalism” as the ideal state of humanity, when in reality there is no natural state of anything, except maybe complexity and chaos. Giving yourself up to something - especially something that doesn’t work - is very much “believing in a false god.” | | |
| ▲ | evantbyrne 2 hours ago | parent [-] | | You seem to be lost. While referencing a TV show may or may not be a rebuttal to a very specific kind of worldview, it is out of place as a response to my post to which you've failed to actually directly reference at all. I'm addressing this point at you personally because we can all see your comments: being nasty to atheists on the internet will never be a substitute for hard evidence for your ideology. | | |
| ▲ | tsunamifury 2 hours ago | parent [-] | | You seem to be profoundly confused. Adam Curtis is a leading thinker and documentarian of our time, and widely recognized in continental philosophy. The fact that you tried to dismiss him as a TV show shows you seem to be completely naïve about the topic you’re speaking about. Second, I’m not being nasty to atheists; I'm speaking specifically about not having false gods, which if anything is a somewhat atheistic perspective. Honestly, what are you trying to say? | | |
| ▲ | evantbyrne 2 hours ago | parent [-] | | Like I said, we can all read your comments. Needs no further elaboration. If I receive a second recommendation for Curtis then I might be inclined to check it out. Take it easy. |
|
|
|
|
|
|
|
| ▲ | endymion-light 16 hours ago | parent | prev | next [-] |
Techno Calvinists vs Luddite Reformists is a very funny image. Agree - although it's an interesting view, I think it's far more related to a lack of ideology and writing where this has emerged from. I find it more akin to a distorted renaissance. There's such a large population of really intelligent tech people that have zero real care for philosophical or religious thought, but still want to create and make new things. This leads them down the first path of grafting for more and more money. Soon, a good proportion of them realise the futility of chasing cash beyond a certain extent. The problem is this belief that they are beyond these issues that have been dealt with since Mesopotamia. Which leads to these weird distorted ideologies, creating art from regurgitated art, creating apps that are made to become worse over time. There's a kind of rush to wealth, ignoring the joy of making things to further humanity. I think LLMs and AI are a genie out of the bottle - it's inevitable, but it's more like linear perspective in drawing or the printing press rather than electricity. Except because of the current culture we live in, it's as if Leonardo had spent his life attempting to sell different variations of linear perspective tutorials rather than creating, drawing and making. |
| |
| ▲ | tsunamifury 8 hours ago | parent [-] | | In Adam Curtis’s All Watched Over by Machines of Loving Grace, he makes a pretty long and complete argument that humanity has a rich history of turning over its decision-making to inanimate objects in a desire to discover ideologies we can’t form ourselves amid the growing complexity of our interconnectivity. He tells a history of them constantly failing because the core ideology of “cybernetics” underlies them all and fails to be adaptive enough to match our DNA/body/mind combined cognitive system, especially when scaled to large groups. He makes the second point that humanity and many thinkers constantly also resort to the false notion of “naturalism” as the ideal state of humanity, when in reality there is no natural state of anything, except maybe complexity and chaos. |
|
|
| ▲ | SwoopsFromAbove 18 hours ago | parent | prev | next [-] |
100%. Not a new phenomenon at all, just the latest bogeyman for the inevitabilists to point to in their predestination arguments. My aim is only to point it out - people are quite comfortable rejecting predestination arguments coming from, e.g., physics or religion, but are still awed by “AI is inevitable”. |
| |
| ▲ | ikr678 17 hours ago | parent [-] | | It's inevitable not because of any inherent quality of the tech, but because investors are demanding it be so and creating the incentives for 'inevitability'. I also think EVs are an 'inevitability', but I am much less offended by the EV future, as they still have to outcompete ICE vehicles, there are transitional options (hybrids), there are public transport alternatives, and at least local regulations appear to be keeping pace with the technical change. AI inevitability so far seems to be inevitable only because I can't actually opt out of it when it gets pushed on me. | | |
| ▲ | mountainb 10 hours ago | parent [-] | | To use John Adams' separation of republics into the categories of "the many, the few, and the one," the few in our current day are unusually conflict-averse, both among each other and with respect to the people. When faced with the current crisis, they look at the options for investment: they see some that will involve a lot of conflict with the many (changing the industrial employment arrangement, rearranging state entitlements), and they see some that avoid conflict or change. Our few as they are got that way by outsourcing anything physical and material as much as possible and making everything "into computer." So they promote a self-serving spiritual belief that because overinvesting in computers got them to their elevated positions, even more computer is what the world needs more than anything else. This approach also mollifies the many in a way that would be easily recognizable in any century to any classically educated person. Our few do not really know what the many are there for, but they figure that they might as well extract from the many through e.g. sports gambling apps and LLM girlfriends. |
|
|
|
| ▲ | roadside_picnic 7 hours ago | parent | prev | next [-] |
The article's main point is that "inevitabilism" is a rhetorical tactic used to frame the conversation in such a way that you can easily dismiss any criticism as denying reality. So drawing comparisons to Reformation ideology wouldn't be particularly meaningful. There's also a bit of irony in that you're presenting the secular view of predestination. As someone who once had a multi-volume set of "Institutes of the Christian Religion" next to him on his bookshelf: the Protestant conception of predestination had very little to do with "offloading of freedom and responsibility", both in theory and in practice. Predestination is founded on the concept that God's grace is given, not earned (unlike the previous Catholic system, which had multiple ways that merit, including cash donations, could be converted into salvation), since no human could earn salvation without the grace of God. But the lesson from this is not "so don't worry about it!", quite the opposite. Calvin's main extension to this was that (paraphrasing) "It's not through good works that we are saved, but through our good works we have evidence of our salvation". You wanted to see the evidence of your salvation, so you did try to do good works, but without the belief that your efforts would ever be enough. This ultimately created a culture of hard work without the expectation of reward. This is part of the focus of Max Weber's "The Protestant Ethic and the Spirit of Capitalism", which argued that this ability to "work without immediate reward" is precisely what enabled Capitalism to take such a strong foothold in the early United States. So even if the article were arguing for "inevitabilism", the framework is still quite distinct from that established in Protestantism. |
| |
| ▲ | regus 3 hours ago | parent [-] | | > God's grace is given not earned (unlike the previous Catholic system ... Catholicism does not hold that you can earn grace. Grace is a gift from God that is freely given. > including cash donations, could be converted into salvation I assume you are referring to selling indulgences. Indulgences are not something that can give you salvation. |
|
|
| ▲ | card_zero 18 hours ago | parent | prev | next [-] |
| Or historicism generally. Hegel, "inexorable laws of historical destiny", that sort of thing. |
|
| ▲ | theSherwood 15 hours ago | parent | prev | next [-] |
| I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future. |
| |
| ▲ | martindbp 13 hours ago | parent [-] | | Yes, nobody is claiming it's inevitable based on nothing, it's based on first principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes. | | |
| ▲ | bonoboTP 13 hours ago | parent | next [-] | | It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc. It's indeed a symptom of working in an environment where everything is just discourse about discourse, and prestige is given to some surprising novel packaging or merger of narratives, and all that is produced is words that argue with other words, and it's all about criticizing how one author undermines some other author too much or not enough and so on. From that point of view, sure, nothing new under the sun. It's all well and good to complain about the boy crying wolf, but when you see the pack of wolves entering the village, it's no longer just about words. Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied prestigious thinkers who think they don't have to stoop so low as to actually look at models and how people use them in reality, that it can all just be dismissed based on ick factors and name-calling like "slop". Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients that point in a certain direction, and that they cannot be moved out of that groove without massive and fragile coordination, due to game-theoretical reasoning, given a certain material state of the world right now out there, outside the page of the "text". | |
| ▲ | keiferski 13 hours ago | parent [-] | | I think you’re missing the point of the blog post and the point of my grandparent comment, which is that there is a pervasive attitude amongst technologists that “it’s just gonna happen anyway and therefore whether I work on something negative for the world or not makes no difference, and therefore I have no role as an ethical agent.” It’s a way to avoid responsibility and freedom. We are not discussing the likelihood of some particular scenario based on models and numbers and statistics and predictions by Very Smart Important People. | | |
| ▲ | theSherwood 12 hours ago | parent | next [-] | | I agree that "very likely" is not "inevitable". It is possible for the advance of AI to stop, but difficult. I agree that doesn't absolve people of responsibility for what they do. But I disagree with the comparison to religious predestination. | |
| ▲ | bonoboTP 13 hours ago | parent | prev [-] | | I'm not sure how common that is... I'd guess most who work on it think that there's a positive future with LLMs also. I mean they likely wouldn't say "I work on something negative for the world". | | |
| ▲ | keiferski 12 hours ago | parent [-] | | I think the vast majority of people are there because it’s interesting work and they’re being paid exceptionally well. That’s the extent to which 95/100 of employees engage with the ethics of their work. |
|
|
| |
| ▲ | welferkj 13 hours ago | parent | prev [-] | | Nobody serious is claiming theological predestination is based on "nothing", either. Talk about poor pattern matching. | |
| ▲ | theSherwood 12 hours ago | parent [-] | | You are, of course, entitled to your religious convictions. But to most people outside of your religious community, the evidence for some specific theological claim (such as predestination) looks an awful lot like "nothing". In contrast, claims about the trajectory of AI (whether you agree with the claims or not) are based on easily-verifiable, public knowledge about the recent history of AI development. | | |
| ▲ | welferkj 11 hours ago | parent [-] | | It is not a "specific theological claim" either, rather a school of theological discourse. You're literally doing free-form association now and pretending to have novel insights into centuries of work on the issue. | | |
| ▲ | theSherwood 11 hours ago | parent [-] | | I'm not pretending to any novel insights. Most of us who don't have much use for theology are generally unimpressed by its discourse. Not novel at all. And the "centuries of work" without concrete developments that exist outside of the minds of those invested in the discourse is one reason why many of us are unimpressed. In contrast, AI development is resulting in concrete changes that are easily verified by anyone and on much shorter time scales. | | |
| ▲ | bonoboTP 10 hours ago | parent [-] | | Relatedly, it would be bordering on impossible to convince Iran about the validity of Augustine, Aquinas or Calvin, but it was fairly easy with nuclear physics. Theology isn't "based on nothing", but the convincing power of the quantum physics books happens to be radically different from Summa Theologiae, even if both are just books written by educated people based on a lot of thought and prior work. |
|
|
|
|
|
|
|
| ▲ | xpe 10 hours ago | parent | prev | next [-] |
| > many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated Why lump philosophy and religion together? I distinguish between philosophical thought and religious thought, to the extent the former is conditionally framed. |
|
| ▲ | charles_f 9 hours ago | parent | prev | next [-] |
> One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. Isn't that a societal trait, though? See English Christians' attitude towards Vikings, requiring baptism (or the prima signatio, kinda baptism-light) before they could deal with them, because they were savage. Or colonists forcing natives to adopt Christianity, because what they had before was "primitive". There was wisdom and thought in both, but in both cases the Christian side "brushed it away". Or capitalism and communism in the cold war. It feels like everyone with a belief system tries to force it onto others. |
|
| ▲ | guelo 16 hours ago | parent | prev | next [-] |
| Sorry I don't buy your argument. (First I disagree with A Secular Age's thesis that secularism is a new force. Christian and Muslim churches were jailing and killing nonbelievers from the beginning. People weren't dumber than we are today, all the absurdity and self-serving hypocrisy that turns a lot of people off to authoritarian religion were as evident to them as they are to us.) The idea is not that AI is on a pre-planned path, it's just that technological progress will continue, and from our vantage point today predicting improving AI is a no brainer. Technology has been accelerating since the invention of fire. Invention is a positive feedback loop where previous inventions enable new inventions at an accelerating pace. Even when large civilizations of the past collapsed and libraries of knowledge were lost and we entered dark ages human ingenuity did not rest and eventually the feedback loop started up again. It's just not stoppable. I highly recommend Scott Alexander's essay Meditations On Moloch on why tech will always move forward, even when the results are disastrous to humans. |
| |
| ▲ | keiferski 16 hours ago | parent | next [-] | | That isn’t the argument of the book, so I don’t think you actually read it, or even the Wikipedia page? The rest of your comment doesn’t really seem related to my argument at all. I didn’t say technological progress stops or slows down, I pointed out how the thought patterns are often the same across time, and the inability and unwillingness to recognize this is psychologically lazy, to oversimplify. And there are indeed examples of technological acceleration or dispersal which were deliberately curtailed – especially with weapons. | | |
| ▲ | TeMPOraL 14 hours ago | parent [-] | | > I pointed out how the thought patterns are often the same across time, and the inability and unwillingness to recognize this is psychologically lazy, to over simplify. It's not lazy to follow thought patterns that yield correct predictions. And that's the bedrock on which "AI hype" grows and persists - because these tools are actually useful, right now, today, across a wide variety of work and life tasks, and we are barely even trying. > And there are indeed examples of technological acceleration or dispersal which was deliberately curtailed – especially with weapons. Name three. (I do expect you to be able to name three, but that should also highlight how unusual that is, and how questionable the effectiveness of that is in practice when you dig into the details.) Also, I challenge you to find even one restriction that actually denies countries useful capabilities that they cannot reproduce through other means. | | |
| ▲ | keiferski 13 hours ago | parent [-] | | Doesn’t seem that rare to me – chemical, biological, nuclear weapons are all either not acceptable to use or not even acceptable to possess. Global governments go to extreme lengths to prevent the proliferation of nuclear weapons. If there were no working restrictions on the development of the tech and the acquisition of needed materials, every country and large military organization would probably have a nuclear weapons program. Other examples are: human cloning, GMOs or food modification (depends on the country; some definitely have restricted this on their food supply), certain medical procedures like lobotomies. I don’t quite understand your last sentence there, but if I understand you correctly, it would seem to me like Ukraine or Libya are pretty obvious examples of countries that faced nuclear restrictions and could not reproduce their benefits through other means. | | |
| ▲ | stale2002 6 hours ago | parent | next [-] | | I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago. The same is not true about LLMs. No, LLMs aren't going to be stopped when anyone with a computer from the last couple years is able to run them on their desktop. (There are smaller LLMs that can be even run on your mobile phone!). The laws required to stop this would be draconian. It would require full government monitoring of all computers. And any country or group that "defects" by allowing people to use LLMs, would gain a massive benefit. | | |
| ▲ | ben_w 3 hours ago | parent | next [-] | | > I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago. You may be surprised to learn that you can make a chemical weapon on your gaming graphics card from 5 years ago. It's just that it will void the warranty well before you have a meaningful quantity of chlorine gas from the salt water you dunked it in while switched on. | |
| ▲ | TeMPOraL 6 hours ago | parent | prev [-] | | Yup. The governments of the world could shut down all LLM providers tomorrow, and it wouldn't change a thing - LLMs fundamentally are programs, not a service. There are models lagging 6-12 months behind current SOTA that you can just download and run on your own GPU today; most research is in the open too, so nothing stops people from continuing it and training new models locally. At this point, AI research is not possible to stop without killing humanity as a technological civilization - and it's not even possible to slow it down much, short of taking the extreme measures Eliezer Yudkowsky was talking about years ago: yes, it would literally take a multinational treaty on stopping advanced compute, and aggressively enforcing it - including (but not limited to) by preemptively bombing rogue data centers as they pop up around the world. |
| |
| ▲ | TeMPOraL 6 hours ago | parent | prev [-] | | > Global governments go to extreme lengths to prevent the proliferation of nuclear weapons. If there were no working restrictions on the development of the tech and the acquisition of needed materials, every country and large military organization would probably have a nuclear weapons program. Nuclear is special due to MAD doctrine; restrictions are aggressively enforced for safety reasons and to preserve status quo, much more so than for moral reasons - and believe me, every country would love to have a nuclear weapons program, simply because, to put it frankly, you're not fully independent without nukes. Nuclear deterrent is what buys you strategic autonomy. It's really the one weird case where those who got there first decided to deny their advantage to others, and most others just begrudgingly accept this state of affairs - as unfair as it is, it's the local equilibrium in global safety. But that's nukes, nukes are special. AI is sometimes painted like the second invention that could become special in this way, but I personally doubt it - to me, AI is much more like biological weapons than nuclear ones: it doesn't work as a deterrent (so no MAD), but is ideal for turning a research mishap into an extinction-level event. > Other examples are: human cloning, GMOs or food modification (depends on the country; some definitely have restricted this on their food supply), certain medical procedures like lobotomies. Human cloning - I'd be inclined to grant you that one, though I haven't checked what's up with China recently. GMO restrictions are local policy issues, and don't affect R&D on a global scale all that much. Lobotomy - fair. But then it didn't stop the field of neurosurgery at all. 
> I don’t quite understand your last sentence there, but if I understand you correctly, it would seem to me like Ukraine or Libya are pretty obvious examples of countries that faced nuclear restrictions and could not reproduce their benefits through other means. Right, the invasion of Ukraine is exactly why no nuclear-capable country will even consider giving nukes up. This advantage cannot be reproduced through other means in enough situations. But I did mean it more generally, so let me rephrase it: Demand begets supply. If there's a strong demand for some capability, but the means of providing it are questionable, then whether or not they can be successfully suppressed depends on whether there are other ways of meeting the demand. Nuclear weapons are, again, special - they have no substitute, but almost everyone gains more from keeping the "nuclear club" closed than from joining it. But even as there are international limits, just observe how far nations go to skirt them to keep the R&D going (look no further than NIF - aka. "let's see how far we can push nuclear weapons research if we substitute live tests with lasers and a lot of computer simulations"). Biological and chemical weapons are effectively banned (+/- recent news about Russia), but don't provide unique and useful capabilities on a battlefield, so there's not much demand for them. (Chemical weapons showing up in the news now only strengthens the overall point: it's easy to refrain from using/developing things you don't need - but then restrictions and treaties fly out the window the moment you're losing and run out of alternatives.) 
Same for full-human cloning - but there is demand for transplantable organs, as well as better substrates for pharmaceutical testing; the former can be met more cheaply through market and black market means, while the latter is driving several fields of research that are adjacent to human cloning, but more focused on meeting the actual demand, coincidentally avoiding most of the ethical concerns raised. And so on, and so on. Circling back to AI, what I'm saying is, AI is already providing too much direct, object-level utility that cannot be substituted by other means (itself being a cheaper substitute for human labor). The demand is already there, so it's near-impossible to stop the tide at this point. You simply won't get people to agree on this. |
|
|
| |
| ▲ | jowea 12 hours ago | parent | prev [-] | | I'd add to this that we have plenty of examples of societies that don't keep up with technological advancement, or "history" more broadly, and get left behind. Competition in a globalized world makes some things inevitable. I'm not agreeing in full with the "AI will change everything" arguments, but those last couple of paragraphs of TFA sound to me like standing athwart history, yelling "Stop!". | | |
| ▲ | m0llusk 9 hours ago | parent [-] | | Communism used to be thought of in this way. It enabled societies to cast off old limitations and make remarkable progress. Until it didn't and Communists found themselves and their modernized society stuck well behind the rest of the world. Perhaps LLMs are a similar trap that will generate many lines of code and imagined images but leave us all stupid and with impaired executive function. |
|
|
|
| ▲ | ygritte 17 hours ago | parent | prev | next [-] |
| > the actor has changed from God to technology Agreed. You could say that technology has become a god to those people. |
| |
| ▲ | xpe 11 hours ago | parent [-] | | What technology? Agriculture? The steam engine? The automobile? Modern medicine? Cryptography? The Internet? LLMs? Nanotechnology? Who are these people? Jonas Salk, widely credited as the inventor of the polio vaccine? Sam Altman, fundraiser extraordinaire? Peter Thiel, exalter of The World-Saving Founders? Ray Kurzweil? Technocrats? Other techno-optimists? Perhaps transhumanists? There are many variations, and they differ by quite a lot. What kind of god? Carl Sagan has a nice interview where he asks a question-asker to define what they mean by “god”. A blind watchmaker? Someone who can hear your prayers? A wrathful smiter of the wicked and (sometimes) the loyal (sorry, Job!)? A very confusing 3-tuple, one element of which birthed another, who died somehow but was resurrected? The essence of nature? The laws of physics? An abstract notion of love? Yeah. These three letters are too vague to be useful unless unpacked or situated in a mutually understood context. The word often fosters a flimsy consensus or a shallow disagreement. |
|
|
| ▲ | nonameiguess 10 hours ago | parent | prev | next [-] |
| It actually seems more to me like dialectical materialism, which started centuries ago and was already secular. It also fits the differences that other commenters have already voiced, in that human actors not only believed in its inevitability, but attempted to bring it about themselves. Multiple global superpowers implemented forced industrialization, cultural reformation, and command economies to bring it about. The difference this time isn't sacred versus secular. It's public versus private. Whereas the purveyors of communism were governments, this is being done by corporations. Well-funded private organizations are led by decision makers who believe strongly that this is the future, that it is inevitable, and that their only hope is to get there first. The actor didn't change from God to technology. It changed from labor to capital. I make no comment on whether they will prove to be more correct than the believers in communism, but the analogy is obvious either way. |
| |
| ▲ | leshow 9 hours ago | parent [-] | | I kinda feel this way too. Reading some of the blog posts by AI "luminaries" I'm struck by how Stalinist they sound. They hold out some utopia that exists in their minds, and they are ready to feed people into the meat grinder to try and make it a reality. Stalin said that this generation would suffer so that the next lived in utopia, and that's kind of the same pitch they are making. I think if we actually cared about making a better world, you'd take steps where each successive step is a positive one. Free healthcare, clean energy investments, etc.. | | |
| ▲ | dragonwriter 9 hours ago | parent [-] | | > I think if we actually cared about making a better world, you'd take steps where each successive step is a positive one. Yeah, but lots of people don't care about that; they care about achieving their visions of power, and they need an excuse to justify other people suffering for them. They aren’t seeking long term improvements at the cost of short term suffering, they are using a mirage of utopia over the hill to sell people a deal which is only suffering, now and for however long they can be kept in line. |
|
|
|
| ▲ | jprokay13 8 hours ago | parent | prev | next [-] |
| Why look to the past when you can rediscover it from “first principles?” /s |
|
| ▲ | isqueiros 17 hours ago | parent | prev [-] |
| This is one of those types of comments to change one's whole world view. > The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. I'm gonna fucking frame that. It goes hard |
| |
| ▲ | daliboru 13 hours ago | parent [-] | | This entire conversation is a masterpiece! Just picture this convo somewhere in nature, at night, by a fire. |
|