Why the Technological Singularity May Be a "Big Nothing"
7 points by starchild3001 a day ago | 8 comments

(A counterpoint to the prevailing narrative that the singularity will be highly discontinuous and disruptive)

The concept of the technological singularity, a hypothetical future event where artificial superintelligence surpasses human intelligence and leads to unforeseeable changes in human civilization, has been a topic of fascination and concern for many years. However, there are several reasons to believe that the singularity may not be as disruptive or as significant as some predict. In this essay, we will explore five key reasons why the technological singularity might be a "big nothing."

1. Resistance to Adopting Superintelligence

One of the main reasons why the singularity may not have a profound impact on daily life is that people will likely be unwilling to recognize and listen to a "so-called" superintelligent being. Humans have a natural tendency to be skeptical of authority figures, especially those that claim to have superior knowledge or abilities. Even if a superintelligent AI were to emerge, many people would likely question its credibility and resist its influence in their daily decision-making processes.

History has shown that people often prefer to rely on their own judgment and intuition rather than blindly following the advice of experts or authority figures. This tendency is likely to be even more pronounced when it comes to an artificial intelligence, as people may view it as a threat to their autonomy and way of life. As a result, the impact of superintelligence on society may be limited by people's willingness to accept and integrate its guidance into their lives.

2. Difficulty in Identifying Superintelligence

Another reason why the singularity may not be as significant as some believe is that it will be very challenging to define and recognize superintelligence. Intelligence is a complex and multifaceted concept that encompasses a wide range of abilities, including reasoning, problem-solving, learning, and creativity. Even among humans, there is no universally accepted definition or measure of intelligence, and it is often difficult to compare the intelligence of individuals across different domains or contexts.

Given this complexity, it will be even more challenging to determine whether an artificial intelligence has truly achieved superintelligence. Even if an AI system demonstrates remarkable abilities in specific tasks or domains, it may not necessarily be considered superintelligent by everyone. There will likely be ongoing debates and disagreements among experts and the general public about whether a particular AI system qualifies as superintelligent, which could limit its impact and influence on society.

3. Limitations of Artificial Intelligence

Fundamental differences between machine intelligence and human intelligence may permanently limit the scope and applicability of AI. This can be understood through analogies: planes can fly, but they are not a replacement for birds; submarines can swim, but they are not a replacement for fish. Likewise, a machine built to mimic human intelligence may never be a perfect replacement for human intelligence or intellect, due to structural incompatibilities and differences (biological organism vs. silicon machine). The gap may remain this way for the foreseeable future.

Contemporary AI systems predominantly rely on narrow, domain-specific algorithms trained on vast datasets. They lack the general intelligence and versatility that humans possess, which enables us to learn from experience, apply knowledge across diverse domains, and navigate novel scenarios. The degree to which silicon machines can emulate human capabilities remains uncertain, even if they eventually surpass us in specific areas such as information retrieval, logical reasoning, textual Q&A, analysis, scientific research and discovery.

<To be continued>

atleastoptimal 21 hours ago | parent | next [-]

> Even if a superintelligent AI were to emerge, many people would likely question its credibility and resist its influence in their daily decision-making processes.

Lol is this for real? If a superintelligent AI existed, it wouldn't need humans to give it permission; if it wanted to, it could outmaneuver any human system operating around it.

bilsbie a day ago | parent | prev | next [-]

Big evidence for this idea is that no businesses or governments are that interested in intelligence. If intelligence is so valuable, wouldn't someone be searching out and hiring 99.9th-percentile high-IQ individuals from all over the world?

Or another way to look at it, we already have way more good ideas than we know what to do with. Putting them into practice is the bottleneck.

dv_dt a day ago | parent [-]

We are limited by the variability of capital deployment, with too much capital concentrated in the hands of too few decision makers. Inequality is an indirect measure of this, and economic data generally correlates higher inequality with lower economic growth.

I think inequality causes a local-maximization pattern of capital/company optimization, making more profits for individual companies but dampening broader competition and exploration of alternatives, which in turn is what kills higher growth.

Fewer, larger companies also increase singularity blindness, because there are fewer, larger investment directives.

andyjohnson0 19 hours ago | parent | prev | next [-]

Meta, but I wish people would think twice before using Ask as a blogging platform.

imvetri a day ago | parent | prev | next [-]

You are a singularity. How can an AI singularity surpass you?

adyashakti a day ago | parent | prev | next [-]

you're likely correct; but, my friend, that view doesn't drive maximizing stakeholder value. on with the hype!

starchild3001 a day ago | parent | prev | next [-]

4. Ethical and Regulatory Challenges

The development and deployment of superintelligent AI systems will likely face significant ethical and regulatory challenges that could limit their impact on society. There are many concerns about the potential risks and negative consequences of advanced AI, such as job displacement, privacy violations, and the misuse of AI for malicious purposes.

To mitigate these risks, there will likely be a need for robust ethical frameworks, safety protocols, and regulatory oversight to ensure that superintelligent AI systems are developed and used in a responsible and beneficial manner. However, establishing and enforcing these frameworks will be a complex and challenging process that may slow down the development and adoption of superintelligent AI.

Moreover, there may be public resistance and backlash against the use of superintelligent AI in certain domains, such as decision-making roles that have significant consequences for individuals and society. This resistance could further limit the impact and influence of superintelligent AI on daily life.

5. Gradual Integration and Adaptation

Finally, even if superintelligent AI does emerge, its impact on society may be more gradual and less disruptive than some predict. Throughout history, humans have shown a remarkable ability to adapt to and integrate new technologies into their lives. From the invention of the printing press to the rise of the internet, technological advancements have often been met with initial resistance and skepticism before eventually becoming an integral part of daily life.

Similarly, the integration of superintelligent AI into society may be a gradual process that unfolds over many years or even decades. Rather than a sudden and dramatic singularity event, the impact of superintelligent AI may be more incremental, with people slowly learning to work alongside and benefit from these advanced systems.

Moreover, as superintelligent AI becomes more prevalent, humans may adapt by developing new skills, roles, and ways of living that complement rather than compete with these systems. This gradual adaptation could help to mitigate some of the potential negative consequences of superintelligent AI and ensure that its benefits are more evenly distributed across society.

In conclusion, while the idea of a technological singularity driven by superintelligent AI is certainly intriguing, there are several reasons to believe that its impact on society may be less significant and disruptive than some predict. From resistance to recognizing and listening to superintelligent systems to the challenges of defining and achieving true superintelligence, there are many factors that could limit the influence of advanced AI on daily life. Moreover, the gradual integration and adaptation of superintelligent AI into society may help to mitigate some of the potential risks and negative consequences associated with this technology. As such, while the development of superintelligent AI is certainly an important and exciting area of research and innovation, it may not necessarily lead to the kind of dramatic and world-changing singularity event that some envision.

(This article was written in collaboration with an AI. Its title, the first two arguments and major edits to the third idea came from the human author. The topic and arguments are highly inspired by Vernor Vinge, who passed away this past week, and his very influential essay.)

sunscream89 a day ago | parent | prev [-]

You make some good points, and a few sound like your LLM carrying on.

Firstly, let us consider a “singularium”. Not one unified whole and universal singularity, like a god being born and the air turns to music and light for everyone around the world at once.

Come on, it’s a bubble that contains everything it envelops.

So somewhere in the world right now, there are any number of singulariums affected by superior intelligence or potential of some kind or another, and you are merely oblivious to them and could not access them even if you were aware.

I hope I'm chipping away at the delusion: you, and what you think you know of the established world, are a kind of bubble. Some don't have access to your superior intelligence and technology, you don't have access to that of the wealthy, and somewhere, somehow, there are those in this world who have what can only be called superior intellects, knowledge, and technology.

If we break it down in this way, that everyone is already living their own lot in life of potential and self deception, we may consider super intelligence beyond AGI.

You have the same difficulty seeing those in your world for their superior advancements over convention as you would relating to something living in one of the gargantuan data centers.

I actually find it amazing ordinary people don’t know their own brains may be honed into a “super intelligence” over time. It doesn’t make you levitate or shoot flames out of the ass, however it does reframe your views of reality and empower you even in silence.