fny 2 days ago

Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We easily have enough advancement to change the economy dramatically as is. The adoption isn't there yet.

jerf 2 days ago | parent | next [-]

Even after I explained the exact usage I was invoking, the attractive nuisance of all the science fiction that has gotten attached to the term still prevented you and Quarrelsome from reading my post as written.

I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.

All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.

tim333 2 days ago | parent | next [-]

>the originator of the term ... rather good science fiction

I guess you're thinking of Vernor Vinge, but the term originated with John von Neumann in the 1950s:

>...on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue

Folcon 2 days ago | parent | prev [-]

The most interesting aspect of the dynamics around something like a near-singularity is what I feel is coupled to it:

basically, the ability to reason about first- and second-order effects.

For instance, before the cellphone was invented you could have predicted it; things like Star Trek envisaged a world of portable communication.

The impact the cellphone had was predictable to some people: on the one hand, increased convenience of communication, as well as the end of making a call and wondering who was going to pick up, which was a definite consideration pre-mobile, when you called a place and not a person. Now we just assume that when we call someone we'll get them and not their family.

The second-order effects were less obvious. Ease of access to someone meant being always accessible, so now everyone could be contacted whenever someone wanted them, and that changed the dynamics of life for many. Not to mention the effects of different technologies combining: the personal computer and the mobile phone merging into the smartphone gave everyone a computer in their pocket, let alone adding the internet into the mix.

Each of these changes was completely unpredictable to people pre-cellphone. Once again, compare modern-day Trek with the original series.

I still vividly remember the moment one of the characters in Discovery asked the computer to give her a mirror, the same behaviour as countless people now exploiting the fact that the selfie camera on their phone functionally gives them a portable mirror. That was unpredictable.

So that's one form of being unable to predict the future

But there's another interesting dynamic, I think: each direction of technical development is accelerating, which means we may soon hit the point where only a subject-matter expert can predict, or perhaps even be aware of, what happens in any particular field. So before we reach the point where we can't predict the future at all, we may pass through a strange middle ground where we're constantly surprised by the developments we see around us, and when we look into them we find the new discovery has been around for months or years.

I've certainly experienced that once or twice; however, I'm wondering if it may become the new normal.

lamasery 2 days ago | parent | prev | next [-]

> The adoption isn't there yet.

It's worth noting that after ~50 years [edit: to preempt nitpicking: yes, I know we've been using computers productively quite a bit longer than that, but that's roughly when the computerized office started to really gain traction across the whole economy in developed countries], we've only extracted a tiny proportion of the hypothetical value of computers, period, as far as benefits to the economy and potential for automation go.

I actually think a lot of the real value of LLMs is "just" going to be making accessing a little (only a little!) more of that existing unrealized benefit feasible for the median worker.

My expectation is that we'll also harness only a tiny proportion of the hypothetical value of LLMs. We're just not good enough at organizing work to approach the level of benefit folks think of when they speculate about how transformational these things will be. A big deal? Yes. As big a deal as some suppose? Probably not.

[edit: in positive ways, I mean. I think we're going to see huge boosts in productivity for anti-social enterprises. I'd not want to bet on whether the development of LLMs is going to be net-positive or net-harmful to humanity, not due to the "singularity" or "alignment" or whatever, but because of the sorts of things they're most useful for.]

arbitrary_name a day ago | parent [-]

It's an interesting question: how much more productive would we all be if we were all as savvy/literate/productive with computers as some hypothetical comparator? (I'm not sure programmers are the right comparison to make.)

For example, I am in operations and strategy, but I have always wanted to be more technical because I could see the value for many, many tasks. However, the learning curve was steep, so learning and doing other things drove better returns for me.

Now, LLMs make learning basic concepts and executing simple tasks extremely easy, and I am realizing a higher level of productivity than previously: I used Codex to do a test data migration and then evaluate the data quality. I simply could not have done this before, and it is a meaningful change for me that I can execute on this.

There is no maintenance burden: I don't have to keep the code alive. It simply sped up an otherwise manual, non-repeated task.

I think that's what's so interesting and concerning about this technology: power and productivity will flow more broadly across the workforce. This will produce relative winners and losers, and some who will experience no real change at all.

It's similar to the costs and benefits of mobile devices diffusing technology access: it changed some things and created winners and losers, and yet our daily lives are recognizable to someone from 50 or even more years ago.

balamatom 2 days ago | parent | prev | next [-]

>Why is everyone so damn obsessed with the singularity?

Because they are captives (of a system of incentives that is already "superintelligent" in comparison to any individual) who are hoping for salvation (something to make them free against their will, since it is their will which is captured).

The singularity, then, is the point at which the system itself "finally becomes able to imagine what it is like to be a person" and decides to stop torturing people. IMO, it is unlikely to work out like that.

gilfaethwy 2 days ago | parent | prev | next [-]

We've had enough advancement to change the economy for many decades, but the powers that be have insisted that, despite the lack of need, we continue to toil doing completely unnecessary work, because that's what's required to extend their fiefdoms.

Not that the singularity has any relevance here, either - except maybe that the robots take over, and the billionaires have missed the boat? I don't know.

Quarrelsome 2 days ago | parent | prev | next [-]

Moreover, the singularity makes the crass assumption that a single player takes all. It seems to ignore a future of many, many AI players, or many, many human-plus-AI players, instead.

Furthermore, regardless of how smart one thing is, it cannot win an effectively infinite number of poker games against 7 billion humans, who as a race are cognitively extremely diverse and adaptive.

kaibee 2 days ago | parent | next [-]

> regardless of how smart one thing is, it cannot win an effectively infinite number of poker games against 7 billion humans

AI isn't one thing, though. Really, it's kind of a natural evolution of 'higher-order life'. I think that something like an 'organization' (corps, governments, etc.), once large enough, is at least as alive as a tardigrade. And to the people who are its cells, it is as comprehensible as the tardigrade is to any of its individual cells. So why wouldn't organizations, over all of human history, eventually 'evolve' a better information-processing system than humans making mouth sounds at each other? (Writing was really the first step in this.) Really, if you look at the last 12,000 years of human society as actually being the first 12,000 years of the evolutionary history of 'organizations', it kinda makes a lot of sense. So much of it was exploring the environment, trying replication strategies, etc. And we have a lot of different organizations now, like an evolutionary explosion, where life finds various niches to exploit.

/schizoposting

Quarrelsome 2 days ago | parent | next [-]

> AI isn't one thing though.

What's the single in "singularity" doing then?

My issue is that I feel some people treat intelligence as an integer value and make the crass assumption that a "perfect intelligence" beats all other intelligences, and I just think that's quite a thick way to think about it. A fool can beat an expert over the course of an effectively infinite number of hands because they happen to do something unexpected. Everything is a trade-off; there's no such thing as perfect, and every player has to take risk.

fatata123 2 days ago | parent | prev [-]

[dead]

fzzzy 2 days ago | parent | prev | next [-]

The singularity does no such thing.

Quarrelsome 2 days ago | parent [-]

Well, that's certainly cleared it all up.

ikrenji 2 days ago | parent | prev [-]

That's kind of optimistic. For example, a misaligned super-AI might engineer a virus that wipes out most of the 7 billion humans. That would put a damper on the adaptability of the human race...

Quarrelsome 2 days ago | parent [-]

And then it might overfit to the lack of danger in that aftermath, leaving an opening for those fragmented humans to do something to overthrow it. For all we know, this AI might get bored and decide to make a cure, or turn itself off, or anything really.

tim333 2 days ago | parent | prev | next [-]

>Why is everyone so damn obsessed with the singularity?

I don't think most are - it tends to be regarded as rather cranky stuff, and a lot of people who use the term are a bit cranky.

Even so, AI perhaps overtaking human intelligence is an interesting thing in human history.

afthonos 2 days ago | parent [-]

An interesting thing in AI history. For human history, it’s epochal.

CamperBob2 2 days ago | parent | prev | next [-]

> Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity.

And at the same time, we don't take advantage of the intelligence we already have.

guelo 2 days ago | parent | prev [-]

Because it's happening no matter how much you'd rather ignore it or scoff at it.