measurablefunc 4 days ago

If you use non-constructive reasoning¹ then you can argue for basically any outcome & even convince yourself that it is inevitable. The basic example is as follows: there is no scientific or physical principle that can prevent the birth of someone much worse than Hitler, & therefore if people keep having children, one of those children will inevitably be someone who causes unimaginable death & destruction. My recommendation is to avoid non-constructive inevitability arguments that use our current ignorant state of understanding of physical laws as the main premise, b/c it's possible to reach any conclusion from that premise & convince yourself that the conclusion is inevitable.

¹https://gemini.google.com/share/d9b505fef250

wvbdmp 4 days ago

I agree that the mere theoretical possibility isn’t sufficient for the argument, but you’re missing the much less refutable component: that the inevitability is actively driven by universal incentives of competition.

But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…

measurablefunc 4 days ago

My current guess is ecological collapse & increasing frequency of system shocks & disasters. Basically Blade Runner 2049 + Children of Men type of outcome.

marcus_holmes 4 days ago

None of them.

Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelical Rapture that is going to happen any day now, but never does.

You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.

This is human psychology at work.

dcanelhas 4 days ago

If you look at timescales large enough, you will find that plenty of extinction-level events actually do happen (the Anthropocene is right here).

We are living in a historically exceptional time of geological, environmental, and ecological stability. I think that saying that nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence of your future safety. It's a bold call to inaction.

marcus_holmes 3 days ago

Obviously this is all true. There was an event in the 6th century (the volcanic winter of 536) that meant we had no summer and crops failed for years; we all almost starved then. And that was only the most recent of these types of events.

It's not that it can't happen. It obviously can. I'm more talking about the human belief that it will happen, and in our lifetime. It probably won't.

potsandpans 4 days ago

"nothing ever happens."

The observation is: humans tend to think that because annihilation hasn't happened yet, it never will.

In fact, _anything_ could happen. Past performance does not guarantee future results.

If you need cognitive behavioral therapy, fine.

But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.

What's somewhat funny here is that if you're wrong, it doesn't matter. But that isn't the same as being right.

> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop

And yet there _will_ (eventually) be one generation that is right.

chrisco255 4 days ago

> And yet there _will_ (eventually) be one generation that is right.

Most likely outcome would be that humans evolve into something altogether different rather than go extinct.

toss1 4 days ago

The Fermi Paradox might want to have a word here...

Particularly considering the law of large numbers in play: incalculably many chances have so far produced only one example of technologically-capable life (ours), and zero examples of a tech species evolving into something else or even passing the Great Filter.

chrisco255 3 days ago

The Fermi Paradox overestimates the likelihood of intelligent life outside of Earth. We haven't even found hard evidence of life anywhere outside of our planet. There isn't even a verifiably hospitable planet for water-based lifeforms within dozens of light-years of Earth. Even if a hospitable planet exists within a range we can one day reach, unless it has the same volcanic properties and makeup as Earth, it's most probable that life itself never developed there.

Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth; about a million unlikely events had to converge for that to occur. It's not a natural consequence of evolution but an accident of Earth's unique history: several near-extinction-level events and drastic climate changes had to occur to make it possible.

The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.

jackphilson 4 days ago

Irrelevant but I like this pattern of using Gemini (or AI outputs in general) as sources. Please continue to do so and I encourage any readers to also adopt this pattern. I will also try to implement this pattern.

measurablefunc 4 days ago

The sources are in the report. Gemini provides actual references for all the claims made. You'd know that if you actually looked, but lack of intellectual rigor is expected when people are afraid of actually scrutinizing their beliefs of non-constructive inevitability.

jrave 4 days ago

Maybe you misread the post you're answering to, or are you suspecting sarcasm? The poster commended your usage of the footnote with the Gemini convo, as far as I can tell.

measurablefunc 4 days ago

Laid it on a little too thick to be sincere & more generally I don't comment on internet forums to be complimented on my response style. Address the substance of my arguments or just save yourself the keystrokes.

jackphilson 4 days ago

It was a compliment, and I was hoping to nudge the behavior of other HN commenters.

andrepd 4 days ago

If you really can't see the irony of using AI to make up your thoughts on AI then perhaps there's keystrokes to be saved on your end as well.

measurablefunc 4 days ago

I recommend you address the content & substance of the argument in any further responses to my posts or if you can't do that then figure out a more productive way to spend your time. I'm sure there is lots of work to be done in automated theorem proving.

imtringued 4 days ago

This isn't just an AI thing. There are a lot of non-constructive ideologies, like communism, where simply getting rid of "oppressors" will magically unleash the promised utopia. When you give these people a constructive way to accomplish their goals, they will refuse, call you names, and show their true colors. Their criticism is inherently abstract and can never take a concrete form, which also makes it untouchable by outside criticism.

jjk166 4 days ago

I'm pretty sure a lot of work has gone into making institutions resistant to a potential future super-Hitler. Whether those efforts will be effective or not, it is a very real concern, and it would be absurd to ignore it on the grounds of "there is probably some limit to tyranny we're not yet aware of which is not too far beyond what we've previously experienced." I would argue a lot more effort should have gone into preventing the original Hitler, whose rise to power was repeatedly met with the refrain "How much worse can it get?"