JohnMakin 8 days ago

One of a few issues I have with groups like these, is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but I feel it is especially pronounced in communities like this. It also involves quite a bit of navel-gazing that makes me feel a little sick to participate in.

The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.

jl6 8 days ago | parent | next [-]

I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.

Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.

dan_quixote 8 days ago | parent | next [-]

As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
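
A minimal sketch of that stackup arithmetic in Python (the five parts and the ±0.1 mm tolerance are made-up, purely illustrative numbers): worst-case error grows linearly with every part you add, and even the more forgiving statistical (RSS) estimate still grows, just more slowly.

  # Hypothetical assembly: five parts, each with a ±0.1 mm tolerance.
  tolerances = [0.1] * 5

  # Worst-case stackup: individual errors simply add up.
  worst_case = sum(tolerances)                  # ±0.50 mm

  # Statistical (root-sum-square) stackup, assuming independent errors.
  rss = sum(t ** 2 for t in tolerances) ** 0.5  # ~±0.22 mm

  print(f"worst case: ±{worst_case:.2f} mm, RSS: ±{rss:.2f} mm")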

godelski 7 days ago | parent | next [-]

I like this approach. Also having dipped my toes in the engineering world (professionally) I think it naturally follows that you should be constantly rechecking your designs. Those tolerances were fine to begin with, but are they still, now that things have changed? It also makes you think about failure modes. What can make this all come down, and if it does, which way will it fail? Which is really useful because you can then leverage this to design things to fail in certain ways, and now you've got a testable hypothesis. It won't create proof, but it at least helps in finding flaws.

isleyaardvark 7 days ago | parent [-]

The example I heard was to picture the Challenger shuttle, and the O-rings used worked 99% of the time. Well, what happens to the failure rate when you have 6 O-rings in a booster rocket, and you only need one to fail for disaster? Now you only have a 94% success rate.
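
The arithmetic behind that, as a quick sketch (using the parent's illustrative 99% figure, not the real O-ring reliability):

  per_oring_success = 0.99
  n_orings = 6

  # Disaster if any single O-ring fails, so the flight needs all six to hold.
  flight_success = per_oring_success ** n_orings
  print(f"{flight_success:.3f}")  # ~0.941, i.e. roughly a 6% chance of failure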

godelski 6 days ago | parent [-]

IIRC the Challenger o-ring problem was much more deterministic. The flaw was known and was caused by the design not considering the actual operational temperature range. Which, I think there's a good lesson to learn there (and from several NASA failures): the little things matter. It's idiotic to ignore a $10 fix if the damage would cost billions of dollars.

But I still think your point is spot on and that's really what matters haha

ctkhn 7 days ago | parent | prev | next [-]

Basically the same as how dead reckoning your location works worse the longer you've been traveling?

toasterlovin 7 days ago | parent [-]

Dead reckoning is a great analogy for coming to conclusions based on reason alone. Always useful to check in with reality.

ethbr1 7 days ago | parent [-]

And always worth keeping an eye on the maximum possible divergence from reality you're currently at, based on how far you've reasoned from truth, and how less-than-sure each step was.

Maybe you're right. But there's a non-zero chance you're also max wrong. (Which itself can be bounded, if you don't wander too far)

toasterlovin 7 days ago | parent [-]

My preferred argument against the AI doom hypothesis is exactly this: it has 8 or so independent prerequisites with unknown probabilities. Since you multiply the probabilities of each prerequisite to get the overall probability, you end up with a relatively low overall probability even when the probability of each prerequisite is relatively high, and if just a few of the prerequisites have small probabilities, the overall probability basically can’t be anything other than very small.

Given this structure to the problem, if you find yourself espousing a p(doom) of 80%, you’re probably not thinking about the issue properly. If in 10 years some of those prerequisites have turned out to be true, then you can start getting worried and be justified about it. But from where we are now there’s just no way.
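
For what it's worth, a back-of-the-envelope sketch of that multiplication argument (the count of eight prerequisites and every probability below are illustrative assumptions, not estimates of the actual claims):

  from math import prod

  # Eight independent prerequisites, each judged fairly likely on its own.
  optimistic = [0.8] * 8
  print(f"{prod(optimistic):.3f}")  # ~0.168

  # Same chain, but with a few genuinely doubtful steps mixed in.
  mixed = [0.9, 0.9, 0.8, 0.8, 0.7, 0.5, 0.3, 0.2]
  print(f"{prod(mixed):.3f}")       # ~0.011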

robocat 7 days ago | parent | prev | next [-]

I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.

Edit: Couldn't find the article, but AI referenced the Bayesian "Chain of reasoning fallacy".

godelski 7 days ago | parent | next [-]

I think you have this oversimplified. Stringing together inferences can take us in either direction. It really depends on how things are being done and this isn't always so obvious or simple. But just to show both directions I'll give two simple examples (real world holds many more complexities)

It is all about what is being modeled and how the inferences string together. If these are being multiplied, then yes, this is going to decrease, as xy < x and xy < y for every x,y < 1.

But a good counterexample is the classic Bayesian Inference example[0]. Suppose you have a test that detects vampirism with 95% accuracy (Pr(+|vampire) = 0.95) and has a false positive rate of 1% (Pr(+|mortal) = 0.01). But vampirism is rare, affecting only 0.1% of the population. This ends up meaning a positive test only gives us an 8.7% likelihood of a subject being a vampire (Pr(vampire|+)). The solution here is that we repeat the testing. On our second test Pr(vampire) changes from 0.001 to 0.087 and Pr(vampire|+) goes to about 90%, and a third test gets us to about 99%.

[0] Our equation is

                  Pr(+|vampire)Pr(vampire)
  Pr(vampire|+) = ------------------------
                           Pr(+)
And the crux is Pr(+) = Pr(+|vampire)Pr(vampire) + Pr(+|mortal)(1-Pr(vampire))
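
A minimal sketch of that repeated update (same illustrative numbers; each positive test's posterior is fed back in as the next prior):

  def posterior(prior, sensitivity=0.95, false_positive=0.01):
      # Pr(vampire | +) for a given prior Pr(vampire) and one positive test.
      pr_plus = sensitivity * prior + false_positive * (1 - prior)
      return sensitivity * prior / pr_plus

  p = 0.001  # base rate of vampirism
  for test in range(1, 4):
      p = posterior(p)
      print(f"after test {test}: Pr(vampire | +) ~ {p:.3f}")
  # ~0.087, then ~0.90, then ~0.999
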
p1necone 7 days ago | parent | next [-]

Worth noting that solution only works if the false positives are totally random, which is probably not true of many real world cases and would be pretty hard to work out.

godelski 7 days ago | parent [-]

Definitely. Real world adds lots of complexities and nuances, but I was just trying to make the point that it matters how those inferences compound. That we can't just conclude that compounding inferences decreases likelihood

Dylan16807 7 days ago | parent [-]

Well they were talking about a chain, A->B, B->C, C->D.

You're talking about multiple pieces of evidence for the same statement. Your tests don't depend on any of the previous tests also being right.

godelski 7 days ago | parent [-]

Be careful with your description there, are you sure it doesn't apply to the Bayesian example (which was... illustrative...? And not supposed to be every possible example?)? We calculated f(f(f(x))), so I wouldn't say that this "doesn't depend on the previous 'test'". Take your chain, we can represent it with h(g(f(x))) (or (f∘g∘h)(x)). That clearly fits your case for when f=g=h. Don't lose sight of the abstractions.

Dylan16807 7 days ago | parent [-]

So in your example you can apply just one test result at a time, in any order. And the more pieces of evidence you apply, the stronger your argument gets.

f = "The test(s) say the patient is a vampire, with a .01 false positive rate."

f∘f∘f = "The test(s) say the patient is a vampire, with a .000001 false positive rate."

In the chain example f or g or h on its own is useless. Only f∘g∘h is relevant. And f∘g∘h is a lot weaker than f or g or h appears on its own.

This is what a logic chain looks like, adapted for vampirism to make it easier to compare:

f: "The test says situation 1 is true, with a 10% false positive rate."

g: "If situation 1 then situation 2 is true, with a 10% false positive rate."

h: "If situation 2 then the patient is a vampire, with a 10% false positive rate."

f∘g∘h = "The test says the patient is a vampire, with a 27% false positive rate."

So there are two key differences. One is the "if"s that make the false positives build up. The other is that only h tells you anything about vampires. f and g are mere setup, so they can only weaken h. At best f and g would have 100% reliability and h would be its original strength, 10% false positive. The false positive rate of h will never be decreased by adding more chain links, only increased. If you want a smaller false positive rate you need a separate piece of evidence. Like how your example has three similar but separate pieces of evidence.
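
A small sketch of the contrast being drawn here (10% false positive rate per step, as above; the chain is treated as unreliable if any one link produces a false positive, while the independent tests only fail together):

  fp = 0.10

  # Chain f∘g∘h: the conclusion is wrong if any single link is a false positive.
  chain_fp = 1 - (1 - fp) ** 3
  print(f"chained false positive rate: {chain_fp:.2f}")            # ~0.27

  # Three independent tests of the same claim: all three must be wrong at once.
  independent_fp = fp ** 3
  print(f"independent false positive rate: {independent_fp:.3f}")  # 0.001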

godelski 7 days ago | parent [-]

Again, my only argument was that you can have both situations occur. We could still construct a f∘g∘h to increase probability if we want. I'm not saying it cannot go down, I'm saying there's no absolute rule you can follow.

Dylan16807 7 days ago | parent [-]

I don't think you can make a chain of logic f∘g∘h where the probability of the combined function is higher than the probability of f or g or h on their own.

Chain of logic meaning that only the last function updates the probability you care about, and the preceding functions give you intermediate information that is only useful to feed into the next function.

It is an absolute rule you can follow, as long as you're applying it the way it was intended, to a specific organization of functions. It's not any kind of combining, it's A->B->C->D combining. As opposed to multiple pieces that each independently imply D.

Just because you can use ∘ in both situations doesn't make them the same. Whether x∘y∘z is chaining depends on what x and y and z do. If all of them update the same probability, that's not chaining. If removing any of them would leave you with no information about your target probability, then it's chaining.

TL;DR: ∘ doesn't tell you if something is a chain, you're conflating chains with non-chains, the rule is useful when it comes to chains

godelski 6 days ago | parent [-]

I'm not disagreeing with you. You understand that, right?

The parent was talking about stringing together inferences. My argument *was how you string them together matters*. That's all. I said "context matters."

I tried to reiterate this in my previous comment. So let's try one more time. Again, I'm not going to argue you're wrong. I'm going to argue that more context is needed to determine if likelihood increases or decreases. I need to stress this before moving on.

Let's go one more comment back, when I'm asking you if you're sure that this doesn't apply to the Bayesian case too. My point here was that, again, context matters. Are these dependent or independent? My whole point is that we don't know which direction things will go in without additional context. I __am not__ making the point that it always gets better like in the Bayesian example. The Bayesian case was _an example_. I also gave an example for the other case. So why focus on one of these and ignore the other?

  > ∘ doesn't tell you if something is a chain
∘ is the composition operator (at least in this context and you also interpreted it that way). So yes, yes it does. It is the act of chaining together functions. Hell, we even have "the chain rule" for this. Go look at the wiki if you don't believe me, or any calculus book. You can go into more math and you'll see the language change to use maps to specify the transition process.

  >  It's not any kind of combining, it's A->B->C->D combining.
Yes, yes it does. The *events* are independent but the *states* are dependent. Each test does not depend on the previous test, making the tests independent, but our marginal is! Hell, you see this in basic Markov Chains too. The decision process does not depend on other nodes in the chain but the state does. If you want to draw our Bayesian example as a chain you can do so. It's going to be really fucking big because you're going to need to calculate all potential outcomes making it both infinitely wide and infinitely deep, but you can. The inference process allows us to skip all those computations and lets us focus on only performing calculations for states we transition into.

Just ask yourself, how did you get to state B? *You drew arrows for a reason*. But arrows only tell us about a transition occurring, they do not tell us about that transition process. They lack context.

  > you're conflating chains with non-chains
No, you're being too strict in your definition of "chain". Which, brings us back to my first comment.

Look, we can still view both situations from the perspective of Markov Chains. We can speak about this with whatever language we want but if you want chains let's use something that is clearly a chain. Our classic MC is the easy case, right? Our state only depends on the previous state, right? P(x_{t}|x_{t-1}). Great, just like the Bayesian case (our state is dependent but our transition function is independent). So we can also have higher order MCs, depending on any n previous state. We can extend our transition function too. P(x_{t}|x_{t-1},...,x_0) = Q. We don't have to restrict ourselves to Q(x_{t-1}), we can do whatever the hell we want. In fact, our simple MC process is going to be equivalent to Q(x_{t-1},...,x_0) it is just that nothing ends up contributing except for that x_{t-1}. The process is still the same, but the context matters.

  >  It's not any kind of combining, it's A->B->C->D combining. ***As opposed to multiple pieces that each independently imply D.***
This tells me you drew your chain wrong. If multiple things are each contributing to D independently then that is not A->B->C->D (or as you wrote the first time: `A->B, B->C, C->D`, which is equivalent!) you instead should have written something like A -> C <- B. Or using all 4 letters

       B
       |
       v
  A -> D <- C
These are completely different things! This is not a sequential process. This is not (strictly) composition.

And yet, again, we still do not know if these are decreasing. They will decrease if A,B,C,D ∈ ℙ AND our transition functions are multiplicative (∏ x_i < x_j ∀ j ; where x_i ∈ ℙ), but this will not happen if the transition function is additive (∑ x_i ≥ x_j ∀ j ; where x_i ∈ ℙ)
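
To make that last contrast concrete, a toy sketch (illustrative numbers; I'm swapping in a noisy-OR combination for the "additive" case, since a raw sum of probabilities can exceed 1):

  xs = [0.9, 0.8, 0.7]

  # Multiplicative combination: always smaller than any single factor.
  mult = 1.0
  for x in xs:
      mult *= x
  print(f"{mult:.3f}")          # 0.504

  # Noisy-OR combination: never smaller than the largest single factor.
  all_fail = 1.0
  for x in xs:
      all_fail *= (1 - x)
  print(f"{1 - all_fail:.3f}")  # 0.994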

We are still entirely dependent upon context.

Now, we're talking about LLMs, right? Your conversation (and CoT) is much closer to the Bayesian case than our causal DAG with dependence. Yes, the messages in the conversation transition us through states, but the generation is independent. The prompt and context lengthen, but this is not the same thing as the events being dependent. The LLM response is an independent event. Like the BI case the state has changed, but the generation event is identical (i.e. independent). We don't care how we got to the current state! You don't need to have the conversation with the LLM.

Every inference from the LLM is independent, even if the state isn't. The inference only depends on the tokens currently in context. Assuming you turn on deterministic mode (setting seeds identically), you could generate an identical output by passing the conversation (and properly formatting) into a brand new fresh prompt. That shows that the dependence is on state, not inference. Just like our Bayesian example you'd generate the same output if you start from the same state. The independence is because we don't care how we got to that state, only that we are at that state (same with simple MCs).

There are added complexities that can change this but we can't go there if we can't get to this place first. We'd need to have this clear before we can add complexities like memory and MoEs because the answer only gets more nuanced.

So again, our context really matters here and the whole conversation is about how these subtleties matter. The question was whether those errors compound. I hope you see that it's not so simple to answer. *Personally*, I'm pretty confident they will in current LLMs, because they rely far too heavily on their prompting (it'll give you incorrect answers if you prime it that way, despite being able to give correct answers with better prompting), but this isn't a necessary condition now, is it?

TLDR: We can't determine if likelihood increases or decreases without additional context

Dylan16807 6 days ago | parent [-]

I'll try to keep this simple.

> I'm not disagreeing with you. You understand that, right?

We disagree about whether context can make a difference, right?

> The parent was talking about stringing together inferences. My argument was how you string them together matters. That's all. I said "context matters."

> TLDR: We can't determine if likelihood increases or decreases without additional context

The situations you describe where inference acts differently do not fall under the "stringing together"/"chaining" they were originally talking about. Context never makes their original statement untrue. Chaining always makes evidence weaker.

To be extra clear, it's not about whether the evidence pushes your result number up or down, it's that the likelihood of the evidence itself being correct drops.

> It is the act of chaining together functions.

They were not talking about whether something is composition or not. When they said "string" and "chain" they were talking about a sequence of inferences where each one leads to the next one.

Composition can be used in a wide variety of contexts. You need context to know if composition weakens or strengthens arguments. But you do not need context to know if stringing/chaining weakens or strengthens.

> No, you're being too strict in your definition of "chain".

No, you're being way too loose.

> This tells me you drew your chain wrong. If multiple things are each contributing to D independently then that is not A->B->C->D

??? Of course those are different. That's why I wrote "as opposed to".

> I also gave an example for the other case. So why focus on one of these and ignore the other?

I'm focused on the one you called a "counter example" because I'm arguing it's not an example.

If you specifically want me to address "If these are being multiplied, then yes, this is going to decrease, as xy < x and xy < y for every x,y < 1." then yes, that's correct. I never doubted your math, and everyone agrees about that one.

TL;DR:

At this point I'm mostly sure we're only disagreeing about the definition of stringing/chaining? If yes, oops sorry I didn't mean to argue so much about definitions. If not, then can you give me an example of something I would call a chain where adding a step increases the probability the evidence is correct?

And I have no idea why you're talking about LLMs.

godelski 6 days ago | parent [-]

  > I'm mostly sure we're only disagreeing about the definition of stringing/chaining? 
Correct.

  > No, you're being way too loose.
Okay, instead of just making claims and for me to trust you, go point to something concrete. I've even tried to google, but despite my years of study in statistics, metric theory, and even mathematical logic I'm at a loss in finding your definition.

I'm aware of the Chain Rule of Probability, but that isn't the only place you'll find the term "chain" in statistics. Hell, the calculus Chain Rule is still used there too! So forgive me for being flustered, but you are literally arguing to me that a Markov Chain isn't a chain. Maybe I'm having a stroke, but I'm pretty sure the word "chain" is in Markov Chain.

Dylan16807 6 days ago | parent [-]

> Okay, instead of just making claims and for me to trust you, go point to something concrete. I've even tried to google, but despite my years of study in statistics, metric theory, and even mathematical logic I'm at a loss in finding your definition.

Let's look again at what we're talking about:

>>> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.

>> As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error.

> I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.

> Edit: Couldn't find the article, but AI referenced Baysian "Chain of reasoning fallacy".

The only term in there you could google is "tolerance stackup". The rest is people making ad-hoc descriptions of things. Except for "Chain of reasoning fallacy" which is a fake term. So I'm not surprised you didn't find anything in google, and I can't provide you anything from google. There is nothing "concrete" to ask for when it comes to some guy's ad-hoc description, you just have to read it and do your best.

And everything I said was referring back to those posts, primarily the last one by robocat. I was not introducing anything new when I used the terms "string" and "chain". I was not referring to any scientific definitions. I was only talking about the concept described by those three posts.

Looking back at those posts, I will confidently state that the concept they were talking about does not include markov chains. You're not having a stroke, it's just a coincidence that the word "chain" can be used to mean multiple things.

godelski 6 days ago | parent [-]

I googled YOUR terms. And if you read my messages you'd notice that I'm not a novice when it comes to math. Hell, you should have gotten that from my very first comment. I was never questioning if I had a stroke, I was questioning your literacy.

  > I was not referring to any scientific definitions.
Yet, you confidently argued against ones that were stated.

If you're going to speak out your ass, at least have the decency to let everyone know first.

Dylan16807 6 days ago | parent [-]

They were never my terms. They were the terms from the people that were having a nice conversation before you interrupted.

You told them they were wrong, that it could go either way.

That's not true.

What they were talking about cannot go either way.

You were never talking about the same thing as them. I gave you the benefit of the doubt by thinking you were trying to talk about the same thing as them. Apparently I shouldn't have.

You can't win this on definitions. They were talking about a thing without using formal definitions, and you replied to them with your own unrelated talk, as if it was what they meant. No. You don't get to change what they meant.

That's why I argued against your definition. Your definition is lovely in some other conversation. Your definition is not what they meant, and cannot override what they meant.

wombatpm 7 days ago | parent | prev | next [-]

Can’t you improve things if you can calibrate with a known good vampire? You’d think NIST or the CDC would have one locked in a basement somewhere.

godelski 7 days ago | parent | next [-]

IDK, probably? I'm just trying to say that iterative inference doesn't strictly mean decreasing likelihood.

I'm not a virologist or whoever designs these kinds of medical tests. I don't even know the right word to describe the profession lol. But the question is orthogonal to what's being discussed here. I'm only guessing "probably" because usually having a good example helps in experimental design. But then again, why wouldn't the original test that we're using have done that already? Wouldn't that be how you get that 95% accurate test?

I can't tell you the biology stuff, I can just answer math and ML stuff and even then only so much.

weard_beard 7 days ago | parent | prev | next [-]

GPT6 would come faster but we ran out of Cassandra blood.

ethbr1 7 days ago | parent | prev [-]

The thought of a BIPM Reference Vampire made me chuckle.

tintor 7 days ago | parent | prev [-]

Assuming your vampire tests are independent.

godelski 7 days ago | parent [-]

Correct. And there's a lot of other assumptions. I did make a specific note that it was a simplified and illustrative example. And yes, in the real world I'd warn about being careful when making i.i.d. assumptions, since these assumptions are made far more than people realize.

7 days ago | parent | prev [-]
[deleted]
to11mtm 7 days ago | parent | prev | next [-]

I like this analogy.

I think of a bike's shifting systems; better shifters, better housings, better derailleur, or better chainrings/cogs can each 'improve' things.

I suppose where that becomes relevant to here, is that you can have very fancy parts on various ends but if there's a piece in the middle that's wrong you're still gonna get shit results.

dylan604 7 days ago | parent [-]

You're only as strong as the weakest link.

Your SCSI devices are only as fast as the slowest device in the chain.

I don't need to be faster than the bear, I only have to be faster than you.

jandrese 7 days ago | parent [-]

> Your SCSI devices are only as fast as the slowest device in the chain.

There are not many forums where you would see this analogy.

guerrilla 7 days ago | parent | prev [-]

This is what I hate about real life electronics. Everything is nice on paper, but physics sucks.

godelski 7 days ago | parent [-]

  > Everything is nice on paper
I think the reason this is true is mostly because of how people do things "on paper". We can get much more accurate with "on paper" modeling, but the amount of work increases very fast. So it tends to be much easier to just calculate things as if they are spherical chickens in a vacuum and account for error than it is to calculate including things like geometry, drag, resistance, and all that other fun jazz (which you will still also need to account for error/uncertainty, though this can now be smaller).

Which I think at the end of the day means the important lesson is more about how simple explanations can be good approximations that get us most of the way there, but the details and nuances shouldn't be so easily dismissed. With this framing we can choose how we pick our battles. Is it cheaper/easier/faster to run a very accurate sim, or cheaper/easier/faster to iterate in physical space?

godelski 8 days ago | parent | prev | next [-]

  > I don’t think it’s just (or even particularly) bad axioms
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles" and it is a pretty good indication that they will not be. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter of factly and follow from there. If you want to really do this you start simple, attack your own assumptions, reform, build, attack, and repeat.

This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.

  >> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
This is why the OP is seeing this behavior. Because the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into those cans of worms but they will be aware of them, and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and you have exponentially increasing computation as you converge on accuracy. These are strong indications since it'll suggest whether they care more about the right answer or about being right. You also don't have to be very smart to detect this.
joe_the_user 7 days ago | parent [-]

> IME most people aren't very good at building axioms.

It seems you implying that some people are good building good axiom systems for the real world. I disagree. There are a few situations in the world where you have generalities so close to complete that you can use simple logic on them. But for the messy parts of the real world, there simply is no set of logical claims which can provide anything like certainty no matter how "good" someone is at "axiom creation".

godelski 7 days ago | parent [-]

I don't even know what you're arguing.

  > you implying that some people are good building good axiom systems
How do you go from "most people aren't very good" to "this implies some people are really good"? First, that is just a really weird interpretation of how people speak (btw, "you're" not "you" ;) because this is nicer and going to be received better than "making axioms is hard and people are shit at it." Second, you've assumed a binary condition. Here's an example. "Most people aren't very good at programming." This is an objectively true statement, right?[0] I'll also make the claim that no one is a good programmer, but some programmers are better than others. There's no contradiction in those two claims, even if you don't believe the latter is true.

Now, there are some pretty good axiom systems. ZF and ZFC seem to be working pretty well. There are others too, and they are used for pretty complex stuff. They all work at least for "simple logic."

But then again, you probably weren't thinking of things like ZFC. But hey, that was kinda my entire point.

  > there simply is no set of logical claims which can provide anything like certainty no matter how "good" someone is at "axiom creation".
 
I agree. I'd hope I agree considering my username... But you've jumped to a much stronger statement. I hope we both agree that just because there are things we can't prove that this doesn't mean there aren't things we can prove. Similarly I hope we agree that if we couldn't prove anything to absolute certainty that this doesn't mean we can't prove things to an incredibly high level of certainty or that we can't prove something is more right than something else.

[0] Most people don't even know how to write a program. Well... maybe everyone can write a Perl program but let's not get into semantics.

joe_the_user 7 days ago | parent | next [-]

I think I misunderstood: you were talking of axiomatization of mathematical or related systems.

The original discussion was about the formulation of "axioms" about the real world ("the bus is always X minutes late" or more elaborate stuff). I suppose I should have considered that, with your username, you would have considered the statement in terms of the formulation of mathematical axioms.

But still, I misunderstood you and you misunderstood me.

godelski 7 days ago | parent [-]

  > you were talking of axiomatization of mathematical or related systems.
Why do you think these are so different? Math is just a language in which we are able to formalize abstraction. Sure, it is pedantic as fuck, but that doesn't make it "not real world". If you want to talk about the bus always being late you just do this distributionally. Probabilities are our formalization around uncertainty.

We're talking about "rationalist" cults, axioms, logic, and "from first principles", I don't think using a formal language around this stuff is that much of a leap, if any. (Also, not expecting you to notice my username lol. But I did mention it because after the fact it would make more sense and serve as a hint to where I'm approaching this from).

joe_the_user 6 days ago | parent [-]

> Why do you think these are so different?

Because "reality" doesn't have "atomic", certain, etc operations? Also, it's notable that since most reasonings about the real world are approximate, the law of excluded middle is much less likely to apply.

> If you want to talk about the bus always being late you just do this distributionally. Probabilities are our formalization around uncertainty.

Ah, but you can't be certain that you're dealing with a given distribution, not outside the quantum realm. You can talk about, and roughly model, real-world phenomena with second-order or higher kinds of statements. But you can't just use axioms.

> We're talking about "rationalist" cults, axioms, logic, and "from first principles", I don't think using a formal language around this stuff is that much of a leap, if any.

Sure, this group used (improperly) all sorts of logical reasoning, and so one might well use formal language to describe their (less than useful) beliefs. But this discussion began with the point someone made that their use of axiomatic reasoning indeed led to less than useful outcomes.

godelski 6 days ago | parent [-]

  > Because "reality" doesn't have "atomic", certain, etc operations?
That's not a requirement. The axioms are for our modeling, not reality.

  > but you can't be certain that you're dealing with a given distribution, not outside the quantum realm.
I guess I'll never understand why non-physicists want to talk so confidently about physics. Especially quantum mechanics[0]. You can get through Griffiths with mostly algebra and some calculus. Group theory is a big plus, but not necessary. I also suggest having a stiff drink on hand. Sometimes you'll need to just shut up and do the math. Don't worry, it'll only be more confusing years later if you get to Messiah.

[0] https://xkcd.com/451/

Dylan16807 7 days ago | parent | prev [-]

If you mean nobody is good at something, just say that.

Saying most people aren't good at it DOES imply that some are good at it.

guerrilla 7 days ago | parent | prev | next [-]

> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.

This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.

You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.

BeFlatXIII 8 days ago | parent | prev | next [-]

> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.

Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.

whatevertrevor 7 days ago | parent | next [-]

That conclusion presupposes that rationality and empiricism are at odds or mutually incompatible somehow. Any rational position worth listening to, about any testable hypothesis, goes hand in hand with empirical thinking.

guerrilla 7 days ago | parent [-]

In traditional philosophy, rationalism and empiricism are at odds; they are essentially diametrically opposed. Rationalism prioritizes a priori reasoning while empiricism prioritizes a posteriori reasoning. You can prioritize both equally but that is neither rationalism nor empiricism in the traditional terminology. The current rationalist movement has no relation to that original rationalist movement, so the words don't actually mean the same thing. In fact, the majority of participants in the current movement seem ignorant of the historical dispute and its implications, hence the misuse of the word.

BlueTemplar 7 days ago | parent | next [-]

Yeah, Stanford has a good recap:

https://plato.stanford.edu/entries/rationalism-empiricism/

(Note also how the context is French vs British, and the French basically lost with Napoleon, so the current "rationalists" seem to be more likely to be heirs to empiricism instead.)

whatevertrevor 7 days ago | parent | prev [-]

Thank you for clarifying.

That does compute with what I thought the "Rationalist" movement as covered by the article was about. I didn't peg them as pure a priori thinkers as you put it. I suppose my comment still holds, assuming the rationalist in this context refers to the version of "Rationalism" being discussed in the article as opposed to the traditional one.

om8 7 days ago | parent | prev | next [-]

Good rationalism includes empiricism though

ehmrb 8 days ago | parent | prev [-]

[dead]

danaris 8 days ago | parent | prev | next [-]

> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.

Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.

kergonath 8 days ago | parent | next [-]

Another annoying one is the simulation theory group. They know just enough about Physics to build sophisticated mental constructs without understanding how flimsy the foundations are or how their logical steps are actually unproven hypotheses.

JohnMakin 8 days ago | parent [-]

Agreed. This one is especially annoying to me and dear to my heart, because I enjoy discussing the philosophy behind this, but it devolves into weird discussions and conclusions fairly quickly without much effort at all. I particularly enjoy the tenets of certain sects of buddhism and how they view these things, but you'll get a lot of people that are doing a really pseudo-intellectual version of the Matrix where they are the main character.

BaseBaal 7 days ago | parent [-]

Which sects of Buddhism? Just curious to read further about them.

spopejoy 7 days ago | parent | prev [-]

You might have just explained the phenomenon of AI doomsayers overlapping with EA/rat types, which I otherwise found inexplicable. EA/Rs seem kind of appallingly positivist otherwise.

danaris 6 days ago | parent [-]

I mean, that's also because of their mutual association with Eliezer Yudkowsky, who is (AIUI) a believer in the Singularity, as well as being one of the main wellsprings of "Rationalist" philosophy.

tibbar 8 days ago | parent | prev | next [-]

Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through. 'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.

bearl 7 days ago | parent [-]

And “always invert”! A related mungerism.

more_corn 7 days ago | parent [-]

I always get weird looks when I talk about killing as many pilots as possible. I need a new example of the always invert model of problem solving.

analog31 8 days ago | parent | prev | next [-]

Perhaps part of being rational, as opposed to rationalist, is having a sense of when to override the conclusions of seemingly logical arguments.

1attice 7 days ago | parent [-]

In philosophy grad school, we described this as 'being reasonable' as opposed to 'being rational'.

That said, big-R Rationalism (the Lesswrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little bit when we come into contact with these groups (who are nevertheless chockablock with fascinating personalities and compelling aesthetics.)

From my perspective (and I have only glancing contact,) these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.

Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.

While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.

The other codesmell these big-R rationalist groups have for me, and that which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc, I wonder if they necessarily furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.

And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.

In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.

The Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.

throw4847285 7 days ago | parent | next [-]

I actually think that the fact that rationalists use the term "steel manning" betrays a lack of charity.

If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.

Eliezer 7 days ago | parent | next [-]

I have tried to tell my legions of fanatic brainwashed adherents exactly this, and they have refused to listen to me because the wrong way is more fun for them.

https://x.com/ESYudkowsky/status/1075854951996256256

Dylan16807 7 days ago | parent | prev | next [-]

Listening to other viewpoints is hard. Restating is a good tool to improve listening and understanding. I don't agree with this criticism at all, since that "prodigious intellect" bit isn't inherent to the term.

throw4847285 7 days ago | parent [-]

I was being snarky, but I think steelmanning does have one major flaw.

By restating the argument in terms that are most convincing to you, you may already be warping the conclusions of your interlocutor to fit what you want them to be saying. Charity is, "I will assume this person is intelligent and overlook any mistakes in order to try and understand what they are actually communicating." Steelmanning is "I can make their case for them, better than they could."

Of course this is downstream of the core issue, and the reason why steelmanning was invented in the first place. Namely, charity breaks down on the internet. Steelmanning is the more individualistic version of charity. It is the responsibility of people as individuals, not a norm that can be enforced by an institution or community.

vintermann 7 days ago | parent | next [-]

One of the most annoying habits of Rationalists, and something that annoyed me with plenty of people online before Yudkowsky's brand was even a thing, is the assumption that they're much smarter than almost everyone else. If that is your true core belief, the one that will never be shaken, then of course you're not going to waste time trying to understand the nuances of the arguments of some pious medieval peasant.

Dylan16807 7 days ago | parent | prev [-]

For mistakes that aren't just nitpicks, for the most part you can't overlook them without something to fix them with. And ideally this fixing should be collaborative, figuring out if that actually is what they mean. It's definitely bad to think you simply know better or are better at arguing, but the opposite end of leaving seeming-mistakes alone doesn't lead to a good resolution either.

1attice 7 days ago | parent | prev [-]

Just so. I hate this term, and for essentially this reason, but it has undeniable currency right now; I was writing to be understood.

NoGravitas 7 days ago | parent | prev | next [-]

> While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.

I suspect this is because consequentialism is the only meta-ethical framework that has any leg to stand on other than "because I said so". That makes it very attractive. The problem is you also can't build anything useful on top of it, because if you try to quantify consequences, and do math on them, you end up with the Repugnant Conclusion or worse. And in practice - in Effective Altruism/Longtermism, for example - the use of arbitrarily big numbers lets you endorse the Very Repugnant Conclusion while patting yourself on the back for it.

rendx 7 days ago | parent | prev | next [-]

> to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.

Well put, thanks!

morpheos137 7 days ago | parent | prev [-]

I am interested in your journey from philosophy to coding.

MajimasEyepatch 8 days ago | parent | prev | next [-]

I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.

- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.

- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.

- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).

xg15 8 days ago | parent | next [-]

The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?

twic 8 days ago | parent | next [-]

At some point in the future, there won't be more people who will live in the future than live in the present, at which point you are allowed to improve conditions today. Of course, by that point the human race is nearly finished, but hey.

That said, if they really thought hard about this problem, they would have come to a different conclusion:

https://theconversation.com/solve-suffering-by-blowing-up-th...

xg15 8 days ago | parent | next [-]

Some time after we've colonized half the observable universe. Got it.

imtringued 7 days ago | parent [-]

Actually, you could make the case that the population won't grow over the next thousand years, maybe even ten thousand years, but that's the short term and therefore unimportant.

(I'm not a longtermist)

xg15 7 days ago | parent [-]

Not on earth, but my understanding was that space colonization was a big part of their plan.

8 days ago | parent | prev [-]
[deleted]
rawgabbit 8 days ago | parent | prev | next [-]

To me it is disguised way of saying the ends justify the means. Sure, we murder a few people today but think of the utopian paradise we are building for the future.

cogman10 7 days ago | parent [-]

From my observation, that "building the future" isn't something any of them are actually doing. Instead, the concept that "we might someday do something good with the wealth and power we accrue" seems to be the thought that allows the pillaging. It's a way to feel morally superior without actually doing anything morally superior.

Ma8ee 7 days ago | parent | prev | next [-]

A bit of longtermism wouldn’t be so bad. We could sacrifice the convenience of burning fossil fuels today for our descendants to have an inhabitable planet.

NoGravitas 7 days ago | parent [-]

But that's the great thing about Longtermism. As long as a catastrophe is not going to lead to human extinction or otherwise specifically prevent the Singularity, it's not an X-Risk that you need to be concerned about. So AI alignment is an X-Risk we need to work on, but global warming isn't, so we can keep burning as much fossil fuel as we want. In fact, we need to burn more of them in order to produce the Singularity. The misery of a few billion present/near-future people doesn't matter compared to the happiness of sextillions of future post-humans.

vharuck 8 days ago | parent | prev | next [-]

Zeno's poverty

to11mtm 7 days ago | parent | prev | next [-]

Well, there's a balance to be had. Do the most good you can while still being able to survive the rat race.

However, people are bad at that.

I'll give an interesting example.

Hybrid Cars. Modern proper HEVs[0] usually benefit their owners, both by virtue of better fuel economy as well as, in most cases, being overall more reliable than a normal car.

And, they are better on CO2 emissions and lower our oil consumption.

And yet most carmakers as well as consumers have been very slow to adopt. On the consumer side we are finally at the point where we have hybrid trucks that get 36-40 MPG, capable of towing 4000 pounds or hauling over 1000 pounds in the bed [1]; hybrid minivans capable of 35 MPG for transporting groups of people; hybrid sedans getting 50+; and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it has to get here.

The main 'misery' you experience at that point, is that you're driving the same car as a lot of other people and it's not as exciting [2] as something with more power than most people know what to do with.

And hell, as they say in investing, sometimes the market can stay irrational longer than you can stay solvent. E.g., was it truly worth it to Hydro-Quebec to sit on LiFePO4 patents the way they did vs just figuring out licensing terms that got them a little bit of money to then properly accelerate adoption of Hybrids/EVs/etc?

[0] - By this I mean Something like Toyota's HSD style setup used by Ford and Subaru, or Honda or Hyundai/Kia's setup where there's still a more normal transmission involved.

[1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.

[2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...

riku_iki 6 days ago | parent | next [-]

> [2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...

many hybrids are already way more exciting than a regular ICE, because they provide more torque, and many consumers buy a hybrid for this reason.

7 days ago | parent | prev | next [-]
[deleted]
BlueTemplar 7 days ago | parent | prev [-]

Not that these technologies don't have anything to bring, but any discussion that still presupposes that cars/trucks(/planes) (as we know them) still have a future is (mostly) a waste of time.

P.S.: The article mentions the "normal error-checking processes of society"... but what makes them so sure cults aren't part of them?

It's not like society is particularly good about it either, or immune from groupthink (see the issue above) - and who do you think is more likely to kick-start a strong enough alternative?

(Or are they just sad about all the failures? But it's questionable that the "process" can work (with all its vivacity) without the "failures"...)

vlowther 8 days ago | parent | prev | next [-]

"I came up with a step-by-step plan to achieve World Peace, and now I am on a government watchlist!"

NoGravitas 7 days ago | parent | prev [-]

It goes along with the "taking ideas seriously" part of [R]ationalism. They committed to the idea of maximizing expected quantifiable utility, and imagined scenarios with big enough numbers (of future population) that the probability of the big-number-future coming to pass didn't matter anymore. Normal people stop taking an idea seriously once it's clearly a fantasy, but [R]ationalists can't do that if the fantasy is both technically possible and involves big enough imagined numbers to overwhelm its probability, because of their commitment to "shut up and calculate"'

human_person 7 days ago | parent | prev | next [-]

"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."

Has always really bothered me because it assumes that there are no negative impacts of the work you did to get the money. If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).

to11mtm 7 days ago | parent | next [-]

> If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).

You kinda summed up a lot of the world post industrial revolution there, at least as far as stuff like toxic waste (Superfund, anyone?) and climate change. I mean, for goodness sake, let's just think about TEL and how they knew ethanol could work but it just wasn't 'patentable'. [0] Or the "We don't even know the dollar amount because we don't have a workable solution" problem of PFAS.

[0] - I still find it shameful that a university is named after the man who enabled this to happen.

Nursie 7 days ago | parent | prev | next [-]

And not just that, but the very fact that someone considers it valid to try to accumulate billions of dollars so they can have an outsized influence on the direction of society, seems somewhat questionable.

Even with 'good' intentions, there is the implied statement that your ideas are better than everyone else's and so should be pushed like that. The whole thing is a self-satisfied ego-trip.

throawaywpg 7 days ago | parent [-]

Well, it's easy to do good. Or, it's easy to plan on doing good, once your multi-decade plan to become a billionaire comes to fruition.

vintermann 6 days ago | parent | prev [-]

There's a hidden (or not so hidden) assumption in the EA's "calculations" that capitalism is great and climate change isn't a big deal. (You pretty much have to believe the latter to believe the former).

7 days ago | parent | prev [-]
[deleted]
DoctorOetker 7 days ago | parent | prev | next [-]

Would you consider the formal verification community to be "rationalists"?

eru 6 days ago | parent | prev | next [-]

> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.

I have observed no such correlation of intellectual humility.

kergonath 8 days ago | parent | prev | next [-]

> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.

I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.

ramenbytes 7 days ago | parent [-]

Deep Space 9 had an episode dealing with something similar. Superintelligent beings determine that a situation is hopeless and act accordingly. The normal beings take issue with the actions of the Superintelligents. The normal beings turn out to be right.

emmelaich 7 days ago | parent | prev | next [-]

Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".

I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.

abtinf 8 days ago | parent | prev [-]

> non-rationalists do at least benefit from some intellectual humility

The Islamists who took out the World Trade Center don’t strike me as particularly intellectually humble.

If you reject reason, you are only left with force.

prisenco 8 days ago | parent | next [-]

Are you so sure the 9/11 hijackers rejected reason?

Why Are So Many Terrorists Engineers?

https://archive.is/XA4zb

Self-described rationalists can and often do rationalize acts and beliefs that seem baldly irrational to others.

cogman10 7 days ago | parent [-]

Here's the thing: the goals of the terrorists weren't irrational.

People confuse "rational" with "moral". Those aren't the same thing. Given a bad goal, you can do something immoral in a perfectly rational way.

For example, if you value your life above all others, then it would be perfectly rational to slaughter an orphanage if a more powerful entity made that your only choice for survival. Morally bad, rationally correct.

vintermann 6 days ago | parent [-]

Yes, there's no such thing as rationality except rationality towards a goal.

But Big R Rationalists assume that if we were rational enough (in an exotic, goal-independent way nebulously called intelligence), we'd all agree on the goals.

So basically there is no morality. No right or wrong, only smart or stupid, and guess who they think are the smart ones.

And this isn't an original philosophy at all. Plato certainly believed it (and if you believe Plato, Socrates too). Norse pagans believed it. And everyone who believes it seems to sink into mystery religion, where you can get access to the secret wisdom if you talk to the right guy who's in the know.

morleytj 8 days ago | parent | prev | next [-]

I now feel the need to comment that this thread does illustrate an issue I have with the naming of the philosophical/internet community of rationalism.

One can very clearly be a rational individual or an individual who practices reason and not associate with the internet community of rationalism. The median member of the group defined as "not being part of the internet-organized movement of rationalism and not reading lesswrong posts" is not "religious extremist striking the world trade center and committing an atrocious act of terrorism", it's "random person on the street."

And to preempt a specific response some may make to this, yes, the thread here is talking about rationalism as discussed in the blog post above as organized around Yudkowsky or Slate Star Codex, and not the rationalist movement of like, Spinoza and company. Very different things philosophically.

montefischer 8 days ago | parent | prev [-]

Islamic fundamentalism and cult rationalism are both involved in a “total commitment”, “all or nothing” type of thinking. The former is totally committed to a particular literal reading of scripture, the latter, to logical deduction from a set of chosen premises. Both modes of thinking have produced violent outcomes in the past.

Skepticism, in which no premise or truth claim is regarded as above dispute (or, that it is always permissible and even praiseworthy to suspend one’s judgment on a matter), is the better comparison with rationalism-fundamentalism. It is interesting that skepticism today is often associated with agnostic or atheist religious beliefs, but I consider many religious thinkers in history to have been skeptics par excellence when judged by the standard of their own time. E.g. William Ockham (of Ockham’s razor) was a 14C Franciscan friar (and a fascinating figure) who denied papal infallibility. I count Martin Luther as belonging to the history of skepticism as well, as I do much of the humanist movement that returned to the original Greek sources for the Bible rather than Jerome's Latin Vulgate translation.

The history of ideas is fun to read about. I am hardly an expert, but you may be interested by the history of Aristotelian rationalism, which gained prominence in the medieval west largely through the works of Averroes, a 12C Muslim philosopher who heavily favored Aristotle. In 13C, Thomas Aquinas wrote a definitive Catholic systematic theology, rejecting Averroes but embracing Aristotle. To this day, Catholic theology is still essentially Aristotelian.

praptak 7 days ago | parent | next [-]

True skepticism is rare. It's easy to be skeptical only about beliefs you dislike or at least don't care about. It's hard to approach the 100th self-professed psychic with an honest intention to truly test their claims rather than to find the easiest way to ridicule them.

throwway120385 7 days ago | parent | prev [-]

The only absolute above questioning is that there are no absolutes.

gen220 7 days ago | parent | prev | next [-]

Strongly recommend this profile in the NYer on Curtis Yarvin (who also uses "rationalism" to justify his beliefs) [0]. The section towards the end that reports on his meeting one of his supposed ideological heroes for an extended period of time is particularly illuminating.

I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively-argued they might be in their written-down form.

[0]: https://www.newyorker.com/magazine/2025/06/09/curtis-yarvin-...

trawy081225 7 days ago | parent [-]

> I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively-argued they might be in their written-down form.

Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.

wussboy 7 days ago | parent [-]

Hard disagree. People use rationality to support the beliefs they already have, not to change those beliefs. The internet allows everyone to find something that supports anything.

I do it. You do it. I think a fascinating litmus test is asking yourself this question: “When did I last change my mind about something significant?” For most people the answer is “never”. If we lived in the world you described, most people’s answers would be “relatively recently”.

trawy081225 7 days ago | parent | next [-]

That relies on two assumptions that I don't think are true at all:

1. Most people who follow these beliefs will pay attention to/care about the man behind the curtain.

2. Most people who follow these beliefs will change their mind when shown that the man behind the curtain is a charlatan.

If anything, history shows us the opposite. Even in the modern world, it's easy for people to see that other people's thought leaders are charlatans, but very difficult to see that our own are.

BlueTemplar 7 days ago | parent | prev [-]

Why wouldn't this phenomenon start with writing itself (supercharged with the printing press), heck, even with oral myths?

lordnacho 8 days ago | parent | prev | next [-]

> I immediately become suspicious of anyone who is very certain of something

Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.

But also, even when it isn't about conning you, how do people become certain of something? They ignored the evidence against whatever they are certain of.

People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.

dcminter 8 days ago | parent | next [-]

In the term "conman" the confidence in question is that of the mark, not the perpetrator.

sdwr 8 days ago | parent [-]

Isn't confidence referring to the alternate definition of trust, as in "taking you into his confidence"?

godelski 8 days ago | parent [-]

I think if you used that definition you could equally say "it is the mark that is taking the conman into [the mark's] confidence"

yunwal 7 days ago | parent | prev [-]

> how do people become certain of something?

They see the same pattern repeatedly until it becomes the only reasonable explanation? I’m certain about the theory of gravity because every time I drop an object it falls to the ground with a constant acceleration.

jpiburn 8 days ago | parent | prev | next [-]

"Cherish those who seek the truth but beware of those who find it" - Voltaire

paviva 8 days ago | parent [-]

Most likely Gide ("Croyez ceux qui cherchent la vérité, doutez de ceux qui la trouvent", "Believe those who seek Truth, doubt those who find it") and not Voltaire ;)

Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.

ctoth 8 days ago | parent | prev | next [-]

> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.

Are you certain about this?

teddyh 8 days ago | parent | next [-]

All I know is that I know nothing.

p1esk 7 days ago | parent [-]

How do you know?

teddyh 7 days ago | parent [-]

Socrates told me.

tshaddox 7 days ago | parent | prev | next [-]

Well you could be a critical rationalist and do away with the notion of "certainty" or any sort of justification or privileged source of knowledge (including "rationality").

adrianN 8 days ago | parent | prev | next [-]

Your own state of mind is one of the easiest things to be fairly certain about.

ants_everywhere 8 days ago | parent | next [-]

The fact that this is false is one of the oldest findings of research psychology

PaulHoule 8 days ago | parent [-]

Marvin Minsky wrote forcefully [1] about this in The Society of Mind and went so far as to say that trying to observe yourself (e.g. meditation) might be harmful.

Freud of course discovered a certain world of the unconscious, but untrained [2] you would certainly struggle to explain how you know sentence S is grammatical and S' is not, or what it is you do when you walk.

If you did meditation or psychoanalysis or some other practice to understand yourself better it would take years.

[1] whether or not it is true.

[2] the "scientific" explanation you'd have if you're trained may or may not be true since it can't be used to program a computer to do it

lazide 8 days ago | parent | prev [-]

said no one familiar with their own mind, ever!

8 days ago | parent | prev | next [-]
[deleted]
JohnMakin 8 days ago | parent | prev | next [-]

no

idontwantthis 8 days ago | parent | prev | next [-]

Suspicious implies uncertain. It’s not immediate rejection.

at-fates-hands 7 days ago | parent | prev [-]

Isaac Newton would like to have a word.

elictronic 7 days ago | parent [-]

I am not a big fan of alchemy, thank you though.

Animats 7 days ago | parent | prev | next [-]

Many arguments arise over the valuation of future money. See "discount function" [1]. At one extreme are the rational altruists, who rate that near 1.0; at the other are the "drill, baby, drill" people, who are much closer to 0.

The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.

[1] https://en.wikipedia.org/wiki/Discount_function
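
(Not from the comment above, just a minimal sketch of the idea it describes: exponential discounting plus a hypothetical forecast-noise term whose spread grows with the horizon. The rate and noise numbers are made up purely for illustration.)

    import math, random, statistics

    def present_value(cashflow, years, rate, noise_per_year=0.0):
        # Exponential discounting of one future cashflow, with a forecast error
        # whose standard deviation grows with the square root of the horizon.
        forecast_error = random.gauss(0.0, noise_per_year * math.sqrt(years))
        return cashflow * (1.0 + forecast_error) / (1.0 + rate) ** years

    for horizon_years in (1, 10, 50):
        draws = [present_value(100.0, horizon_years, rate=0.03, noise_per_year=0.15)
                 for _ in range(10_000)]
        mean, spread = statistics.mean(draws), statistics.stdev(draws)
        # The point estimate shrinks with the horizon, but the relative
        # uncertainty (spread / mean) keeps growing - the "noise term" the
        # comment says the discount function should account for.
        print(horizon_years, round(mean, 1), round(spread / mean, 2))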

ar-nelson 8 days ago | parent | prev | next [-]

I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.

The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.

But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!

- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.

- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)

Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.

ufmace 8 days ago | parent | next [-]

I agree. There's also the point of hardware dependence.

From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.

Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.

astrange 7 days ago | parent [-]

I think you can get a few more gigantic step functions' worth of improvement on the same hardware. For instance, LLMs don't have any kind of memory, short or long term.

socalgal2 7 days ago | parent | prev | next [-]

> it assumes that soon LLMs will gain the capability of assisting humans

No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs.

It doesn't require AI to be better than humans for AI to take over, because unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.

PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.

ar-nelson 7 days ago | parent | next [-]

> It does not assume that progress will be in LLMs

If that's the case then there's not as much reason to assume that this progress will occur now, and not years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.

> You have have 2 AIs, then 4, then 8.... then millions

The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.

Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.

> But the thought experiment doesn't seem indefensible.

The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.

Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level will be on AI research, and on the difference between the effectiveness of an organization's average and best researchers, and the impact of an AI closing that gap and having the same research effectiveness as the best humans.

But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.

Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.

doubleunplussed 7 days ago | parent [-]

> The most powerful AI we have now is strictly hardware-dependent

Of course that's the case and it always will be - the cutting edge is the cutting edge.

But the best AI you can run on your own computer is way better than the state of the art just a few years ago - progress is being made at all levels of hardware requirements, and hardware is progressing as well. We now have dedicated hardware in some of our own devices for doing AI inference - the hardware-specificity of AI doesn't mean we won't continue to improve and commoditise said hardware.

> The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability [...]

I don't think this is at all unexamined. But I think it's risky to not consider the strong possibility when we have an existence proof in ourselves of that level of intelligence, and an algorithm to get there, and no particular reason to believe we're optimal since that algorithm - evolution - did not optimise us for intelligence alone.

rsynnott 7 days ago | parent | prev [-]

> No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs

I mean, for the specific case of the 2027 doomsday prediction, it really does have to be LLMs at this point, just given the timeframes. It is true that the 'rationalist' AI doomerism thing doesn't depend on LLMs, and in fact predates transformer-based models, but for the 2027 thing, it's gotta be LLMs.

tshaddox 7 days ago | parent | prev | next [-]

> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.

Viliam1234 6 days ago | parent [-]

I agree that general intelligence is general. But increasing computation speed 1000x could still be something that is available to the machines and not to the humans, simply because electrons are faster than neurons. Also, how specifically would you 1000x increase human memory?

tshaddox 6 days ago | parent [-]

The first way we increased human memory by 1000x was with books. Now it’s mostly with computers.

Electronic AGI might have a small early advantage because it’s probably easier for them to have high-speed interfaces to computing power and memory, but I would be surprised if the innovations required to develop AGI wouldn’t also help us interface our biology with computing power and memory.

In my view this isn’t much less concerning than saying “AGI will have a huge advantage in physical strength because of powerful electric motors, hydraulics, etc.”

JKCalhoun 8 days ago | parent | prev | next [-]

An interesting point you make there — one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".

Terr_ 7 days ago | parent | next [-]

I often like to point out that Earth was already consumed by Grey Goo, and today we are hive-minds in titanic mobile megastructure-swarms of trillions of the most complex nanobots in existence (that we know of), inheritors of tactics and capabilities from a zillion years of physical and algorithmic warfare.

As we imagine the ascension of AI/robots, it may seem like we're being humble about ourselves... But I think it's actually the reverse: It's a kind of hubris elevating our ability to create over the vast amount we've inherited.

solid_fuel 7 days ago | parent [-]

To take it a little further - if you stretch the conventional definition of intelligence a bit - we already assemble ourselves into a kind of collective intelligence.

Nations, corporations, clubs, communes -- any functional group of humans is capable of observing, manipulating, and understanding our environment in ways no individual human is capable of. When we dream of hive minds and super-intelligent AI it almost feels like we are giving up on collaboration.

BlueTemplar 7 days ago | parent [-]

We can probably thank our individualist mindset for that. (Not that it's all negative.)

twic 8 days ago | parent | prev | next [-]

There's a variant of this that argues that humans are already as intelligent as it's possible to be. Because if it's possible to be more intelligent, why aren't we? And a slightly more reasonable variant that argues that we're already as intelligent as it's useful to be.

lukan 8 days ago | parent | next [-]

"Because if it's possible to be more intelligent, why aren't we?"

Because deep abstract thoughts about the nature of the universe and elaborate deep thinking were maybe not as useful while we were chasing lions and buffaloes with a spear?

We just had to be smarter than them. Which included finding out that tools were great, learning about the habits of the prey, and optimizing hunting success. Those who were smarter in that capacity had a greater chance of reproducing. Those who just excelled at abstract thinking likely did not live that long.

tshaddox 7 days ago | parent [-]

Is it just dumb luck that we're able to create knowledge about black holes, quarks, and lots of things in between which presumably had zero evolutionary benefit before a handful of generations ago?

bee_rider 7 days ago | parent | next [-]

Basically yes it is luck, in the sense that evolution is just randomness with a filter of death applied, so whatever brains we happen to have are just luck.

The brains we did end up with are really bad at creating that sort of knowledge. Almost none of us can. But we’re good at communicating, coming up with simplified models of things, and seeing how ideas interact.

We’re not universe-understanders, we’re behavior modelers and concept explainers.

tshaddox 7 days ago | parent [-]

I wasn't referring the "luck" factor of evolution, which is of course always there. I was asking whether "luck" is the reason that the cognitive capabilities which presumably were selected for also came with cognitive capabilities that almost certainly were not selected for.

My guess is that it's not dumb luck, and that what we evolved is in fact general intelligence, and that this was an "easier" way to adapt to environmental pressure than to evolve a grab bag of specific (non-general) cognitive abilities. An implication of this claim would be that we are universe-understanders (or at least that we are biologically capable of that, given the right resources and culture).

In other words, it's roughly the same answer for the question "why do washing machines have Turing complete microcontrollers in them when they only need to do a very small number of computing tasks?" At scale, once you know how to implement general (i.e. Turing-complete and programmable) computers it tends to be simpler to use them than to create purpose-built computer hardware.

lukan 7 days ago | parent | prev [-]

Evolution rewarded us for developing general intelligence. But with a very immediate practical focus and not too much specialisation.

godelski 8 days ago | parent | prev | next [-]

I don't think the logic follows here. Nor does it match evidence.

The premise is ignorant of time. It is also ignorant of the fact that we know there are a lot of things we don't know. That's all before we consider other factors, like whether there are limits and physical barriers, or many other things.

danaris 8 days ago | parent | prev [-]

While I'm deeply and fundamentally skeptical of the recursive self-improvement/singularity hypothesis, I also don't really buy this.

There are some pretty obvious ways we could improve human cognition if we had the ability to reliably edit or augment it. Better storage & recall. Lower distractibility. More working memory capacity. Hell, even extra hands for writing on more blackboards or putting up more conspiracy theory strings at a time!

I suppose it might be possible that, given the fundamental design and structure of the human brain, none of these things can be improved any further without catastrophic side effects—but since the only "designer" of its structure is evolution, I think that's extremely unlikely.

JKCalhoun 7 days ago | parent [-]

Some of your suggestions, if you don't mind my saying, seem like only modest improvements — akin to Henry Ford's quote “If I had asked people what they wanted, they would have said a faster horse.”

To your point though, an electronic machine is a different host altogether with different strengths and weaknesses.

danaris 7 days ago | parent [-]

Well, twic's comment didn't say anything about revolutionary improvements, just "maybe we're as smart as we can be".

marcosdumay 8 days ago | parent | prev [-]

Well, arguably that's exactly where we are, but machines can evolve faster.

And that's an entire new angle that the cultists are ignoring... because superintelligence may just not be very valuable.

And we don't need superintelligence for smart machines to be a problem anyway. We don't need even AGI. IMO, there's no reason to focus on that.

derefr 8 days ago | parent | next [-]

> Well, arguably that's exactly where we are

Yep; from the perspective of evolution (and more specifically, those animal species that only gain capability generationally by evolutionary adaptation of instinct), humans are the recursively self-(fitness-)improving accident.

Our species-aggregate capacity to compete for resources within the biosphere went superlinear in the middle of the previous century; and we've had to actively hit the brakes on how much of everything we take since then, handicapping ourselves. (With things like epidemic obesity and global climate change being the result of us not hitting those brakes quite hard enough.)

Insofar as a "singularity" can be defined on a per-agent basis, as the moment when something begins to change too rapidly for the given agent to ever hope to catch up with / react to new conditions — and so the agent goes from being a "player at the table" to a passive observer of what's now unfolding around them... then, from the rest of our biosphere's perspective, they've 100% already witnessed the "human singularity."

No living thing on Earth besides humans now has any comprehension of how the world has been or will be reshaped by human activity; nor can ever hope to do anything to push back against such reshaping. Every living thing on Earth other than humans, will only survive into the human future, if we humans either decide that it should survive, and act to preserve it; or if we humans just ignore the thing, and then just-so-happen to never accidentally do anything to wipe it from existence without even noticing.

Terr_ 7 days ago | parent | prev [-]

> machines can evolve faster

[Squinty Thor] "Do they though?"

I think it's valuable to challenge this popular sentiment every once-in-a-while. Sure, it's a good poetic metaphor, but when you really start comparing their "lifecycle" and change-mechanisms to the swarming biological nanobots that cover the Earth, a bunch of critical aspects just aren't there or are being done to them rather than by them.

At least for now, these machines mostly "evolve" in the same sense that fashionable textile pants "evolve".

sfink 7 days ago | parent | prev | next [-]

> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
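An illustrative sketch of that dynamic (mine, not the commenter's; the 70% figure is an arbitrary assumption): Amdahl's Law applied to a hypothetical "improve AI" pipeline where only part of the work can be sped up.

    def overall_speedup(improved_fraction, component_speedup):
        # Amdahl's Law: total speedup when only `improved_fraction` of the work
        # gets faster by `component_speedup`; the rest runs at the old speed.
        return 1.0 / ((1.0 - improved_fraction) + improved_fraction / component_speedup)

    # Suppose (hypothetically) 70% of "making AI better" is work the magic wand
    # can accelerate, and 30% is experiments, hardware, and evaluation that can't be.
    for s in (2, 10, 1000):
        print(s, round(overall_speedup(0.7, s), 2))
    # Prints roughly 1.54, 2.7, 3.33: even a 1000x speedup of the improvable part
    # caps out near 1 / (1 - 0.7) ~= 3.3x overall.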

I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.

Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.

morleytj 8 days ago | parent | prev | next [-]

The built in assumptions are always interesting to me, especially as it relates to intelligence. I find many of them (though not all), are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should initially mention that I don't think everyone in these communities believes these things, of course, but I think there's often a default set of assumptions going into conversations in these spaces that holds these axioms. These beliefs more or less seem to be as follows:

1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).

2) They believe that this factor is innate, highly biologically regulated, and a static factor about a person (Someone who is high IQ in their minds must have been a high achieving child, must be very capable as an adult, these are the baseline assumptions). There is potentially belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't, there is no feeling of it as something that could be taught or developed without pharmaceutical intervention or some other method.

3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).

These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.

More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.

Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability; it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups in the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.

doubleunplussed 7 days ago | parent [-]

I think you'll indeed find, if you were to seek out the relevant literature, that those claims are more or less true, or at least, are the currently best-supported interpretation available. So I don't think they're assumptions so much as simply current state of the science on the matter, and therefore widely accepted among those who for whatever reason have looked into it (or, more likely, inherited the information from someone they trust who has read up on it).

Interestingly, I think we're increasingly learning that although most aspects of human intelligence seem to correlate with each other (thus the "singular factor" interpretation), the grab-bag of skills this corresponds to are maybe a bit arbitrary when compared to AI. What evolution decided to optimise the hell out of in human intelligence is specific to us, and not at all the same set of skills as you get out of cranking up the number of parameters in an LLM.

Thus LLMs continuing to make atrocious mistakes of certain kinds, despite outshining humans at other tasks.

Nonetheless I do think it's correct to say that the rationalists think intelligence is a real measurable thing, and that although in humans it might be a set of skills that correlate and maybe in AIs it's a different set of skills that correlate (such that outperforming humans in IQ tests is impressive but not definitive), that therefore AI progress can be measured and it is meaningful to say "AI is smarter than humans" at some point. And that AI with better-than-human intelligence could solve a lot of problems, if of course it doesn't kill us all.

morleytj 6 days ago | parent | next [-]

My general disagreements with those axioms from my reading of the literature are around the concepts of immutability and of the belief in the almost entirely biological factor, which I don't think is well supported by current research in genetics, but that may change in the future. I think primarily I disagree about the effect sizes and composition of factors with many who hold these beliefs.

I do agree with you in that I generally have an intuition that intelligence in humans is largely defined as a set of skills that often correlate, I think one of the main areas I differ in interpretation is in the interpretation of the strength of those correlations.

doubleunplussed 5 days ago | parent [-]

I think most in the rationality community (and otherwise in the know) would not say that IQ differences are almost entirely biological - I think they'd say they're about half genetic and half environmental, but that the environmental component is hard to pin to "parenting" or anything else specific. "Non-shared environment" is the usual term.

They'd agree it's largely stable over life, after whatever childhood environmental experiences shape that "non-shared environment" bit.

This is the current state of knowledge in the field as far as I know - IQ is about half genetic, and fairly immutable after adulthood. I think you'll find the current state of the field supports this.

7 days ago | parent | prev [-]
[deleted]
jordanb 7 days ago | parent | prev | next [-]

It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley and VCs throw money at it.

tim333 8 days ago | parent | prev | next [-]

I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're already at a point where people could try telling Claude or some such to have a go, even if not quite at a point where it would work. But I imagine take-off would be very gradual. It would be constrained by available computing resources and probably only be comparable to current human researchers, and so would still take ages to get anywhere.

tempfile 8 days ago | parent [-]

I honestly am not trying to be rude when I say this, but this is exactly the sort of speculation I find problematic and that I think most people in this thread are complaining about. Being able to tell Claude to have a go has no relation at all to whether it may ever succeed, and you don't actually address any of the legitimate concerns the comment you're replying to points out. There really isn't anything in this comment but vibes.

tim333 7 days ago | parent | next [-]

I don't think it's vibes; rather, it's my thinking about the problem.

If you look at the "legitimate concerns" none are really deal breakers:

>What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

I'm willing to believe it will be slow, though maybe it won't.

>LLMs already seem to have hit a wall of diminishing returns

Who cares - there will be other algorithms

>What if there are several paths to different kinds of intelligence with their own local maxima

well maybe, maybe not

>Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?

well - you can make another one if the first does that

Those are all potential difficulties with self improvement, not reasons it will never happen. I'm happy to say it's not happening right now but do you have any solid arguments that it won't happen in the next century?

To me the arguments against sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.

doubleunplussed 7 days ago | parent | prev [-]

On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.

We have an existence proof for intelligence that can improve AI: humans.

If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.

Are people really that sceptical that AI will get to human level intelligence?

Is that an insane belief worthy of being a primary example of a community not thinking clearly?

Come on! There is a good chance AI will recursively self-improve! Those poo pooing this idea are the ones not thinking clearly.

tempfile 7 days ago | parent | next [-]

Consider that even the named phenomenon is sloppy: "recursive self improvement" does not imply "self improvement without bounds". This is the "what if you hit diminishing returns and never get past it" claim. Absolutely no justification for the jump, ever, among AI boosters.

> If AI ever gets to human-level intelligence

This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very very shaky. AI is vulnerable to this problem, because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if the capability is well-defined we may even be able to reason about how stable it is relative to how LLMs work.

For "reasoning" we categorically do not have this. There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more opportunity processing time, and this reduced performance. Same with adding extraneous details, sometimes that reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.

> Is that an insane belief worthy of being a primary example of a community not thinking clearly?

I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.

tim333 7 days ago | parent | next [-]

>"recursive self improvement" does not imply "self improvement without bounds"

I was thinking that. I mean, if you look at something like AlphaGo, it was based on human training, and then they made one (AlphaZero, I think) which learned by playing against itself and got very good - but not infinitely good, as it was still constrained by hardware. I think in chess the best human is about 2800 on the Elo scale and computers are about 3500. I imagine self-improving AI would be like that - smarter than humans but not infinitely so, and constrained by hardware.
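
(To put rough numbers on that gap, here's a quick sketch using the standard Elo expected-score formula; the 2800 and 3500 ratings are just the approximations above.)

    def expected_score(rating_a, rating_b):
        # Elo model: expected score of A against B (a draw counts as half a win).
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # A ~3500-rated engine vs a ~2800-rated human:
    print(round(expected_score(3500, 2800), 3))  # ~0.983: far stronger, but not infinitely so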

Also, just as humans still play chess even though computers are better, I imagine humans will still do the usual kinds of things even if computers get smarter.

BlueTemplar 7 days ago | parent | prev | next [-]

Also: individual ants might be quite dumb, but ant colonies do seem to be one of the smartest entities we know of.

doubleunplussed 5 days ago | parent | prev [-]

> "recursive self improvement" does not imply "self improvement without bounds"

Obviously not, but thinking that the bounds are going to lie in between where AI intelligence is now and human intelligence I think is unwarranted - as mentioned, humans are unlikely to be the peak of what's possible since evolution did not optimise us for intelligence alone.

If you think the recursive self-improvement people are arguing for improvement without bounds, I think you're simply mistaken, and it seems like you have not made a good faith effort to understand their view.

AI only needs to be somewhat smarter than humans to be very powerful; the only arguments worth having IMHO are over whether recursive self-improvement will lead to AI being a head above humans or not. Diminishing returns will happen at some point (in the extreme due to fundamental physics, if nothing sooner), but whether it happens in time to prevent AI from becoming meaningfully more powerful than humans is the relevant question.

> we do not have a definition of intelligence

This strikes me as an unserious argument to make. Some animals are clearly more intelligent than others, whether you use a shaky definition or not. Pick whatever metric of performance on intellectual tasks you like, there is such a thing as human-level performance, and humans and AIs can be compared. You can't even make your subsequent arguments about AI performance being made worse by various factors unless you acknowledge such performance is measuring something meaningful. You can't even argue against recursive self-improvement if you reject that there is anything measurable that can be improved. I think you should retract this point as it prevents you making your own arguments.

> There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure.

I'm pretty confused by this claim - whatever our difficulties defining intelligence, "resembling humans" is not it. Do you not believe there are tasks on which performance can be objectively graded beyond similarity to humans? I think it's quite easy to define tasks that we can judge the success of without being able to do it ourselves. If AI solves all the Millennium Prize Problems, that would be amazing! I don't need to have resolved all issues with a definition of intelligence to be impressed.

Anyway, is there really no evidence? AI having improved so far is not any evidence that it might continue, even a little bit? Are we really helpless to predict whether there will be any better chatbots released in the remainder of this year than we already have?

I do not think we are that helpless - if you entirely reject past trends as an indicator of future trends, and treat them as literally zero evidence at all, then this is simply faulty reasoning. Past trends are not a guarantee of future trends, but neither are they zero evidence. They are a nonzero medium amount of evidence, the strength of which depends on how long the trends have been going on and how well we understand the fundamentals driving them.

> thinking clearly is about the reasoning, not the conclusion.

And I think we have good arguments! You seem to have strong priors that the default is that machines can't reach human intelligence/performance or beyond, and you really need convincing otherwise. I think the fact that we have an existence proof in humans of human intelligence and an algorithm to get there proves it's possible. And I consider it quite unlikely that humans are the peak of intelligence/performance-on-whatever-metrics that is possible, given it's not what we were optimised for specifically.

All your arguments about why progress might slow or stop short of superhuman-levels are legitimate and can't be ruled out, and yet these things have not been limiting factors so far despite that they would have been equally valid to make these arguments any time in the past few years.

> no legitimate argument has been presented that implies the conclusion

I mean it's probabilistic, right? I'm expecting something like an 85% chance of AGI before 2040. I don't think it's guaranteed, but when you look at progress so far, and that nature gives us proof (in the form of the human brain) that it's not impossible in any fundamental way, I think that's reasonable. Reasonability arguments and extrapolations are all we have, we can't imply anything definitively.

You think what probability?

Interested in a bet?

solid_fuel 7 days ago | parent | prev [-]

> We have an existence proof for intelligence that can improve AI: humans.

I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years.

We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.

Modern brains don't work "better" than those of ancient humans, we just have more knowledge and resources to work with. If you took a modern human child and raised them in the middle ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.

----

And, if you are talking specifically about the ability to build better AI, we haven't matched human intelligence yet and there is no indication that the current LLM-heavy approach will ever get there.

doubleunplussed 5 days ago | parent | next [-]

I just mean that the existence of the human brain is proof that human-level intelligence is possible.

Yes, it took billions of years all said and done, but it shows that there are no fundamental limits that prevent this level of intelligence. It even proves it can in principle be done with a few tens of watts and a certain approximate amount of computational power.

Some used to think the first AIs would be brain uploads, for this reason. They thought we'd have the computing power and scanning techniques to scan and simulate all the neurons of a human brain before inventing any other architecture capable of coming close to the same level of intelligence. That now looks to be less likely.

Current state-of-the-art AIs still operate with less computational power than the human brain, and they are far less efficient at learning than humans are (there is a sense in which a human intelligence takes merely years to develop - i.e. childhood - rather than billions; this is also a relevant comparison to make). Humans can learn from far fewer examples than current AI can.

So we've got some catching up to do - but humans prove it's possible.

BlueTemplar 7 days ago | parent | prev [-]

Culture is certainly one aspect of recursive self-improvement.

Somewhat akin to 'software' if you will.

PaulHoule 8 days ago | parent | prev | next [-]

Yeah, to compare Yudkowsky to Hubbard I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!" and I'm scratching my head and it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2], but hey, people fell for Carlos Castaneda who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school and you would have heard in school too if you went to school or would have read if you read a lot.

I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not like it means you are stupid but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.

[1] some might even say a hypnotic communication pattern inspired by Milton Erickson

[2] you'd think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email that is riddled with typos, which is meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.

[3] minus any direct mention of Kant

[4] thus many of the marginalized, neurodivergent, transgender who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution as opposed to seek opportunity

nemomarx 7 days ago | parent | next [-]

I thought sequences was the blog posts and the fanfic was kept separately, to nitpick

NoGravitas 7 days ago | parent | prev [-]

> like Dianetics, Sequences wouldn't be appealing if you were at all well read.

That would require an education in the humanities, which is low status.

PaulHoule 7 days ago | parent [-]

Well, there is "well read" and "educated" which aren't the same thing. I started reading when I was three and checked out ten books a week from the public library throughout my youth. I was well read in psychology, philosophy and such long before I went to college -- I got a PhD in a STEM field so I didn't read a lot of that stuff for classes [1] I still read a lot of that stuff.

Perhaps the reason why Stanford and Oxford students are impressed by that stuff is that they are educated but not well read, which has a few angles: STEM privileged over the humanities, the rise of Dyslexia culture, and a shocking level of incuriosity in "nepo baby" professors [2] who are drawn to the profession not because of a thirst for knowledge but because it's the family business.

[1] did get an introduction to https://en.wikipedia.org/wiki/Rogerian_argument and took a relatively "woke" (in a good way) Shakespeare class such that https://en.wikipedia.org/wiki/Troilus_and_Cressida is my favorite

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC9755046/

doubleunplussed 7 days ago | parent | prev | next [-]

I'm surprised not to see much pushback on your point here, so I'll provide my own.

We have an existence proof for intelligence that can improve AI: humans can do this right now.

Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?

Do you not think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening, our LLMs are a long way from the pure text prediction engines of four or five years ago.

There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.

So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.

On your specific points:

> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?

Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios, it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. Fast and slow take-off via recursive self-improvement are both still recursive self-improvement, so if you only want to criticise the fast take-off view, you should say so more precisely.

I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:

> 2. LLMs already seem to have hit a wall of diminishing returns

This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of tasks models can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.

Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype with an actual slowing of progress. AI capabilities are continuing to increase; being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. That is a fact about our psychology. If we look at actual metrics (ones without a natural cap; evals that max out at 100% are not good for measuring long-run progress), we see steady exponential progress.
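To make "steady exponential progress" concrete, here's a toy sketch (my own, with illustrative numbers: the roughly seven-month doubling time is in the ballpark of what METR reports, and the one-hour starting horizon is just an assumption):

  # Toy sketch (illustrative numbers only): if the task horizon doubles every
  # ~7 months, steady exponential growth feels unimpressive month to month
  # but compounds quickly.
  horizon_minutes = 60.0    # assume a model handles ~1-hour tasks today
  doubling_months = 7.0     # assumed doubling time

  for months in range(0, 43, 7):
      h = horizon_minutes * 2 ** (months / doubling_months)
      print(f"month {months:2d}: ~{h / 60:6.1f}-hour tasks")

Nothing about any single seven-month window looks dramatic, but over the whole run the horizon goes from one hour to 64 hours.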

> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.

> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?

It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.

> Knowing Yudowsky I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory

Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.

Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.

So, again, I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve" already has strong priors in its favour, such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You would need to argue why strong evidence should be required to overturn a default "AI can't recursively self-improve" view, when a) we are already seeing recursive improvement (just not purely "self"-improvement), and b) it's very normal for technological advancement to have recursive gains - see e.g. Moore's law, or technological contributions to GDP growth generally.

Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.

It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.

[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

bglazer 7 days ago | parent | next [-]

Yudkowsky seems to believe in fast take off, so much so that he suggested bombing data centers. To more directly address your point, I think it’s almost certain that increasing intelligence has diminishing returns and the recursive self improvement loop will be slow. The reason for this is that collecting data is absolutely necessary and many natural processes are both slow and chaotic, meaning that learning from observation and manipulation of them will take years at least. Also lots of resources.

Regarding LLM’s I think METR is a decent metric. However you have to consider the cost of achieving each additional hour or day of task horizon. I’m open to correction here, but I would bet that the cost curves are more exponential than the improvement curves. That would be fundamentally unsustainable and point to a limitation of LLM training/architecture for reasoning and world modeling.

Basically I think the focus on recursive self improvement is not really important in the real world. The actual question is how long and how expensive the learning process is. I think the answer is that it will be long and expensive, just like our current world. No doubt having many more intelligent agents will help speed up parts of the loop but there are physical constraints you can’t get past no matter how smart you are.

doubleunplussed 7 days ago | parent [-]

How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?

At some point learning can occur with "self-play", and I believe this is already happening with LLMs to some extent. Then you're not limited by imitating human-made data.

If learning something like software development or mathematical proofs, it is easier to verify whether a solution is correct than to come up with the solution in the first place, many domains are like this. Anything like that is amenable to learning on synthetic data or self-play like AlphaGo did.
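A trivial toy example of that asymmetry (my own, and not specific to how any lab actually generates training data): verifying a candidate answer can be a single cheap check even when producing it requires a search, and that cheap check is exactly the kind of signal you can train against.

  # Toy illustration of the verify/generate gap: checking a factorization is
  # one multiplication, while finding it is a (naive) search.
  def verify(n, p, q):
      return p > 1 and q > 1 and p * q == n

  def find_factors(n):
      d = 2
      while d * d <= n:          # trial division -- the "hard" direction
          if n % d == 0:
              return d, n // d
          d += 1
      return None

  n = 999983 * 1000003           # product of two primes
  p, q = find_factors(n)         # slow: searches up to sqrt(n)
  assert verify(n, p, q)         # fast: one multiplication gives the signal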

I can understand that people who think of LLMs as human-imitation machines, limited to training on human-made data, would think they'd be capped at human-level intelligence. However I don't think that's the case, and we have at least one example of superhuman AI in one domain (Go) showing this.

Regarding cost, I'd have to look into it, but I'm under the impression costs have been up and down over time as models have grown but there have also been efficiency improvements.

I think I'd hazard a guess that end-user costs have not grown exponentially like time horizon capabilities, even though investment in training probably has. Though that's tricky to reason about because training costs are amortised and it's not obvious whether end user costs are at a loss or what profit margin for any given model.

On fast vs. slow takeoff - Yud does seem to believe in a fast takeoff, yes, but it's also one of the oldest disagreements in rationalist circles, on which he disagreed with his main co-blogger on the original rationalist blog, Overcoming Bias. Some discussion of this and of more recent disagreements here [1].

[1] https://www.astralcodexten.com/p/yudkowsky-contra-christiano...

bglazer 7 days ago | parent | next [-]

AlphaGo showed that RL+search+self play works really well if you have an easy to verify reward and millions of iterations. Math partially falls into this category via automated proof checkers like Lean. So, that’s where I would put the highest likelihood of things getting weird really quickly. It’s worth noting that this hasn’t happened yet, and I’m not sure why. It seems like this recipe should already be yielding results in terms of new mathematics, but it isn’t yet.

That said, nearly every other task in the world is not easily verified, including things we really care about. How do you know if an AI is superhuman at designing fusion reactors? The most important step there is building a fusion reactor.

I think a better reference point than AlphaGo is AlphaFold. DeepMind found some really clever algorithmic improvements, but they didn't know whether they actually worked until the CASP competition, which evaluated their model on new X-ray crystal structures of proteins. Needless to say, getting X-ray protein structures is a difficult and complex process. Also, they trained AlphaFold on thousands of existing structures that were accumulated over decades and required millennia of graduate-student hours to find. It's worth noting that we have very good theories for all the basic physics underlying protein folding, but none of the physics-based methods work; we had to rely on painstakingly collected data to learn the emergent phenomena that govern folding. I suspect this will be the case for many other tasks.

Vegenoid 7 days ago | parent | prev [-]

> How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?

Go is entirely unlike reality in that the rules are fully known and it can be perfectly simulated by a computer. AlphaGo worked because it could run millions of tests in a short time frame, because it is all simulated. It doesn't seem to answer the question of how an AI improves its general intelligence without real-world interaction and data gathering at all. If anything it points to the importance of doing many experiments and gathering data - and this becomes a bottleneck when you can't simply make the experiment run faster, because the experiment is limited by physics.

Vegenoid 7 days ago | parent | prev [-]

> If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?

Humans have a lot more going on than just an intelligence brain. The two big ones are: bodies, with which to richly interact with reality, and emotions/desire, which drive our choices. The one that I don't think gets enough attention in this discussion is the body. The body is critical to our ability to interact with the environment, and therefore learn about it. How does an AI do this without a body? We don't have any kind of machine that comes close to the level of control, feedback, and adaptability that a human body offers. That seems very far away. I don't think that an AI can just "improve itself" without being able to interact with the world in many ways and experiment. How does it find new ideas? How does it test its ideas? How does it test its abilities? It needs an extremely rich interface with the physical world, that external feedback is necessary for improvement. That requirement would put the prospect of a recursive self-improving AI much further into the future than many rationalists believe.

And of course, the "singularity" scenario doesn't just assume recursive self-improvement, it assumes exponential recursive self-improvement all the way to superintelligence. This is highly speculative. It's just as possible that the curve is more logarithmic, sinusoidal, or linear. Believing that fully exponential self-improvement is the likely scenario, based on the curve of some metric that hasn't existed for very long, does not seem justified. It is just as easy to imagine that intelligence gains get harder and harder as intelligence increases. We see many things that are exponential for a time, and then they aren't anymore, and basing big decisions on "this curve will be exponential all the way" because we're seeing exponential progress now, at the very early stages, does not seem sound.
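One way to see why "it looks exponential right now" is weak evidence either way (a toy sketch of my own, with arbitrary made-up parameters): a pure exponential and a logistic curve that eventually saturates are nearly indistinguishable early on.

  # Toy sketch (arbitrary parameters): early on, a pure exponential and a
  # saturating logistic curve look almost the same, so early data alone
  # can't tell you which curve you're on.
  import math

  K = 1000.0   # assumed ceiling for the logistic curve (made up)
  r = 0.5      # assumed growth rate (made up)

  for t in range(0, 9, 2):
      exponential = math.exp(r * t)
      logistic = K / (1 + (K - 1) * math.exp(-r * t))   # starts at 1, saturates at K
      print(f"t={t}: exponential={exponential:7.2f}  logistic={logistic:7.2f}")

The two only separate once the logistic curve is well on its way to its ceiling, which is exactly the part of the curve we have no data for yet.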

Humans have human-level intelligence, but we are very far away from understanding our own brain such that we can modify it to increase our capacity for intelligence (to any degree significant enough to be comparable to recursive self-improvement). We have to improve the intelligence of humanity the hard way: spend time in the world, see what works, the smart humans make more smart humans (as do the dumb humans, which often slows the progress of the smart humans). The time spent in the world, observing and interacting with it, is crucial to this process. I don't doubt that machines could do this process faster than humans, but I don't think it's at all clear that they could do so, say, 10,000x faster. A design needs time in the world to see how it fares in order to gauge its success. You don't get to escape this until you have a perfect simulation of reality, which if it is possible at all is likely not possible until the AI is already superintelligent.

Presumably a superintelligent AI has a complete understanding of biology - how does it do that without spending time observing the results of biological experiments and iterating on them? Extrapolate that to the many other complex phenomena that exist in the physical world. This is one of the reasons that our understanding of computers has increased so much faster than our understanding of many physical sciences: to understand a complex system that we didn't create and don't have a perfect model of, we must do lots of physical experiments, and those experiments take time.

The crucial assumption that the AI singularity assumption relies on is that once intelligence hits a certain threshold, it can gaze at itself and self-improve to the top very quickly. I think this is fundamentally flawed, as we exist in a physical reality that underlies everything and defines what intelligence is. Interaction and experimentation with reality is necessary for the feedback loop of increasing intelligence, and I think this both severely limits how short that feedback loop can be, and makes the bar for an entity that can recursively self-improve itself much higher, as it needs a physical embodiment far more complex and autonomous than any robot we've managed to make.

godelski 8 days ago | parent | prev [-]

  > The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. 
This is also the weirdest thing, and I don't think they even realize the assumption they are making. It assumes there is infinite knowledge to be had. It also ignores that we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) grows exponentially in complexity: each additional increment of accuracy costs far more than the last. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap looks very, very different. One way they could still be right is if this is an S-curve and we humans sit at the very bottom of it. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our understanding relative to the AI will be like an ant trying to understand us.

The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth; they're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are remarkably compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd just ask them not to come back and to tell the others.

Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just prey on fear and uncertainty. Hell, even Altman does this, ignoring the risks and concerns of existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writing about how AI turns evil and destroys everyone, isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?

jandrese 7 days ago | parent | next [-]

I think of it more like visualizing a fractal on a computer. The more detail you try to dig down into, the more detail you find, and pretty quickly you run out of precision in your model and the whole thing falls apart. Every layer further down you go, the resource requirements increase exponentially. That's why we have so many LLMs that seem beautiful at first glance but go to crap when the details really matter.
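To put a rough number on that (my own back-of-the-envelope, assuming an ordinary 1000-pixel-wide view rendered with double-precision floats): the zoom stops resolving anything new once the distance between adjacent pixels drops below the spacing of representable doubles at that coordinate, after which you need arbitrary precision and every extra digit costs more.

  # Back-of-the-envelope: at what zoom does double precision run out of pixels?
  import math

  width_px = 1000              # assumed image width in pixels
  view_width = 3.0             # initial width of the view in the complex plane
  eps = math.ulp(1.0)          # spacing between adjacent doubles near |z| ~ 1

  zoom = 1.0
  while view_width / zoom / width_px > eps:
      zoom *= 10
  print(f"adjacent pixels become indistinguishable around {zoom:.0e}x zoom")

With these assumed numbers that happens around 1e14x, which is nothing compared to how deep the actual fractal goes.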

empiricus 7 days ago | parent | prev [-]

Soo many things make no sense in this comment that I feel like there's a 20% chance this is a mid-quality GPT. And so much interpolation effort, but starting from hearsay instead of primary sources. Then the threads stop just before seeing the contradiction with the other threads. I imagine this is how we all reason most of the time, just based on vibes :(

godelski 7 days ago | parent [-]

Sure, I wrote a lot and it's a bit scattered. You're welcome to point to something specific but so far you haven't. Ironically, you're committing the error you're accusing me of.

I'm also not exactly sure what you mean, because the only claim I've made is that they've made assumptions where there are other possible, and likely, alternatives. It's much easier to show something is wrong than to show it's right (or in our case, to give evidence against rather than for, since no one is proving anything).

So the first part I'm saying we have to consider two scenarios. Either intelligence is bounded or unbounded. I think this is a fair assumption, do you disagree?

In an unbounded case, their scenario can happen, so I don't address that. But if you want me to, sure: I have no reason to believe information is unbounded when everything around me suggests that it's bounded. Maybe start with the Bekenstein bound. Sure, it doesn't prove information is bounded, but you'd then need to convince me that an entity not subject to our universe and our laws of physics is going to care about us and be malicious. Hell, that entity wouldn't even be subject to time, and we're still living.

In the bounded case it can still happen, but we need to understand what conditions that requires. There are a lot of possible functions, but I went with an S-curve for simplicity and familiarity. It'll serve fine (we're on HN, man...) for any monotonically increasing case (or even a non-monotonic one, it just needs to tend that way).

So think about it. Change the function if you want, I don't care. But if intelligence is bounded, then if we're x times more intelligent than ants, where on the graph do we need to be for another thing to be x times more intelligent than us? There's not a lot of room for that to even happen. It requires our intelligence (on that hypothetical scale) to be pretty close to an ant's. What cannot happen is for the ant to be in the tail of that function and for us to be past the inflection point (halfway); there just isn't enough space left on that y-axis for anything to be x times more intelligent than us. This doesn't completely reject that crazy superintelligence, but it does place some additional constraints that we can use to reason about things. For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).

Yeah, I'll admit that this is a very naïve model, but again, I'm not trying to say what's right, just that there's good reason to believe their assumption is false. Adding more complexity to this model doesn't make their case stronger, it makes it weaker.
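Here's a toy version of the arithmetic anyway (my own sketch, with a made-up scale and ratio): put intelligence on a bounded scale with ceiling M, and suppose humans sit x times above ants. For anything to sit x times above us, we would have to be no higher than M/x, i.e. down near the ant end of the scale ourselves.

  # Toy sketch of the bounded case (made-up numbers): if intelligence tops out
  # at M and we are x times above ants, anything x times above us needs
  # x * human_level <= M.
  M = 1.0        # normalized ceiling on intelligence (the bounded assumption)
  x = 1000.0     # assumed human-to-ant intelligence ratio (made up)

  human_ceiling = M / x    # highest we could be for an ant-to-human gap to fit above us
  print(f"for a {x:g}x gap above us to fit, we'd have to sit at <= {human_ceiling:g} of the max")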

The second part I can make much easier to understand.

Yes, there are bad smart people, but look at the smartest people in history. Did they seek power or wish to do harm? Most of the great scientists did not. A lot of them were actually quite poor, and many even died fighting persecution.

So we can't conclude that greater intelligence results in greater malice. This isn't hearsay, I'm just saying Newton wasn't a homicidal maniac. I know, bold claim...

  > starting from hearsay
I don't think this word means what you think it means. Just because I didn't link sources doesn't make it a rumor. You can validate them, and I gave you enough information to do so. You now have more. Ask GPT for links, I don't care, but people should stop worshiping Yud.
empiricus 7 days ago | parent | next [-]

And about this second comment: I agree that intelligence is bounded. We can discuss how much more intelligence is theoretically possible, but even if we limit ourselves to extrapolating from human variance (the agency of Musk, the math smarts of von Neumann, the manipulativeness of Trump, etc.) and add a little more speed and parallelism (100 times faster, 100 copies cooperating), we can get pretty far.

Also, I agree we are all pretty fucking dumb and cannot make these kinds of predictions, which is actually one very important point in rationalist circles: doom is not certain, but p(doom) looks uncomfortably high. How lucky do you feel?

godelski 6 days ago | parent [-]

  > How lucky do you feel?
I don't gamble. But I am confident P(doom) is quite low.

Despite that, I do take AI safety quite seriously and literally work on the fundamental architectures of these things. You don't need P(doom) to be high for you to take doom seriously. The probability isn't that consequential when we consider such great costs. All that matters is the probability is not approximately zero.

But all you P(doom)-ers just make this work harder to do: harder to improve those systems and make them safer. It just furthers people like Altman, who are pushing a complementary agenda and who recognize that you cannot stop the development of AI. In fact, the more you press this doom story, the more impossible stopping becomes. What the story of doom (as well as the story of immense wealth) pushes is a need to rush.

If you want to really understand this, go read about nuclear deterrence. I don't mean go watch some YouTube video or read some LessWrong article; I mean go grab a few books and read both sides of the arguments. As it stands, this is how the military ultimately thinks, and that effectively makes it true. You don't launch nukes because your enemy will too. You also don't say where that red line is, because keeping it vague lets you use it as a bargaining chip. If you state the line, your enemy will just walk up to it and do everything short of it.

So what about AI? The story being sold is that it enables a weapon of mass destruction. Take the US and China. China has to make AI because the US is making AI, and if the US gets there first, China can't risk that the US won't use it to take out all their nukes or ruin their economy. They can't take that risk even if the probability is low. But the same is true in reverse: the US can't stop, because China won't, and if China gets there first they could destroy the US. You see the trap? [0] Now here's the fucking kicker: suppose you believe your enemy is close to building that AI weapon. Does that cross the red line at which you will use nukes?
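The trap has the shape of a two-player game where racing dominates no matter what the other side does (a toy sketch with made-up payoffs, nothing empirical):

  # Toy payoff sketch of the race (made-up numbers): each side prefers to race
  # regardless of what the other does, so (race, race) is the equilibrium even
  # though (cooperate, cooperate) would be better for both.
  payoffs = {   # (our move, their move) -> (our payoff, their payoff)
      ("cooperate", "cooperate"): (3, 3),
      ("cooperate", "race"):      (0, 4),
      ("race",      "cooperate"): (4, 0),
      ("race",      "race"):      (1, 1),
  }

  def best_response(their_move):
      # our move with the higher payoff for us, holding their move fixed
      return max(("cooperate", "race"),
                 key=lambda ours: payoffs[(ours, their_move)][0])

  for their_move in ("cooperate", "race"):
      print(f"if they {their_move}, our best response is to {best_response(their_move)}")

Whatever you believe the other side will do, your best response is to race, even though both racing leaves everyone worse off than mutual restraint.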

So you doomers are creating a self-fulfilling prophecy, in a way. Ironically, this is highly relevant to the real dangers of AI systems: the current (and future) danger comes from outsourcing intelligence and decision making to these machines. And that becomes less problematic once we actually create machines with intelligence (intelligence like humans or animals have, not automated reasoning, a technology we've had since the 60's).

You want to reduce the risk of doom? Here's what you do. You convince both sides that instead of competing, they pursue development together. Hand in hand. Openly. No one gets AI first. Secret AI programs? Considered an act of aggression. Yes, this still builds AI but it dramatically reduces the risk of danger. You don't need to rush or cut corners because you are worried about your enemy getting a weapon first and destroying you. You get the "weapon" simultaneously, along with everyone else on the planet. It's not a great solution because you still end up with "nuclear weapons" (analogously), but if everyone gets it at the same time then you end up in a situation like we have been for the last few decades (regardless of the cause, it is an abnormally peaceful time in human history) where MAD policies are in effect[1].

I don't think it'll happen; everyone will say "I would, but they won't" and end up failing without trying. But ultimately this is a better strategy than getting people to stop. You're not going to be successful in stopping it. It just won't happen. P(doom) exists in this scenario even without the development of AGI. As long as that notion of doom exists, there are incentives to rush and cut corners. People like Altman will continue to push that message and say that they are the only ones who can do it safely and do it fast (which is why they love the "Scale is All You Need" story). So if you are afraid, I don't think you're afraid enough. There's a lot of doom that exists before AGI. You don't need AGI or ASI for the paperclip scenario. Such an AI doesn't even require real thinking[2].

The reason doomers make work like mine harder is that researchers like me care about the nuances and subtleties. We care about understanding how the systems work. But as long as a looming threat is on the line, people will argue that we have no time to study the details or find out how these things work. You cannot make these things safe without understanding how they work (to a sufficient degree, at least). And frankly, it isn't just doomers, it is also people rushing to make the next AI product. It doesn't matter that ignoring those details and nuances is self-sabotaging. The main assumption underlying my suggestion is that when people rush they tend to make more mistakes. It's not guaranteed that people make mistakes, but there sure is a tendency for that to happen. After all, we're only human.

You ask how lucky I feel? I'll ask you how confident you are that a bunch of people racing to create something won't make mistakes. Won't make disastrous mistakes. This isn't just a game between the US and China; there are a lot more countries involved. You think all of them can race like this and not make a major mistake? A mistake that brings about the very doom in P(doom)? Me? I sure don't feel lucky about that one.

[0] It sounds silly, but this is how Project Stargate happened. No, not the current one that ironically shares the same name, the one in the 70's where they studied psychic powers. It started because a tabloid published that Russians were doing it, so the US started research in response, which caused the Russians to actually research psychic phenomena.

[1] Not to mention that if this happened it would be a unique act of unity that we've never seen in human history. And hey, if you really want to convince AI, Aliens, or whatever that we can be peaceful, here's the chance.

[2] As Melanie Mitchell likes to point out, an AGI wouldn't have this problem, because if you have general intelligence you understand that humans won't sacrifice their own lives to make more paperclips. Who then would even use them? So the paperclip scenario is a danger of a sophisticated automaton rather than of intelligence.

empiricus 6 days ago | parent [-]

Thank you for the thoughtful response. On first read, everything looks reasonably correct to me. However, you present the doom argument as divisive and as causing the race, when in fact it is probably the only argument for cooperation and for slowing the race.

bondarchuk 7 days ago | parent | prev | next [-]

>For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).

...which is why we should be careful not to rush full-speed ahead and develop AI before we can predict how it will behave after some iterations of self-improvement. As the rationalist argument goes.

BTW you are assuming that intelligence will necessarily and inherently lead to (good) morality, and I think that's a much weirder assumption than some you're accusing rationalists of holding.

godelski 7 days ago | parent [-]

  > you are assuming that intelligence will necessarily and inherently lead to (good) morality
Please read before responding. I said no such thing. I even said there are bad smart people. I only argued that a person's goodness is orthogonal to their intelligence. But I absolutely did not make an assumption that intelligence equates to good. I said it was irrelevant...
bondarchuk 6 days ago | parent [-]

Idk, you certainly seemed to be implying that, especially in your earlier comment. I would agree that it is orthogonal; I would think most rationalists would, too.

godelski 6 days ago | parent [-]

I promise you, you misread. I think this is probably the problem sentence:

  >>>> The vast majority of smart people I know are very peaceful.
I'll also add that the vast majority of people I know are very peaceful. But neither of these means I don't know malicious people. You'd need to change "The vast majority" to "Every" for your reading to follow. I'm not discounting malicious smart people; I'm pointing out that it is a weird assumption to make when most people we know are kind and peaceful.

The second comment is explicit though

  >> So we can't conclude that greater intelligence results in greater malice.
This is not equivalent to "We can conclude that greater intelligence results in less malice." Those are completely different claims.
empiricus 7 days ago | parent | prev [-]

I apologize for the tone of my comment, but this is how I read your arguments (I was a little drunk at the time):

1. future AI cannot be infinitely intelligent, therefore AI is safe

But even with our level of intelligence, if we get serious we can eliminate all humans.

2. some smart ppl I know are peaceful

Do you think Putin is dumb?

3. smart ppl have different preferences than other ppl therefore AI is safe

Ironically this is the main doom argument from EY: it is difficult to make an AI that has the same values as us.

4. AI is competent enough to destroy everyone but is not able to tell fact from fiction

So are you willing to bet your life and the life of your loved ones on the certainty of these arguments?

godelski 7 days ago | parent [-]

  > I was a little drunk at the time
Honestly it still sounds like you are. You've still misread my comment and think I said there can't be bad smart people. I made no such argument, I argued that intelligence isn't related to goodness.
bondarchuk 6 days ago | parent [-]

If that was what you meant to say though, you've gotta admit that opening a paragraph with "The other weird assumption I hear is about how it'll just kill us all", and then spending the rest of the paragraph giving examples of the peacefulness of smart people, is not the most effective strategy of communicating that.

godelski 6 days ago | parent [-]

You were the one who interpreted "Here's examples of smart peaceful people" as "smart == peaceful". I was never attempting to make such a claim, and I said as much. The whole thread is about bad assumptions and bad logic.

  > is not the most effective strategy of communicating that.
The difficulty of talking on the internet is that you can't know your audience, and your audience is everybody. Yes, this should make us more aware of how we communicate, but it also means we need to be more aware of how we interpret. The problem was created because you made bad assumptions about what I was trying to communicate. There are multiple ways to interpret what I said, I'm not denying that; it'd be silly to, because this is true of anything you say. But the clues are there to get everything I said, and when I apologize and try to clarify, do you go back and reread what I wrote with the new understanding, or do you just pull what I wrote from memory? It probably isn't good to do the latter, because clearly it was misinterpreted the first time, right? Even if that is entirely my fault and not yours. That's why I'm telling you to reread. Because

  >>>> So we can't conclude that greater intelligence results in greater malice.
Is not equivalent to

  >>> assuming that intelligence will necessarily and inherently lead to (good) morality
We can see that this is incorrect with a silly example. Suppose someone says "All apples are red" and then someone says "but look at this apple, it is green. In fact, most apples are green." Forget the truthiness of this claim and focus on the logic. Did I claim that red apples don't exist? Did I say that only green exists? Did I forget about yellow, pink, or white ones? No! Yet this is the same logic pattern as above. You will not find the sentence "all smart people are good" (all apples are green).

Let's rewrite your comment with apples

  > If that was what you meant to say though, you've gotta admit that opening a paragraph with "The other weird assumption I hear is about how all apples are red", and then spending the rest of the paragraph giving examples of different types of green apples, is not the most effective strategy of communicating that.
 
Do you agree with your conclusion now? We only changed the subject; the logic is intact. So, how about them apples?

And forgive my tone, but both you and empiricus are double commenting and so I'm repeating myself. You're also saying very similar things, we don't need to fracture a conversation and repeat. We can just talk human to human.

bondarchuk 6 days ago | parent [-]

I think the big difference between our views is that you are taking the rationalist argument to be "from intelligence follows malice, therefore it will want to kill us all" whereas I take it to be "from intelligence follows great capability and no morality, therefore it may or may not kill us uncaringly in pursuit of other goals".

godelski 6 days ago | parent [-]

  > you are taking the rationalist argument to be
I think they say P(doom) is a high number [0]. Or in other words, AGI is likely to kill us. I interpret this as "if we make a really intelligent machine it is very likely to kill us all." My interpretation is mainly based on them saying "if we build a really intelligent machine, it is very likely to kill us all."

Yud literally wrote a book titled "If Anyone Builds It, Everyone Dies."[1] There's not much room for ambiguity here...

[0] Yud is on the record saying at least 95%: https://pauseai.info/pdoom He also said anyone with a higher P(doom) than him is crazy, so I think that says a lot...

[1] https://ifanyonebuildsit.com/

bondarchuk 6 days ago | parent [-]

Yes, I agree they are saying it is likely going to kill us all. My interpretation is consistent with that, and so is yours. The difference is in why/how it will kill us; you sound to me like you think the rationalist position is that from intelligence follows malice, and therefore it will kill us. I think that's a wrong interpretation of their views.

godelski 6 days ago | parent [-]

Well then, instead of just telling me I'm wrong, why don't you tell me why I'm wrong?

6 days ago | parent [-]
[deleted]
uoaei 8 days ago | parent | prev | next [-]

This is why it's important to emphasize that rationality is not a good goal to have. Rationality is nothing more than applied logic, which takes axioms as given and deduces conclusions from there.

Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.

JKCalhoun 8 days ago | parent | prev | next [-]

You're describing the impressions I had of MENSA back in the 70's.

Viliam1234 6 days ago | parent [-]

He probably is describing Mensa, and assuming that it also applies to the rationality community without having any specific knowledge of the latter.

(From my perspective, Hacker News is somewhere in the middle between Mensa and Less Wrong. Full of smart people, but most of them don't particularly care about evidence, if providing their own opinion confidently is an alternative.)

ambicapter 8 days ago | parent | prev | next [-]

One of the only idioms that I don't mind living my life by is, "Follow the truth-seeker, but beware those who've found it".

JKCalhoun 8 days ago | parent [-]

Interesting. I can't say I've done much following though — not that I am aware of anyway. Maybe I just had no leaders growing up.

zaphar 8 days ago | parent | prev | next [-]

The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.

UltraSane 7 days ago | parent | prev | next [-]

A good example of this is the number of huge assumptions needed for the argument for Roko's basilisk. I'm shocked that some people actually take it seriously.

niplav 7 days ago | parent [-]

I don't believe anyone has taken it seriously in the last half-decade, if you find counter-evidence for that belief let me know.

GeoAtreides 7 days ago | parent | prev | next [-]

Epistemological skepticism sure is a belief. A strong belief on your side?

I am profoundly sure, I am certain I exist and that a reality outside myself exists. Worse, I strongly believe knowing this external reality is possible, desirable and accurate.

How suspicious does that make me?

NoGravitas 7 days ago | parent [-]

It means you haven't read Hume, or, in general, taken philosophy seriously. An academic philosopher might still come to the same conclusions as you (there is an academic philosopher for every possible position), but they'd never claim the certainty you do.

GeoAtreides 7 days ago | parent [-]

why so aggressive chief

I am certain that your position "All academic philosophers never claim complete certainty about their beliefs" is not even wrong or falsifiable.

ratelimitsteve 7 days ago | parent | prev | next [-]

Are you familiar with the ship of Theseus as an argumentation fallacy? Innuendo Studios did a great video on it, and I think a lot of what you're talking about breaks down to this. Tldr: it's a fallacy of substitution, where small details of an argument get replaced by things that are (or feel like) logical equivalents until you end up saying something entirely different but are arguing as though you said the original thing. In the video the example is "senator doxxes a political opponent", but on closer inspection "senator" turns out to mean "a contractor working for the senator" and "doxxes a political opponent" turns out to mean "liked a tweet that had that opponent's name in it in a way that could draw attention to it".

Each change is arguably equivalent, and it seems logical that if x = y then you could put y anywhere you have x, but after all of the changes are applied, the argument that emerges is definitely different from the one before the substitutions were made. Communities that pride themselves on being extra rational seem especially subject to this, because it has all the trappings of rationalism but enables squishy, feely arguments.

ratelimitsteve 7 days ago | parent [-]

https://www.youtube.com/watch?v=Ui-ArJRqEvU

Meant to drop a link for the above, my bad

EGreg 7 days ago | parent | prev | next [-]

There are certain things I am sure of even though I derived them on my own.

But I constantly battle-tested them against other smart people's views, and only after I ran out of people bringing me new rational objections did I become sure.

Now I can battle test them against LLMs.

At a lesser level of confidence, I have also found that a lot of the time the people who disagreed with what I thought had to be the case later came to regret it, because their strategies ended in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted by at least a critical mass. Usually they fly in the face of what happens normally in society. People don't see how their strategies and lives are shaped by the technology and social norms around them.

Here, I will share three examples:

Public Health: https://www.laweekly.com/restoring-healthy-communities/

Economic and Governmental: https://magarshak.com/blog/?p=362

Wars & Destruction: https://magarshak.com/blog/?p=424

For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.

projektfu 7 days ago | parent [-]

"genetically engineers high fructose corn syrup into everything"

This phrase is nonsense, because HFCS is made by a chemical process applied to ordinary corn after the harvest. The corn may be a GMO, but it certainly doesn't have to be.

EGreg 7 days ago | parent [-]

Agreed, that was phrased wrong. The fruits across the board have been genetically engineered to be extremely sweet (fructose, not the syrup): https://weather.com/news/news/2018-10-03-fruit-so-sweet-zoo-...

Meanwhile, their nutritional quality has gone down tremendously, for vegetables too: https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/

projektfu 7 days ago | parent [-]

Again, the term GMO is not what you're looking for. In the first article, a zookeeper is quoted making much the same mistake.

Here is a list of approved bioengineered foods in the US:

https://www.ams.usda.gov/rules-regulations/be/bioengineered-...

All the fruits on the list are engineered for properties other than sweetness.

The term you're looking for is "bred". Fruits have been bred to be sweeter, and this has been going on a long time. Corn is bred for high protein or high sugar, but the sweet corn is not what's used for HFCS.

Personally, I think the recent evidence shows that the problem is not so much that fruit is too sweet, but that everything is made to be addictive. Satiety signals are lost or distorted, and we are left with diseases of excess consumption.

EGreg 7 days ago | parent [-]

Well, either way, you agree with me. Government and corporations work together and distract the individual by telling them they can fix the downstream situation in their own, private way.

amanaplanacanal 8 days ago | parent | prev | next [-]

It's very tempting to try to reason things through from first principles. I do it myself, a lot. It's one of the draws of libertarianism, which I've been drawn to for a long time.

But the world is way more complex than the models we used to derive those "first principles".

BobaFloutist 7 days ago | parent [-]

It's also very fun and satisfying. But it should be limited to an intellectual exercise at best, and more likely a silly game. Because there's no true first principle, you always have to make some assumption along the way.

7 days ago | parent | prev | next [-]
[deleted]
positron26 7 days ago | parent | prev | next [-]

Any theory of everything will often have a little perpetual motion machine at the nexus. These can be fascinating to the mind.

Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.

Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.

The extremism comes in when, once the user has learned to say hello to a stranger, their comfort zone has expanded into an area where their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into some confirmation of the crutch and the inferiority of confounding ideas.

"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.

A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.

Another psychology that will come into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries and be ready to tie off the garbage bag and move on if someone shows remorse and a desire to revert or transform. Otherwise every relationship only gets worse. If instead you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.

bobson381 8 days ago | parent | prev | next [-]

There should be an extremist cult of people who are certain only that uncertainty is the only certain thing

hungmung 8 days ago | parent | next [-]

What makes you so certain there isn't? A group that has a deep understanding fnord of uncertainty would probably like to work behind the scenes to achieve their goals.

cwmoore 8 days ago | parent | next [-]

The Fnords do keep a lower profile.

dcminter 8 days ago | parent | prev [-]

One might even call them illuminati? :D

pancakemouse 8 days ago | parent | prev | next [-]

My favourite bumper sticker, "Militant Agnostic. I don't know, and neither do you."

bobson381 8 days ago | parent [-]

I heard about this the other day! I think I need one.

rpcope1 8 days ago | parent | prev | next [-]

More people should read Sextus Empiricus as he's basically the O.G. Phyrronist skeptic and goes pretty hard on this very train of thought.

Telemakhos 8 days ago | parent | next [-]

If I remember my Gellius, it was the Academic Skeptics who claimed that the only certainty was uncertainty; the Pyrrhonists, in opposition, denied that one could be certain about the certainty of uncertainty.

bobson381 8 days ago | parent | prev [-]

Cool. Any specific recs or places to start with him?

rpcope1 8 days ago | parent [-]

Probably the Hackett book, "Sextus Empiricus: Selections from the Major Writings on Scepticism"

bobson381 8 days ago | parent [-]

Thanks!

jazzyjackson 8 days ago | parent | prev | next [-]

A Wonderful Phrase by Gandhi

  I do dimly perceive
  that while everything around me is ever-changing,
  ever-dying there is,
  underlying all that change,
  a living power
  that is changeless,
  that holds all together,
  that creates,
  dissolves,
  and recreates
Viliam1234 6 days ago | parent | prev | next [-]

You mean like this? https://www.readthesequences.com/Zero-And-One-Are-Not-Probab...

card_zero 8 days ago | parent | prev | next [-]

The Snatter Goblins?

https://archive.org/details/goblinsoflabyrin0000frou/page/10...

tomjakubowski 7 days ago | parent | prev | next [-]

https://realworldrisk.com/

tim333 8 days ago | parent | prev | next [-]

Socrates was fairly close to that.

freedomben 8 days ago | parent [-]

My thought as well! I can't remember names at the moment, but there were some cults that spun off from Socrates. Unfortunately they also adopted his practice of never writing anything down, so we don't know a whole lot about them

JTbane 8 days ago | parent | prev | next [-]

"I have no strong feelings one way or the other." thunderous applause

saltcured 8 days ago | parent | prev | next [-]

There would be, except we're all very much on the fence about whether it is the right cult for us.

mapontosevenths 7 days ago | parent | prev | next [-]

There already is, they're called "Politicians."

ameliaquining 8 days ago | parent | prev | next [-]

Like Robert Anton Wilson if he were way less chill, perhaps.

arwhatever 8 days ago | parent | prev [-]

“Oh, that must be exhausting.”

konfusinomicon 7 days ago | parent | prev | next [-]

all of science would make sense if it weren't for that 1 pesky miracle

antisthenes 7 days ago | parent | prev | next [-]

It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is.

You need to review the definition of the word.

> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.

The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.

> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.

That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.

mapontosevenths 7 days ago | parent | next [-]

> If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional

Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.

Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.

antisthenes 7 days ago | parent [-]

There is no (T)ruth, but there is a useful approximation of truth for 99.9% of the things I want to do in life.

YMMV.

JohnMakin 7 days ago | parent | prev [-]

> It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is. You need to review the definition of the word.

Oh, do enlighten then.

> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.

I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.

antisthenes 7 days ago | parent [-]

> Oh, do enlighten then.

Absolutely. Just in case your keyboard wasn't working to arrive at this link via Google.

https://www.merriam-webster.com/dictionary/axiom

First definition, just in case it still isn't obvious.

> I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress.

Someone was wrong on the Internet! Just don't want other people getting the wrong idea. Good fun regardless.

JohnMakin 6 days ago | parent [-]

Ah, I see the confusion now. It is you who actually does not know what the word means or how it was used in this context. Several hundred other people seem to have avoided this confusion, though, and you still haven't said how it was used wrong, so I can safely dismiss you as a troll. Hint: it was once axiomatic that the sun revolved around the earth. Hope that helps!

antisthenes 6 days ago | parent [-]

Quantity of idiots agreeing with someone does not make someone right. Keep being delusional.

JohnMakin 5 days ago | parent [-]

You first! It would seem you know quite a bit about that.

inasio 8 days ago | parent | prev | next [-]

I once saw a discussion about how people should not have kids, since a child is by far the biggest increase to your lifetime carbon footprint (>10x the impact of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprints

throw0101a 8 days ago | parent | next [-]

> I once saw a discussion about how people should not have kids, since a child is by far the biggest increase to your lifetime carbon footprint (>10x the impact of going vegan, etc.), get driven all the way to advocating genocide as a way of minimizing carbon footprints

The opening scene of Utopia (UK) s2e6 goes over this:

> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"

* https://www.youtube.com/watch?v=rcx-nf3kH_M

derektank 8 days ago | parent | prev [-]

Setting aside the reductio ad absurdum of genocide, this is an unfortunately common viewpoint. People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2. This reasoning can be applied to all sorts of naive "more people bad" arguments. I can't imagine where the world would be if Norman Borlaug's parents had decided to never have kids out of concern for global food insecurity.

freedomben 8 days ago | parent | next [-]

It also entirely subjugates the economic realities that we (at least currently) live in to the future health of the planet. I care a great deal about the Earth and our environment, but the more I've learned about stuff the more I've realized that anyone advocating for focusing on one without considering the impact on the other is primarily following a religion

mapontosevenths 7 days ago | parent [-]

> It also entirely subjugates the economic realities that we...

To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on earth. In a pure democracy, I'm fairly certain the planet's well-being would be deemed more important than the economy of whatever nation you live in.

> advocating for focusing on one... is primarily following a religion

Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many-step thinker, the long-term fecundity of our species might feel more important than any level of short-term financial motivation.

freedomben 7 days ago | parent [-]

> To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on earth.

Well, if they choose to see me as trying to subjugate the world's health to my own economic well-being (despite the fact that I advocate policies that would harm me personally in the name of climate sustainability), then we're already starting the discussion from bad faith (literally they are already assuming bad faith on my part). I'm at the point where I don't engage with bad faith arguments because they just end up in frustration on both sides. This whole modern attitude of "if you disagree with me then you must be evil" thing is (IMHO) utter poison to our culture and our democracy, and the current resident of the White House is a great example of where that leads.

> In a pure democracy, I'm fairly certain the planets well being would be deemed more important than the economy of whatever nation you live in.

Yeah, for about 3 days, until people start getting hungry, or, less extreme, until they start losing their jobs and their homes, or, even longer term, when they start to realize that they won't be able to retire and/or that they are leaving their kids a much worse situation than they themselves had (much worse than the current dichotomy between Boomers and Millennials/Zoomers). Ignoring or disregarding Maslow's Hierarchy of Needs is a sure way to be surprised and rejected by the people. We know that even respectable people will often turn to violence (including cannibalism) when they get hungry or angry enough. We're not going to be able to save the planet if there's widespread violence.

> Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many-step thinker, the long-term fecundity of our species might feel more important than any level of short-term financial motivation.

I think this actually points at our misunderstanding (I know you're playing devil's advocate, so this isn't addressed to you personally, rather to your current presentation :-) ). I'm not talking about short-term financial or even economic motivation. I'm looking medium to long term, the same scale that I think needs to be considered for the planet. Now, that said, banning all fossil fuels tomorrow and causing a sweeping global depression in the short term is something I would radically oppose: it would cause immense suffering, I don't believe it would make much of a dent in the climate long-term (as it would quickly be reversed under the realities of politics), and it would absolutely harm the lower income brackets to a much greater proportional extent than the upper income brackets, who already have solar panels and are often capable of going off-grid. Though even they will still run out of food when the truck companies aren't able to re-stock local grocery store shelves...

mapontosevenths 7 days ago | parent [-]

> I'm at the point where I don't engage with bad faith arguments because they just end up in frustration on both sides.

I agree, and that's almost exactly why I replied to your statement that anyone who saw it differently than you did was "just following a religion" (to slightly paraphrase). They aren't; they just have a different perspective on the situation and have made different calculations regarding the risk/reward ratio.

> Ignoring or disregarding Maslow's Hierarchy of Needs is a sure way to be surprised and rejected by the people.

>I'm looking medium to long term, the same scale that I think needs to be considered for the planet.

I don't think they ARE ignoring Maslow's hierarchy. It seems to me that they just see environmental destruction as a more immediate concern than you do. You seem to have a "we'll fix it when it's more convenient" stance. That doesn't work for the folks who believe we'll all be starving within a decade or less, or who believe that it will NEVER be more convenient. To them, this is near the top of the hierarchy.

At the end of the day, I'm very much on your side of the argument. I think we do have some time to sort it out, and I suspect that we will eventually make significant progress towards those goals (despite modern Republicans' ostrich-based approach to risk). However, I understand why other people disagree, and I respect that. There's even some science that agrees with the "sky is falling" crowd. It's certainly not a totally irrational stance.

mapontosevenths 7 days ago | parent | prev | next [-]

> this is an unfortunately common viewpoint

Not everyone believes that the purpose of life is to make more life, or that having been born onto team human automatically qualifies team human as the best team. It's not necessarily unfortunate.

I am not a rationalist, but rationally, that whole "the meaning of life is human fecundity" shtick is after-school-special tautological nonsense, and that seems to be the assumption buried in your statement. Try defining what you mean without causing yourself some sort of recursion headache.

> their child might wind up..

They might also grow up to be a normal human being, which is far more likely.

> if Norman Borlaug's parents had decided to never have kids

Again, this would only matter if you consider the well-being of human beings to be the greatest possible good. Some people have other definitions, or are operating on much longer timescales.

freejazz 7 days ago | parent | prev | next [-]

Insane to call "more people bad" naive, but then actually try to account for what would otherwise best be described as hope.

inasio 7 days ago | parent [-]

The point is that you can go from "more people bad" to "less people good" in just a few jumps, and that is not great.

Dylan16807 7 days ago | parent | prev [-]

> People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2.

All else equal, it would be better to spread those chances across a longer period of time at a lower population with lower carbon use.

mensetmanusman 7 days ago | parent | prev | next [-]

Another issue with these groups is that they often turn into sex cults.

SLWW 7 days ago | parent | prev [-]

A logical argument is only as good as its presuppositions. Laying siege to your own assumptions before reasoning from them tends toward a more beneficial outcome.

Another issue with "thinkers" is that many are cowards; whether they realize it or not, a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.

> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.

This is where I depart from you. If I said it was anti-intellectual I would only be partially correct; it's worse than that, imo. You might be coming across "smart people" who claim to know nothing "for sure", which is in itself a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes, btw, and avoiding the granular argumentation surrounding the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who "know" that they can't know much of anything, as if they know what they are talking about to begin with? They are so defeatist in their own thoughts that it's comical. You say "profoundly unsure", which reads to me like "can't really ever know", and that is itself a sure truth claim, not a relative or comparative one as many would say, which is a sad attempt to side-step the absolute reality of their statement.

I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim that I will not argue for here; it is my presupposition. With that I make an ontological claim, a truth claim, concerning my existence, and it is one I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute: that "I am me". As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth. What I do not see in nature is the relative existing on its own, because behind every relative comparison there is an absolute holding the comparison up.

One simple example is heat. "Hot" is relative, yet heat is also objective: some heat can burn you instantly, some only over a very long time, and some will never burn you at all. When something is "too hot", that is a comparative claim, implying some other "hot" that is merely "hot" or "not hot enough", but the absolute, heat itself, still remains. Relativistic thought is a game of comparisons and relations, not of absolute claims; for the relativist, the only absolute claim is that there are no absolute claims. The reason I bring up relativists is that they are the logical, or illogical, conclusion of the extremes of doubt/disbelief I previously mentioned.

If you know nothing, you are not wise; you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us; we are sure of many things that took years of debate among smart people to arrive at as settled conclusions. There was a time when many things we now accept were "not known", but they were observed with enough time and effort by brilliant people. That's why we have scientists, teachers, philosophers and journalists. The next time you find a "smart" person who is unsure of their beliefs, kindly encourage them to be less lazy and to challenge their absolutes; if they deny that the absolute could be found, then you aren't dealing with a "smart" person, you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute, or it fails to function in any logical framework. With enough thought, good data, and enough time to let things steep, you can find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you should improve (or a warning that you are being taken advantage of by a sophist), and that the truth is out there, not a reason to sequester yourself away in the comfortable, unsure hell that many live in till they die.

The beauty of absolute truth is that you can believe absolutes without understanding them in their entirety. I know gravity exists, but I don't fully know how it works; yet I can be absolutely certain it acts upon me, even if I only understand a part of it. People should know what they know, study it until they do, and not make sure claims beyond that until they have the prerequisite absolute claims to support the broader ones, since a claim is only as sure as the weakest of its presuppositions.

Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.