vidarh 3 days ago

> If you think you have no agency why do anything at all?

I addressed that in my comment, but let me address it again since it's the most frequent objection to this:

> You could choose to stop doing anything.

In the mechanical sense that an "IF ... THEN ... ELSE" statement makes the program "choose" which branch to take, you're right, yes I could.

But then I'd also suffer the consequences.

As I pointed out, if I were to lie down in despair and not go to work, I wouldn't keep getting paid just because I had no agency over the "choice" of whether to lie down and sulk or get up and go to work.

But for "agency" to have any meaning, we can't interpret choice that way. If we don't have agency, then while I may have an artificial "choice", that "choice" can't change the outcome.

In that case every "choice" I make is just as deterministic as that IF ... THEN ... ELSE: The branch taken depends on the state of the system.
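
To make the deterministic-branch point concrete, here is a minimal sketch (Python; the names and the threshold are purely illustrative, not anything anyone actually runs): the "choice" is a pure function of the state passed in, so the same state always takes the same branch.

    # Illustrative sketch: a "choice" that is a pure function of system state.
    def choose(despair_level: float, threshold: float = 0.5) -> str:
        # IF ... THEN ... ELSE: the branch taken depends only on the inputs.
        if despair_level > threshold:
            return "lie down and sulk"  # ...and the paychecks stop arriving
        return "get up and go to work"

    # "Rewind the movie and press play again": same state, same outcome.
    assert choose(0.7) == choose(0.7)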

> Or you could decide that your partial knowledge (unrealised futures) gives you agency.

> It's a matter of metacognition. Being able to compute possible futures gives you artificial agency at some level. At a meta level even that compute can be deterministic, but you should not care.

What you are describing is compatibilism: The school of thought on "free will" that effectively says that free will is real, but is also an illusion.

Personally I think that is basically brushing the issue under the carpet, though I also think it is the only definition of free will that is logically consistent.

I do agree with the point that you mostly should not care:

You need to mostly act as if every "choice" you make does matter, because whether or not you have control over it, if you do lie down in despair, your paychecks will stop arriving.

Cause and effect does not care whether or not you have agency.

Where I take issue with compatibilism is that there are considerable differences in how you should "choose" to act if you consider agency to be "artificial" or an illusion (compatibilism) or non-existent (for this purpose these are pretty much equivalent) vs. if you consider it to be real.

E.g. we blame, reward, and otherwise treat people differently based on their perceived agency all the time, and a lot of that treatment is much harder to morally justify if you don't believe in actual agency. Real harm happens to people because we assume they have agency. If that agency isn't real, it doesn't matter that we have an illusion of it - in that case a lot of that harm is immoral.

To tie it back to the thread: whether agency is not real at all, or just significantly constrained by circumstance, changes what we should expect ourselves and others to be able to overcome.

E.g. it makes no logical sense to feel bad about past choices, because they couldn't have gone differently (you can still feel bad about the effects, and commit to "choosing" differently in the future). You also then shouldn't feel bad about not having achieved what you wanted if you believe the context you live within has either total control over the actions you take, or "just" a significant degree of influence over them.

And so we're back to my original argument: for most people, acting as if they have agency by "choosing" to bet on making the surrounding conditions more amenable to good outcomes is a better bet than assuming they have agency, or enough agency, to achieve a different outcome regardless of those conditions.

But again: The fact that I believe we have no agency does not mean I won't try to do things that will get me better outcomes. I just don't assume I could have acted any other way in a given instance than I end up acting in that instance, any more than a movie will change if you rewind it and press play again.

aatd86 3 days ago

I think we agree. The subtlety is that it is about closed and open systems. Your partial knowledge makes things a locally open system. You are processing new data and then acting accordingly. That's dynamic agency. The better you can get knowledge, the better you can influence the next step.

That realization happens at the meta level and gives you agency in your actual universe. Even though at the meta-meta level, that realization itself can be deterministic.

Not to be confused with someone who would be external to the system and could watch your life as if it were a video tape, being omniscient. They would not have agency in your system, as they can't interact with you and for them everything is predetermined; they could compute the next state of the system from the past state. You can't, but the system is impredicative enough that by recognizing this, by self-consciousness, the system steers itself toward its own favored state. And in fact, the more knowledge you have, the less agency, because the fewer the choices.
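
As a toy illustration (Python again; the transition rule and the names are made up for the sake of the sketch): the external observer, knowing the rule and the initial state, can unroll the whole tape in advance, while a participant inside the system can't step outside its own computation to do the same.

    # Illustrative sketch: an omniscient observer of a deterministic system.
    def next_state(state: int) -> int:
        # Some fixed transition rule; the specifics are arbitrary.
        return (state * 31 + 7) % 1000

    def unroll(initial: int, steps: int) -> list[int]:
        # The observer can compute every future state from the past state.
        states = [initial]
        for _ in range(steps):
            states.append(next_state(states[-1]))
        return states

    # Same initial conditions, same tape, every time it is "replayed".
    assert unroll(42, 10) == unroll(42, 10)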

The meta-level person doesn't just observe the video. It observes the fact that people realize they are characters in a video, and how that realization affects the choices they make, given the initial conditions.

Should you have regrets in life? You had the choice of knowing more and being more capable, so it makes sense. Could things have happened differently, given that they happened the way they did and that back then you obviously wanted them to be different and wish they had been? Or did they happen because the conditions were set for them to happen?

Basically the question is whether we control our odds. Doing anything is controlling some odds, so I'd say yes. It requires increased self-consciousness: being able to imagine what is not there. Animals seem to have that capacity, especially humans. We can make sure that certain things don't happen by virtue of our own existence. This is our agency. Are we biased by construction toward the best odds if we can recognize them? Yes. Are there really things with the exact same odds in the system? Wouldn't that block us? Probably. But the system is already made in a way that it wouldn't happen, by virtue of having (at least local) asymmetries. In practice we wouldn't be blocked. Someone perfectly symmetrical, in a system that also is, perhaps would be. But then there might not be any two equally most-desired options, so no. Unlikely.

vidarh 3 days ago

So again, this is basically the compatibilist stance. To me, it rings hollow because it glosses over whether you actually made a decision in a way that is qualitatively different from how a clock "chooses" to move the minute hand one minute further.

And so I would answer your question about regrets: I don't believe you had that choice. You couldn't have chosen differently given the same inputs and state. Your "choice" followed from the preceding state with the same predictability as a well-functioning clock.

aatd86 2 days ago

Interesting thought exercise, let me try something:

Only if we can predict everything ourselves do we not have a choice. But since we don't know what we don't know, and the unknown may occur at any moment (a black swan), we can only act on probabilities.

Then what we control is our level of appetite for risk of an undesired outcome.

That risk is not data that we can reliably measure and assert. So it creates randomness/stochasticity in the system.

That's why I was speaking of open vs closed system.

Randomness provides agency.

That randomness is subjective. You may well still be predictable to an omniscient person. But that person would not have any agency. You do, as long as your choice does not rely upon knowledge.

I guess that's why human society is weird in a sense. People act from beliefs they have no certitude about.

A clock does not do that; there is no metacognitive process to influence an action toward a yet-unrealised future. Seems incomparable?

But yes, other than that, there is no really accurate way to deny compatibilism, I'm afraid.

In fact, true agency is the attempt to eliminate choice.

It is like being in a labyrinth where the walls are moving.

The clock sits in the labyrinth and gets crushed by a moving wall.

An agentic person detects the movements and recalibrates.