arthurofbabylon a day ago

Agency. Anyone who has studied the humanities knows what an incredible proposal "agentic" AI is. In the natural world, agency is a consequence of death: by dying, the feedback loop closes in a powerful way. The notion of casual agency (I'm thinking of Jensen Huang's generative > agentic > robotic insistence) is bonkers. Some things cannot easily be speedrun.

(I did listen to a sizable portion of this podcast while making risotto (stir stir stir), and the thought occurred to me: "am I becoming more stupid by listening to these pundits?" More generally, I feel like our internet content (and meta content (and meta meta content)) is becoming far too voluminous, without appropriate quality controls. Maybe we need more internet death.)

whatevertrevor a day ago | parent | next [-]

> In the natural world, agency is a consequence of death: by dying, the feedback loop closes in a powerful way.

I don't follow. If we, in some distant future, find a way to make humans functionally immortal, does that magically remove our agency? Or do we not have agency to begin with?

If your position on the "free will" question is that it doesn't exist, then sure, I get it. But that seems incompatible with the death prerequisite you have put forward for it, because if free will doesn't exist then it's surely moot to talk about prerequisites anyway.

arthurofbabylon a day ago | parent [-]

When I think of the term "agency" I think of a feedback loop whereby an actor is aware of their effect and adjusts behavior to achieve desired effects. To be a useful agent, one must operate in a closed feedback loop; an open loop does not yield results.

Consider the distinction between probabilistic and deterministic reasoning. When you are dealing with a probabilistic method (eg, LLMs, most of the human experience), closing the feedback loop is absolutely critical. You don't really get anything if you don't close it, particularly as you apply a probabilistic process to a new domain.

For example, imagine that you learn to recognize something hot by hanging around a fire and getting burned, and you later encounter a kettle on a modern stove-top and have to learn a similar recognition. This time there is no open flame, so you have to adapt your model. This isn't a completely new lesson: the prior experience with the open flame is invoked by the new one, and this time you may react even faster to that sensation of discomfort. All of this is probabilistic; you aren't certain that either a fire or a kettle will burn you, but you use hints and context to guess what will happen. The element that ties all of this together is the fact of getting burned. Getting burned is the feedback loop closing. Next time you have a better model.

Skillful developers who use LLMs know this: they use tests, or they have a spec sheet they're trying to fulfill. In short, they inject a brief deterministic loop that acts as the conclusive agent. In the software developer's case it might be all tests passing; for some abstract project it might be the spec sheet being completely resolved. If the developer doesn't check in and close the loop, they'll be running the LLM forever. An LLM believes it can keep making the code better and better, because it lacks the agency to understand "good enough." (If the LLM could die, you can bet it would learn what "good enough" means.)
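
(To make that concrete, here's a rough sketch of such a loop, with the caveat that every name in it is an illustrative assumption: generate_patch stands in for whatever model client you use, and pytest for whatever deterministic check you have. The termination condition is the whole point: the tests passing is what closes the loop, not the model's own sense of "better.")

    # Hedged sketch: generate_patch() is a hypothetical stand-in for an
    # LLM call, and pytest stands in for any deterministic check. The
    # loop closes when the tests pass or the budget runs out, never on
    # the model's own judgment of "better."
    import subprocess

    def generate_patch(task: str, feedback: str) -> str:
        """Hypothetical LLM call; returns candidate source code."""
        raise NotImplementedError("wire up your model client here")

    def agent_loop(task: str, max_iterations: int = 5) -> bool:
        feedback = ""
        for _ in range(max_iterations):
            candidate = generate_patch(task, feedback)
            with open("solution.py", "w") as f:
                f.write(candidate)
            # Deterministic check: the tests either pass or they don't.
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # "good enough": the loop closes here
            feedback = result.stdout + result.stderr  # feed failures back in
        return False  # budget exhausted; the run ends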

Where does dying come in? Nature evolved numerous mechanisms to proliferate patterns, and while everyone pays attention to the productive ones (eg, birth), few pay attention to the destructive (eg, death). But the destructive ones are just as important as the productive ones, for they determine the direction of evolution. In terms of velocity you can think of productive mechanisms as speed and destructive mechanisms as direction. (Or in terms of force you can think of productive mechanisms as supplying the energy and destructive mechanisms as supplying the direction.) Many instances are birthed, and those that survive go on and participate in the next round. Dying is the closed feedback loop, shutting off possibilities and defining the bounds of the project.
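
(A toy sketch of that dynamic in code; everything here is an illustrative assumption rather than anything from biology. Births expand the population; death, the culling step, is what supplies direction.)

    # Hedged sketch of the speed/direction framing: birth supplies volume,
    # death supplies direction. All names (fitness, mutate) are toy
    # assumptions, not a reference implementation.
    import random

    def fitness(x: float) -> float:
        return -abs(x - 10.0)  # toy objective: closer to 10 is "fitter"

    def mutate(x: float) -> float:
        return x + random.gauss(0, 1.0)

    population = [random.uniform(0, 20) for _ in range(50)]
    for _ in range(100):
        # Productive mechanism (birth): every member produces offspring.
        offspring = [mutate(p) for p in population for _ in range(2)]
        # Destructive mechanism (death): only the fittest half survives.
        # The culling is what points the population toward the objective.
        population = sorted(offspring, key=fitness, reverse=True)[:50]

    print(max(population, key=fitness))  # converges near 10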

whatevertrevor 13 hours ago | parent [-]

I see your perspective: the inevitability of death acts as a forcing function that gives agents direction. But that's a much, much weaker claim than (emphasis mine):

> In the natural world, agency is a consequence of death: by dying, the feedback loop closes in a powerful way.

My original question was why agency could not exist without death, not why it would be hampered without it. For clarity, I'm coming at this from an analytic philosophy angle, not its more rhetorical counterpart, which I struggle to wrap my head around.

I don't really view death or evolution as a necessity for agency. Nebulous AGI predictions aside: if a self-aware, conscious, and intelligent being, capable of effecting consequential changes to its environment, becomes functionally immortal, it doesn't somehow lose its agency. I'd actually go further and say that losing the forcing function of inevitable death is the biggest freedom a species can aim for. Without that freedom, our agency is limited to solving problems of survival, in one form or another.

The existence of death is ultimately arbitrary and random, as random as our existence in the first place. The "direction" evolution gets from it is another random function layered on top, taking the random circumstances the soup of organic molecules lives in as another parameter. Only once this random inevitability is conquered can we truly shape our lives and environments in ways that are a true reflection of who we are. Only then are we genuinely free. And "agency" without freedom is impotent at best.

(Addendum: I know positing "immortality is good, actually" can trigger negative associations with "billionaires who want to cryopreserve themselves." That association has melded with the general romanticization of death in various philosophical and religious traditions going back millennia, further fueling distaste for pursuing the reversal of aging, and eventually the removal of death, as moral goals. While I personally have no plans (or means) to cryopreserve myself when I get old, I do believe it's a goal worth fighting for, one of the more important ones, alongside ensuring we have a planet to live on in the interim.)

arthurofbabylon 12 hours ago | parent [-]

I love the discussion — thank you.

Your comment makes me more bullish on death. Death isn't arbitrary, contrary to your claim: it is a direct expression of an entity in its environment; it epitomizes contextualization. (I argue that honoring context is the opposite of arbitrariness.)

Further, death encapsulates multiple layers of abstraction. When an entity dies, it dies on every level (eg, both instincts and socially learned heuristics). The death reaches deep down inside the hierarchy of its own form to eliminate possibilities. That is some seriously strong directionality; it's not like "taking your second left" or some other one-dimensional vector. Layers and layers of genes and learning are discarded. It is truly an incredibly powerful feedback-loop closure.

dist-epoch a day ago | parent | prev | next [-]

Models die too - the less agentic ones are out-competed by the more agentic ones.

Every AI lab brags about how "more agentic" its latest model is compared to the previous one and the competition, and everybody switches to the new model.

catlifeonmars 13 hours ago | parent [-]

Yes, but the point is that models must be acutely aware of their impending death to force the calculation of tradeoffs.

ngruhn a day ago | parent | prev [-]

I don't agree but I did laugh