neilv 6 days ago

> All in all, once an organization gets big enough, power does what power wants, and power wants what is good for them in the short term, regardless of what is good for the organization.

This has the ring of truth.

Has anyone solved this problem?

Is anyone trying to solve this problem? (Or is everyone in a position to work on the problem just playing the game?)

forshaper 5 days ago | parent | next [-]

On smaller scales.

I suspect alignment to long-term profits weirdly solves it at larger scales, but that kind of unbridled greed is surprisingly hard for large organizations anyway.

The thing power usually wants is the continuation of certain hierarchies, and singular long-term goals toward anything tend to disrupt that.

marcus_holmes 6 days ago | parent | prev | next [-]

Valve? Though I have no idea how they're doing with the anarchy-as-an-organisational-structure thing these days.

bluesroo 5 days ago | parent [-]

I remember reading about them in the 2010s, and also reading that there are de facto hierarchies within Valve for given projects, even if they aren't explicitly laid out.

neilv 5 days ago | parent [-]

It would be interesting if the de facto hierarchies arose entirely by bottom-up merit (not, say, approval from above), and were flexible and ephemeral, not self-perpetuating.

People could self-organize, on-demand, for a task, and structure whatever hierarchy was appropriate, based on somewhat optimal resource allocations for that task.

(Example: Person A might normally be the most experienced at facilitating the group's coordination, but A is providing key technical expertise for this task. Person B isn't on the critical path for this task, and has facilitating skills and interest in that role, so B volunteers for that role for the duration of the task.)

AnotherGoodName 6 days ago | parent | prev | next [-]

If I'm honest, I feel the issue is all the humans with human motivations.

An AI-run organisation might solve it?

neilv 6 days ago | parent [-]

So far, most of the "AI" adoption we're seeing since ChatGPT reflects some of those undesirable human motivations. (It's actually worse than I thought the baseline human motivations/intentions were.)

o1bf2k25n8g5 6 days ago | parent | prev | next [-]

>> power does what power wants

> Has anyone solved this problem?

You're asking if anyone has solved the problem... of human nature? I don't think it's at the top of most people's lists of action items.

> Is anyone trying to solve this problem?

Your nearest meditation center, I suppose.

benreesman 5 days ago | parent [-]

Human nature admits a spectrum of outcomes on this, and I'd argue that most humans are not in fact pathologically acquisitive and power obsessed. Most humans value high status, but healthy societies confer high status in ways decoupled from counterproductive Putinism. The people who attended the fifth Solvay Conference (that famous photo), who ran the Manhattan Project, who put men on the moon (or went) were all fabulously high status for good reasons, with incentives that served society rather than parasitizing it. Those people got to be admired and enjoy the privileges of high status without bankrupting the body politic for countless commas.

This Bezos-style hyperacquisition isn't exactly new, but it's not the constant norm it's currently made out to be: it's a societal failure mode with clear precedent, but by no means a constant, and it's not at all obvious that it's inevitable.

bluesroo 5 days ago | parent [-]

I'd agree that most humans are not pathological power seekers; however, I believe that's exactly why we end up with successful pathological power seekers.

As the world is learning with nukes, you cannot rely on the powerful for mercy. You can only rely on the powerful to grasp for more power, and the only way to stop them is to yourself be as strong as possible.

If a utopia ever exists, it will only be because of a stalemate arms race (see: no two nuclear powers have fought an open war with each other). Peaceful utopia is otherwise too easily disrupted by a single asshole with a big stick.

seer 6 days ago | parent | prev [-]

I wonder what happened with Frédéric Laloux's "Reinventing Organisations"? It seemed to have so much promise back when Ruby on Rails was a new thing and people were laughing at DHH's joke essay on the "Emotional Programmer"…

This all seems like a failure of incentives - the hard truth is that organisations that survive long enough all end up valuing only the survival of the organisation itself, and structure their incentives accordingly. But maybe there is a way to modify those incentives somewhat?

For _thousands_ of years, humanity thought monarchies were the only way of ruling successful states… but now they are almost gone, and people live much happier and more productive lives.

Maybe we can figure out a way to shape institutions so they have not only an "executive branch" but also other bodies that can govern it.

We kind of have this already with the CEO and the "board" sharing power; maybe there are one or two more power centers we could add that would ultimately prolong the life of an org?