Smaug123 11 hours ago

That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .

nkrisc 11 hours ago | parent | next [-]

If the creators set the LLM in motion, then the creators sent the letter.

If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.

Smaug123 11 hours ago | parent | next [-]

I merely answered your question!

> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

The answer, by your definitions, is that the premise is false: the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.

bronson 10 hours ago | parent [-]

So the author sent spam that they're not interested in? That's terrible.

jdiff 10 hours ago | parent [-]

One additional bit of context: they provided guidelines and instructions specifically to send emails and verify their successful delivery, so that the "random act of kindness" could be properly reported and measured at the end of this experiment.

twoodfin 8 hours ago | parent [-]

I think the key misalignment here is whether the output of an appropriately prompted LLM can ever be considered an “act of kindness”.

mckn1ght 7 hours ago | parent [-]

At least in this case, it’s indeed quite Orwellian.

johnnyanmac 3 hours ago | parent | prev | next [-]

Rob Pike "set LLMs in motion" about as much as 90% of anyone who contributed to Google.

I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") and suddenly it's some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.

themafia 5 hours ago | parent | prev | next [-]

Additionally, since you understood the danger of doing such a thing, you were also negligent.

Filligree 5 hours ago | parent | prev [-]

A thank-you letter is hardly a horrible outcome.

LastTrain 4 hours ago | parent | next [-]

Nobody sent a thank you letter to anyone. A person started a program that sent unsolicited spam. Sending spam is obnoxious. Sending it in an unregulated manner to whoever is obnoxious and shitty.

da_grift_shift 5 hours ago | parent | prev | next [-]

So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.

fatata123 4 hours ago | parent | prev [-]

It’s not a thank you letter. It’s AI slop.

herval 5 hours ago | parent | prev | next [-]

did someone already tell Opus that Rob Pike hates it?

kenferry 11 hours ago | parent | prev | next [-]

Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?

They've clearly bought too much into AI hype if they thought telling the agent to "do good" would work. The result was obviously pissing Rob Pike the hell off. They should stop it.

worik 5 hours ago | parent | prev | next [-]

> The creators of Agent Village are just letting a bunch of the LLMs do what they want,

What a stupid, selfish and childish thing to do.

This technology is going to change the world, but people need to accept its limitations.

Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.

LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.

I hope the world survives this craziness!

aeve890 11 hours ago | parent | prev | next [-]

>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");

What a moronic waste of resources. Random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al, e.g. finding a cure for cancer, solving poverty, solving fusion.

Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.

Why do people still think software has any agency at all?

estimator7292 9 hours ago | parent | next [-]

Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.

Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.

Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.

We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.

The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.

parineum 8 hours ago | parent | next [-]

> Everybody knows LLMs are not alive and don't think, feel, want.

No, they don't.

There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.

> We use this kind of language as a shorthand because ...

You, not we. You're using the language of snake oil salesmen because they've made it commonplace.

When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.

trinsic2 6 hours ago | parent [-]

This is true. I personally know people that think AI agents have actual feelings and know more than humans.

It's fucking insanity.

CerryuDu 5 hours ago | parent | prev | next [-]

> Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.

To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.

What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.

> Everybody knows LLMs are not alive and don't think, feel, want.

What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"

Can't you see what a fucking LIE this is?

> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky

Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.

People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.

> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.

Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?

Because people in our direct circles show unmistakable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.

Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

cjamsonhn 4 hours ago | parent [-]

> Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

And to think they don't even have ad-driven business models yet.

CursedSilicon 8 hours ago | parent | prev [-]

>Everybody knows LLMs are not alive and don't think, feel, want

Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"

To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

devjam 3 hours ago | parent | next [-]

While I agree with your sentiment, the actual quote is subtly different, which changes the meaning:

"Think of how stupid the average person is, and realize half of them are stupider than that."

joquarky 8 hours ago | parent | prev [-]

> "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

That's not how Carlin's quote goes.

You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.

CursedSilicon 2 hours ago | parent [-]

That's why I used the phrase "to paraphrase"

You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.

raldi 8 hours ago | parent | prev | next [-]

Would you protest someone who said “Ants want sugar”?

GeoAtreides 7 hours ago | parent [-]

I always protest non sentients experiencing qualia /s

killerstorm 7 hours ago | parent | prev [-]

I think this experiment demonstrates that it has agency. OTOH you're just begging the question.

Trasmatta 11 hours ago | parent | prev [-]

> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.

JFC this makes me want to vomit

tavavex 3 hours ago | parent | next [-]

> Summarized by Claude Sonnet 4.5, so might contain inaccuracies. Updated 4 days ago.

These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what the people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.

CerryuDu 5 hours ago | parent | prev | next [-]

yeah, me too:

> while maintaining perfect awareness

"awareness" my ass.

Awful.
