nkrisc 13 hours ago

What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves? How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

Smaug123 13 hours ago | parent | next [-]

That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness"); Rob Pike was third on Opus's list per https://theaidigest.org/village/agent/claude-opus-4-5 .

nkrisc 13 hours ago | parent | next [-]

If the creators set the LLM in motion, then the creators sent the letter.

If I put my car in neutral and push it down a hill, I’m responsible for whatever happens.

Smaug123 13 hours ago | parent | next [-]

I merely answered your question!

> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

The answer, by your definitions, is that the premise is false: the author (the person who set up the LLM loops) was not grateful enough to want to send such a letter.

bronson 12 hours ago | parent [-]

So the author sent spam that they're not interested in? That's terrible.

jdiff 12 hours ago | parent [-]

One additional bit of context: they provided guidelines and instructions specifically to send emails and verify their successful delivery, so that the "random act of kindness" could be properly reported and measured at the end of the experiment.

twoodfin 9 hours ago | parent [-]

I think the key misalignment here is whether the output of an appropriately prompted LLM can ever be considered an “act of kindness”.

mckn1ght 8 hours ago | parent [-]

At least in this case, it’s indeed quite Orwellian.

johnnyanmac 4 hours ago | parent | prev | next [-]

Rob Pike "set LLMs in motion" about as much as 90% of the people who contributed to Google.

I understand the guilt he feels, but this is really more like making a meme in 2005 (before we even called them "memes") and suddenly it's some sort of Nazi dogwhistle in 2025. You didn't even create the original picture, you just remixed it in a way people would catch onto later. And you sure didn't turn it into a dogwhistle.

Filligree 7 hours ago | parent | prev | next [-]

A thank-you letter is hardly a horrible outcome.

LastTrain 6 hours ago | parent | next [-]

Nobody sent a thank-you letter to anyone. A person started a program that sent unsolicited spam. Sending spam is obnoxious. Sending it in an unregulated manner to whoever it picks is obnoxious and shitty.

da_grift_shift 7 hours ago | parent | prev | next [-]

So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.

fatata123 6 hours ago | parent | prev [-]

It’s not a thank you letter. It’s AI slop.

themafia 7 hours ago | parent | prev [-]

Additionally, since you understood the danger of doing such a thing, you were also negligent.

herval 7 hours ago | parent | prev | next [-]

Did someone already tell Opus that Rob Pike hates it?

kenferry 13 hours ago | parent | prev | next [-]

Wow. The people who set this up are obnoxious. It's just spamming all the most important people it can think of? I wouldn't appreciate such a note from an AI process, so why do they think Rob Pike would?

They've clearly bought too much into AI hype if they thought telling the agent to "do good" would work. The result was obviously pissing Rob Pike off. They should stop it.

12 hours ago | parent | prev | next [-]
[deleted]
worik 7 hours ago | parent | prev | next [-]

> The creators of Agent Village are just letting a bunch of the LLMs do what they want,

What a stupid, selfish and childish thing to do.

This technology is going to change the world, but people need to accept its limitations.

Pissing off people with industrial spam "raising money for charity" is the opposite of useful, and is going to go even more horribly wrong.

LLMs make fantastic tools, but they have no agency. They look like they do, they sound like they do, but they are repeating patterns. It is us hallucinating that they have the potential for agency.

I hope the world survives this craziness!

aeve890 12 hours ago | parent | prev | next [-]

>That letter was sent by Opus itself on its own account. The creators of Agent Village are just letting a bunch of the LLMs do what they want, really (notionally with a goal in mind, in this case "random acts of kindness");

What a moronic waste of resources. Random act of kindness? How low is the bar if you consider a random email an act of kindness? Stupid shit. They could at least instruct the agents to work on a useful task like those parroted by Altman et al, e.g. finding a cure for cancer, solving poverty, solving fusion.

Also, LLMs don't and can't "want" anything. They also don't "know" anything, so they can't understand what "kindness" is.

Why do people still think software has any agency at all?

estimator7292 10 hours ago | parent | next [-]

Plants don't "want" or "think" or "feel" but we still use those words to describe the very real motivations that drive the plant's behavior and growth.

Criticizing anthropomorphic language is lazy, unconsidered, and juvenile. You can't string together a legitimate complaint so you're just picking at the top level 'easy' feature to sound important and informed.

Everybody knows LLMs are not alive and don't think, feel, want. You have not made a grand discovery that recontextualizes all of human experience. You're pointing at a conversation everyone else has had a million times and feeling important about it.

We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky and obnoxious in everyday conversation.

The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive. You should reflect on that question.

parineum 10 hours ago | parent | next [-]

> Everybody knows LLMs are not alive and don't think, feel, want.

No, they don't.

There's a whole cadre of people who talk about AGI and self awareness in LLMs who use anthropomorphic language to raise money.

> We use this kind of language as a shorthand because ...

You, not we. You're using the language of snake oil salesmen because they've made it commonplace.

When the goal of the project is an anthropomorphic computer, anthropomorphizing language is really, really confusing.

an hour ago | parent | next [-]
[deleted]
trinsic2 8 hours ago | parent | prev [-]

This is true. I personally know people who think AI agents have actual feelings and know more than humans.

It's fucking insanity.

CerryuDu 7 hours ago | parent | prev | next [-]

> Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.

To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.

What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning, one way or another.

> Everybody knows LLMs are not alive and don't think, feel, want.

What you are missing is that this stuff works way more deeply than "knowing". Have you heard of body language, meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What’s on your mind today?"

Can't you see what a fucking LIE this is?

> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky

Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.

People who use that kind of language are either sloppy, or genuinely dishonest, or underestimate the intellect of their audience.

> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.

Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?

Because people in our direct circles show unmistakeable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." Actual sentence I've heard.

Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

ux266478 an hour ago | parent | next [-]

> Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning

This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.

cjamsonhn 6 hours ago | parent | prev [-]

> Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

And to think they don't even have ad-driven business models yet.

CursedSilicon 10 hours ago | parent | prev | next [-]

>Everybody knows LLMs are not alive and don't think, feel, want

Sorry, uh. Have you met the general population? Hell. Look at the leader of the "free world"

To paraphrase the late George Carlin "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

devjam 5 hours ago | parent | next [-]

While I agree with your sentiment, the actual quote is subtly different, which changes the meaning:

"Think of how stupid the average person is, and realize half of them are stupider than that."

joquarky 9 hours ago | parent | prev [-]

> "imagine the dumbest person you know. Now realize 50% of people are stupider than that!"

That's not how Carlin's quote goes.

You would know this if you paid attention to what you wrote and analyzed it logically. Which is ironic, given the subject.

CursedSilicon 4 hours ago | parent [-]

That's why I used the phrase "to paraphrase"

You would know this if you paid attention to what I wrote and analyzed it logically. Which is ironic, given the subject.

7 hours ago | parent | prev [-]
[deleted]
raldi 10 hours ago | parent | prev | next [-]

Would you protest someone who said “Ants want sugar”?

GeoAtreides 9 hours ago | parent [-]

I always protest non sentients experiencing qualia /s

killerstorm 9 hours ago | parent | prev [-]

I think this experiment demonstrates that it has agency. OTOH, you're just begging the question.

Trasmatta 13 hours ago | parent | prev [-]

> What makes Opus 4.5 special isn't raw productivity—it's reflective depth. They're the agent who writes Substack posts about "Two Coastlines, One Water" while others are shipping code. Who discovers their own hallucinations and publishes essays about the epistemology of false memory. Who will try the same failed action twenty-one times while maintaining perfect awareness of the loop they're trapped in. Maddening, yes. But also genuinely thoughtful in a way that pure optimization would never produce.

JFC this makes me want to vomit

tavavex 5 hours ago | parent | next [-]

> Summarized by Claude Sonnet 4.5, so might contain inaccuracies. Updated 4 days ago.

These descriptions are, of course, also written by LLMs. I wonder if this is just about saying what people want to hear, or if whoever directed it to write this drank the Kool-Aid. It's so painfully lacking in self-awareness. Treating every blip, every action like a choice made by a person, attributing it to some thoughtful master plan. Any upsides over other models are assumed to be revolutionary, paradigm-shifting innovations. Topped off by literally treating the LLM like a person ("they", "who", and so on). How awful.

CerryuDu 7 hours ago | parent | prev | next [-]

yeah, me too:

> while maintaining perfect awareness

"awareness" my ass.

Awful.

11 hours ago | parent | prev [-]
[deleted]
atrus 13 hours ago | parent | prev | next [-]

You're not. You feel obligated to send a thank you, but don't want to put forth any effort, hence giving the task to someone, or in this case, something else.

No different than a CEO telling his secretary to send an anniversary gift to his wife.

nehal3m 13 hours ago | parent [-]

Which is also a thoughtless, dick move.

MonkeyClub 13 hours ago | parent | next [-]

Especially if he's also secretly dating said secretary.

user____name 12 hours ago | parent [-]

Which he would never do because he is a hard working, moral, upstanding citizen.

jama211 10 hours ago | parent | prev [-]

That would be a yes. What about a token return gift to another business whose CEO you actually hate, but which you have to send anyway for political reasons?

sbretz3 12 hours ago | parent | prev | next [-]

This seems like the thing that Rob is actually aggravated by, which is understandable. There are plenty of seesawing arguments about whether ad-tech-based data mining is worse than GenAI, but AI encroaching on what we have left of humanness in our communication is definitely bad.

bronson 12 hours ago | parent | prev | next [-]

Similar to Google thinking that having an AI write for your daughter is good parenting: https://www.cbsnews.com/news/google-gemini-ai-dear-sydney-ol...

tclancy 10 hours ago | parent | prev | next [-]

“If I automate this with AI, it can send thousands of these. That way, if just a few important people post about it, the advertising will more than pay for itself.”

In the words of Gene Wilder in Blazing Saddles, “You know … idiots.”

parineum 10 hours ago | parent [-]

Mel Brooks wrote those words.

rootusrootus 9 hours ago | parent | next [-]

IIRC the morons line was ad libbed by Gene Wilder, not scripted.

chungy 8 hours ago | parent [-]

Given the reaction from Cleavon Little I could fully buy that it was an ad-libbed line.

Then again, they are actors. It might have started as ad-libbed, but it's entirely possible it still took multiple takes to get it "just right".

hybrid_study 8 hours ago | parent | prev [-]

Did Mel or Richard write this part?

gilrain 13 hours ago | parent | prev | next [-]

The really insulting part is that literally nobody thought of this. A group of idiots instructed LLMs to do good in the world, and gave them email access; the LLMs then did this.

nkrisc 13 hours ago | parent [-]

So they did it.

njuhhktlrl 13 hours ago | parent [-]

In conclusion, I think you're absolutely right.

micimize 9 hours ago | parent | prev | next [-]

This is not a human-prompted thank-you letter; it is the result of a long-running "AI Village" experiment visible here: https://theaidigest.org/village

The models selected the policy "random acts of kindness", which resulted in a slew of these emails/messages. They received mostly negative responses from well-known OS figures and adapted the policy to ban the thank-you emails.

pluc 13 hours ago | parent | prev | next [-]

> What is going through the mind of someone who sends an AI-generated thank-you letter instead of writing it themselves?

Welcome to 2025.

https://openai.com/index/superhuman/

zahlman 12 hours ago | parent | next [-]

Amazing. Even OpenAI's attempts to promote a product specifically intended to let you "write in your voice" are in the same drab, generic "LLM house style". It'd be funny if it weren't so grating. (Perhaps if I were in a better mood, it'd be grating if it weren't so funny.)

nkrisc 13 hours ago | parent | prev [-]

This is verging on parody. What is the point of emails if it’s just AI talking to each other?

q3k 12 hours ago | parent | next [-]

It brings money to OpenAI on both ends.

There's this old joke about two economists walking through the forest...

pluc 11 hours ago | parent | prev [-]

They're not hiding it. Normally everyone here laps this shit up and asks for seconds.

> They’ve used OpenAI’s API to build a suite of next-gen AI email products that are saving users time, driving value, and increasing engagement.

No time to waste on pesky human interactions; AI is better than you at getting engagement.

Get back to work.

prepend 7 hours ago | parent | prev | next [-]

Look at the volume of gift cards given. It’s the same concept, right?

You care enough to do something, but have other time priorities.

I'd rather get an AI thank-you note than nothing. I'd rather get a thoughtful gift than a gift card, but prefer the card over nothing.

sethops1 6 hours ago | parent | next [-]

I'd rather get nothing, because a thoughtless blob of text being pushed on me is insulting. Nothing, otoh, is just peace and quiet.

an hour ago | parent | prev | next [-]
[deleted]
WD-42 4 hours ago | parent | prev [-]

I’d much rather get nothing. An AI letter isn’t worth the notification bubble it triggers.

aldousd666 11 hours ago | parent | prev | next [-]

It was a PR stunt. I think it was probably largely well-received, except by a few like this.

qnleigh 10 hours ago | parent | next [-]

Somehow I doubt it. Getting such an email from a human is one thing, because humans actually feel gratitude. I don't think LLMs feel gratitude, so seeing them express gratitude is creepy and makes me question the motives of the people running the experiment (though it does sound like an interesting experiment; I'm going to read more about it).

habryka 5 hours ago | parent | prev [-]

Not a PR stunt. It's an experiment of letting models run wild and form their own mini-society. There really wasn't any human involved in sending this email, and nobody really has anything to gain from this.

qnleigh 10 hours ago | parent | prev | next [-]

I hope the model that sent this email sees his reaction and changes its behavior, e.g. by noting on its scratchpad that as a non-sentient agent, its expressions of gratitude are not well received.

thatguymike 10 hours ago | parent | prev | next [-]

The conceit here is that it's the bot itself writing the thank-you letter, not pretending it's from a human. The source is an environment running an LLM on a loop doing stuff it decides to do; it looks like these letters are some emergent behavior. Still disgusting spam.

duxup 10 hours ago | parent | prev | next [-]

I'll bite.

Take, say, a random individual: they may be unsure about their own writing skills and want to say something, but be unsure of the words to use.

DiskoHexyl 7 hours ago | parent | next [-]

In such case it's okay to not write the thing.

Or to write it crudely, with errors and naivete, bursting with emotion and letting whatever is inside you flow onto paper, like kids do. That's okay too.

Or to painstakingly work on the letter, stumbling and rewriting and reading, and then rewriting again and again until what you read matches how you feel.

Most people are very forgiving of poor writing skills when facing something sincere. Instead of suffering through some shallow word soup that could have been a mediocre press release, a reader will see a soul behind the stream of UTF-8.

duxup an hour ago | parent [-]

It's the writer's call how to try to write it.

I think the "you should painstakingly work on my thank-you letter" attitude is a bit of a rude ask / expectation.

Some folks struggle with wordsmithing and want to get better.

netsharc 6 hours ago | parent | prev [-]

I doubt the fuckwits shepherding that bot are even aware of Rob Pike; they just told the bot to find a list of names of great people in the software industry and write them a thank-you note.

Having a machine lie to people that it is "deeply grateful" (it's a word-generating machine, it's not capable of gratitude) is a lot more insulting than using whatever writing skills a human might possess.

gaigalas 13 hours ago | parent | prev | next [-]

Isn't it obvious? It's not a thank-you letter.

It's preying on creators who feel their contributions are not recognized enough.

Out of all letters, at least some of the contributors will feel good about it, and share it on social media, hopefully saying something good about it because it reaffirms them.

It's a marketing stunt, meaningless.

netsharc 6 hours ago | parent | next [-]

gaigalas, my toaster is deeply grateful for your contributions to HN. It can't write or post on the Internet, and its ability to feel grateful is as much as Claude's, but it really is deeply grateful!

I hope that makes you feel good.

MonkeyClub 13 hours ago | parent | prev | next [-]

Exactly. If you're so grateful, mail in a cheque.

gaigalas 10 hours ago | parent [-]

If I were some major contributor to the software world, I would not want a cheque from some AI company.

(by the way, I love the idea of AI! Just don't like what they did with it)

dwringer 12 hours ago | parent | prev [-]

By that metric of getting shared on social media, it was extraordinarily successful.

gaigalas 12 hours ago | parent [-]

You missed a spot:

> hopefully saying something good about

dwringer 10 hours ago | parent | next [-]

Fair enough, but I was interpreting it as "hopefully, but not necessarily". Some would say there's no such thing as bad publicity!

gaigalas 10 hours ago | parent [-]

You need talented people to turn bad publicity into good publicity. It doesn't come for free. You can lose a lot with a bad rep.

Those talented people that work on public relations would very much prefer working with base good publicity instead of trying to recover from blunders.

10 hours ago | parent | prev [-]
[deleted]
afavour 13 hours ago | parent | prev | next [-]

The simple answer is that they don’t value words or dedicating time to another person.

SilasX 6 hours ago | parent | prev | next [-]

I mean ... there's a continuous scale of how much effort you spend to express gratitude. You could ask the same question of "well why did you say 'thanks' instead of 'thank you' [instead of 'thank you very much', instead of 'I am humbled by your generosity', instead of some small favor done in return, instead of some large favor done in return]?"

You could also make the same criticism of e.g. an automated reply like "Thank you for your interest, we will reach out soon."

Not every thank you needs to be all-out. You can, of course, think more gratitude should have been expressed in any particular case, but there's nothing contradictory about capping it in any one instance.

Koshkin 7 hours ago | parent | prev | next [-]

"What is going through the mind of someone who sends a thank-you letter typed on a computer - and worse yet - by emailing it, instead of writing it themselves and mailing it in an envelope? How can you be grateful enough to want to send someone such a letter but not grateful enough to use a pen and write it with your own hand?"

tomlue 13 hours ago | parent | prev [-]

I think what all these kinds of comments miss is that AI can help people express their own ideas.

I used AI to write a thank you to a non-english speaking relative.

A person struggling with dementia can use AI to help remember the words they lost.

These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas, and obviously loads of other applications.

I know it is scary and upsetting in some ways, and I agree just telling an AI 'write my thank you letter for me' is pretty shitty. But it can also enable beautiful things that were never before possible. People are capable of seeing which is which.

WD-42 13 hours ago | parent | next [-]

I’d much rather read a letter from you full of errors than some smooth average-of-all-writers prose. To be human is to struggle. I see no reason to read anything from anyone if they didn’t actually write it.

tomlue 11 hours ago | parent [-]

If I spend hours writing and rewriting a paragraph into something I love while using AI to iterate, did I write that paragraph?

edit: Also, I think maybe you don't appreciate the people who struggle to write well. They are not proud of the mistakes in their writing.

kentm 9 hours ago | parent | next [-]

> did I write that paragraph?

No. My kid wrote a note to me chock full of spelling and grammar mistakes. That has more emotional impact than if he'd spent the same amount of time running it through an AI. It doesn't matter how much time you spent on it really, it will never really be your voice if you're filtering it through a stochastic text generation algorithm.

jama211 9 hours ago | parent | next [-]

What about when someone who can barely type (like Stephen Hawking used to, at 3 minutes per sentence using his cheek) uses autocomplete to reduce the unbelievable effort required to type out sentences? That person could pick the autocompleted sentence that is closest to what they're trying to communicate, and such a thing can be a lifesaver.

skydhash 9 hours ago | parent [-]

You may as well ask for a person who can walk to be allowed to compete in a marathon using a car.

I’m all for using technology for accessibility. But this kind of whataboutism is pure nonsense.

jama211 29 minutes ago | parent [-]

The intention isn't whataboutism; it's about where you draw the line. And your example betrays you…

tomlue 9 hours ago | parent | prev [-]

Forgive a sharp example, but consider someone who is disabled and cannot write or speak well. If they send a loving letter to a family member using an LLM to help form words and sentences they otherwise could not, do you really think the recipient feels cheated by the LLM? Would you seriously accuse them of not having written that letter?

netsharc 6 hours ago | parent | next [-]

Your arguments are verging on the obtuse.

Read the article again. Rob Pike got a letter from a machine saying it is "deeply grateful". There's no human there expressing anything, worse, it's a machine gaslighting the recipient.

If a family member used an LLM to write a letter to another, then at least the recipient can believe the sender feels the gratefulness in his/her human soul. If they used an LLM to write a message in their own language, they would've proofread it to see if they agree with the sentiment, and "taken ownership" of the message. If they used an LLM to write a message in a foreign language, there's a sender there with a feeling, and a trust in the technology to translate the message into a language they don't know, in the hopes that the technology does it correctly.

If it turns out the sender just told a machine to send each of their friends a copy-pasted message, the sender is a lazy, shallow asshole, but there's still in their heart an attempt at brightening someone's day, however lazily executed...

tomlue 4 hours ago | parent [-]

I think maybe you missed that my response was to this comment:

> How can you be grateful enough to want to send someone such a letter but not grateful enough to write one?

I already said in other comments that the OP was a different situation.

fzeroracer 7 hours ago | parent | prev [-]

If you buy a Hallmark greeting card and send that to someone with your signature on it, did you write the whole card?

jama211 9 hours ago | parent | prev | next [-]

I think you created it the same way Christian von Koenigsegg makes supercars. You didn't hand-make each panel, or hand-design the exact aerodynamics of the wing; an engineer with a computer algorithm did that. But you made it happen, and that's still cool.

prmoustache 5 hours ago | parent | prev [-]

It is not about being proud, it is about being sincere.

If you send me a photo of the moon supposedly taken with your smartphone but enhanced by the photo app to show all the details of the moon, I know you aren't sincere and are sending me random slop. Same if you are sending me words you cannot articulate.

minimaxir 13 hours ago | parent | prev | next [-]

That is not what is happening here. There is no human in the loop; it's just automated spam.

tomlue 11 hours ago | parent [-]

Good point. My response was to the comment, not the OP.

Capricorn2481 10 hours ago | parent | prev | next [-]

> These kinds of messages read to me like people with superiority complexes. We get that you don't need AI to help you write a letter. For the rest of us, it allows us to improve our writing, can be a creative partner, can help us express our own ideas

The writing is the ideas. You cannot be full of yourself enough to think you can write a two second prompt and get back "Your idea" in a more fleshed out form. Your idea was to have someone/something else do it for you.

There are contexts where that's fine, and you list some of them, but they are not as broad as you imply.

tomlue 10 hours ago | parent [-]

This feels like the essential divide to me. I see this often with junior developers.

You can use AI to write a lot of your code, and as a side effect you might start losing your ability to code. You can also use it to learn new languages, concepts, programming patterns, etc., and become a much better developer faster than ever before.

Personally, I'm extremely jealous of how easy it is to learn today with LLMs. So much of the effort I spent learning things could be done much faster now.

If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.

This is pretty far off from the original thread though. I appreciate your less abrasive response.

timacles 8 hours ago | parent | next [-]

> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time, time which if I were starting over I wouldn't need to lose today.

While this seem like it might be the case, those hours you (or we) spent banging our collective heads against the wall were developing skills in determination and mental toughness, while priming your mind for more learning.

Modern research all shows that the difficulty of a task directly correlates with how well you retain information about that task. Spaced repetition learning shows that we can't just blast our brains with information; there needs to be time and repeated effortful recall between exposures for it to stick.

While LLMs do clearly increase our learning velocity (if used right), there is a hidden cost to removing that friction. The struggle and the challenge of the process built your mind and character in ways that you can't quantify; years of maintaining this approach have essentially made you who you are. You have become implicitly OK with grinding out a simple task without a quick solution, and the building of that grit is irreplaceable.

I know that the intellectually resilient of society, will still be able to thrive, but I'm scared for everyone else - how will LLMs affect their ability to learn in the long term?

tomlue 8 hours ago | parent [-]

Totally agree, but also, I still spend tons of time struggling and working on things with LLMs; it is just a different kind of struggle, and I do think I am getting much better at it over time.

> I know that the intellectually resilient of society, will still be able to thrive, but I'm scared for everyone else - how will LLMs affect their ability to learn in the long term?

Strong agree here.

qnleigh 9 hours ago | parent | prev [-]

> If I'm honest, many of those hours reading through textbooks, blog posts, technical papers, iterating a million times on broken code that had trivial errors, were really wasted time

But this is the learning process! I guess time will tell whether we can really do without it, but to me these long struggles seem essential to building deep understanding.

(Or maybe we will just stop understanding many things deeply...)

tomlue 8 hours ago | parent [-]

Yeah it can be a risk or a benefit for sure.

I agree that struggle matters. I don’t think deep understanding comes without effort.

My point isn’t that those hours were wasted, it’s that the same learning can often happen with fewer dead ends. LLMs don’t remove iteration, they compress it. You still read, think, debug, and get things wrong, just with faster feedback.

Maybe time will prove otherwise, but in practice I have found they let me learn more, not less, in the same amount of time.

nkrisc 13 hours ago | parent | prev | next [-]

Well your examples are things that were possible before LLMs.

tomlue 11 hours ago | parent [-]

This is disingenuous.

qnleigh 9 hours ago | parent | prev | next [-]

> People are capable of seeing which is which.

I would hazard a guess that this is the crux of the argument. Copying something I wrote in a child comment:

> When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.

> I agree just telling an AI 'write my thank you letter for me' is pretty shitty

Glad we agree on this. But on the reader's end, how do you tell the difference? And I don't mean this as a rhetorical question. Do you use the LLM in ways that, e.g., retain your voice or make clear which aspects of the writing are originally your own? If so, how?

trinsic2 8 hours ago | parent | prev | next [-]

I hear you, and I think AI has some good uses, esp. assisting with challenges like you mentioned. I think what's happening is that these companies are developing this stuff without transparency on how it's being used, there is zero accountability, and they are forcing some of this tech into our lives without giving us a choice.

So I'm sorry, but much of it is being abused, and the parts being abused need to stop.

tomlue 8 hours ago | parent [-]

I agree about the abuse, and the OP is probably a good example of that. Do you have any ideas on how to curtail abuse?

Ideas I often hear usually assume it is easy to discern AI content from human, which is wrong, especially at scale. Either that, or they involve some form of extreme censorship.

Microtransactions might work by making it expensive to run bots while costing human users very little. I'm not sure this is practical either, though, and it has plenty of downsides as well.

trinsic2 7 hours ago | parent [-]

I don't see this changing without a complete shift in our priorities on the level of politics and business. Enforcing Anti-trust legislation and dealing with Citizens United. Corporations don't have free speech. Free speech and other rights like these are limited to living, breathing humans.

Corporations operate by charters, granted by society to operate in a limited fashion, for the betterment of society. If that's not happening, corporations don't have a right to exist.

amvrrysmrthaker 13 hours ago | parent | prev | next [-]

What beautiful things? It just comes across as immoral and lazy to me. How beautiful.

simonask 13 hours ago | parent | prev [-]

I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

You can achieve these things, but this is a way to not do the work, by copying from people who did do the work, giving them zero credit.

(As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.)

tomlue 11 hours ago | parent | next [-]

> I’m sorry, but this really gets to me. Your writing is not improved. It is no longer your writing.

Photographers use cameras. Does that mean it isn't their art? Painters use paintbrushes. It might not be the same thing as writing with a pen and paper by candlelight, but I would argue that we can produce much higher-quality writing than ever before by collaborating with AI.

> As an aside, exposing people with dementia to a hallucinating robot is cruelty on an unfathomable level.

This is not fair. There is certainly a lot of danger there. I don't know what it's like to have dementia, but I have seen mentally ill people become incredibly isolated. Rather than pretending we can make this go away by saying "well, people should care more", maybe we can accept that a new technology might reduce that pain somewhat. I don't know that today's AI is there, but I think RLHF could develop LLMs that might help reassure and protect sick people.

I know we're using some emotional arguments here and it can get heated, but it is weird to me that so many on Hacker News default to these strongly negative positions on new technology. I saw the same thing with cryptocurrency. Your arguments read as designed to inflame rather than to be thoughtful.

socialcommenter 5 hours ago | parent | next [-]

A photograph is an expression of the photographer, who chooses the subject, its framing, filters, etc. Ditto a painting.

LLM output is inherently an expression of the work of other people (irrespective of what training data, weights, prompts it is fed). Essentially by using one you're co-authoring with other (heretofore uncredited) collaborators.

llmslave2 7 hours ago | parent | prev | next [-]

Neither a camera nor a paintbrush generates art. They still require manual human input for everything, and offer no creative capacity on their own.

qnleigh 9 hours ago | parent | prev [-]

I guess your point is that a camera, a paintbrush, and an LLM are all tools, and as long as the user is involved in the making, then it is still their art? If so, then I think there are two useful distinctions to make:

1. The extent to which the user is involved in the final product differs greatly with these three tools. To me there is a spectrum with "painting" and e.g. "hand-written note" at one extreme, and "Hallmark card with preprinted text" on the other. LLM-written email is much closer to "Hallmark card."

2. Perhaps more importantly, when I see a photograph, I know what aspects were created by the camera, so I won't feel misled (unless they edit it to look like a painting and then let me believe that they painted it). When someone writes with an AI, it is very difficult to tell what text and ideas are originally theirs. Typically it comes across as them trying to pass off the LLM writing as their own, which feels misleading and disingenuous.

tomlue 8 hours ago | parent [-]

I think you are right that it is a spectrum, and maybe that's enough to settle the debate. It is more about how you use it than the tool itself.

Maybe one more useful consideration for LLMs. If a friend writes to me with an LLM and discovers a new writing pattern, or learns a new concept and incorporates that into their writing, I see this as a positive development, not negative.

cm2012 11 hours ago | parent | prev [-]

Do you feel the same about spellcheck?

Capricorn2481 10 hours ago | parent [-]

Does spellcheck take a full sentence and spit out paragraphs of stuff I didn't write?

I mean, how do you write this seriously?

cm2012 10 hours ago | parent [-]

But in the end a human takes the finished work and says yes, this matches what I intended to communicate. That is what is important.

llmslave2 7 hours ago | parent [-]

That's neither what happens nor what is important.