tptacek 13 hours ago

"I do not and will not use LLMs, in any form, for any purpose. Although LLMs are fascinating from a purely technical perspective, I refuse to participate in or contribute to such systems that are built on massive exploitation of human labor and make profligate use of scarce resources. I also don't think they are actually very good for a lot of the applications people seem excited about. Even in cases where LLMs are technically good at a task, that does not necessarily mean their use for that task contributes positively to human flourishing.

A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."

simonw 13 hours ago | parent | next [-]

I remain hopeful that some day someone will train an LLM which is tolerable to people who take this stance (which I respect, much like I respect food vegetarians despite not being one myself).

I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model.

Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587

Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.

UPDATE: Huh, speaking of potentially vegetarian models, I just saw https://talkie-lm.com/introducing-talkie on the HN homepage https://news.ycombinator.com/item?id=47927903

I've explored a different out-of-copyright trained model, Mr Chatterbox, before, but found it to have been mildly corrupted by synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/

Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."

strange_quark 12 hours ago | parent | next [-]

I don't get why it's so hard for you and others in this comment section to understand why people hate AI so much. It's not just the theft and environmental destruction. A college professor, especially one at a liberal arts school, is obviously not going to like something that enables you to outsource your thinking and steals your agency. I think that's a perfectly valid viewpoint; maybe talk to someone without STEM-brain who lives outside of SF for once.

simonw 12 hours ago | parent [-]

I've recently been amplifying this excellent piece about that by Nilay Patel https://www.theverge.com/podcast/917029/software-brain-ai-ba...

I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.

strange_quark 11 hours ago | parent [-]

> I don't need computer science professors to like LLMs, but I still want them to be able to poke at them with a stick without feeling like they are violating their principles regarding energy usage and unlicensed training data.

Why? Language models are interesting from a technical perspective, but so are tons of areas of CS. There's nothing inherently virtuous about using an LLM.

simonw 9 hours ago | parent [-]

I think LLMs are the most fascinating new piece of computer science to come along in at least the past decade.

The academic field of computer science pretty much started as an exploration into whether machines could be built that could understand human language.

The Turing test dates back to Turing!

infotainment 13 hours ago | parent | prev [-]

> Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587

I suspect that even if you reduced the cost of training or improved any other real-world metric, the goalposts would immediately move. It seems to me that it has never been about those things, but simply about the feeling of superiority one can attain by eschewing something seen as trending.

WatchDog 13 hours ago | parent [-]

It's that but also the narcissistic injury caused by seeing an LLM practice the craft you have spent your life trying to perfect.

infotainment 13 hours ago | parent | prev | next [-]

> built on massive exploitation of human labor and make profligate use of scarce resources

This kind of hyperbole, repeated ad infinitum by haters online, is not constructive, IMO. I'm quite certain that the manufacture of whatever computing device the author is accessing the internet on used far more resources and exploited far more human labor than training an ML model ever did.

cwillu 13 hours ago | parent | next [-]

Be that as it may, it is a quote from the “Statement on LLMs” at the bottom of the link.

infotainment 13 hours ago | parent [-]

Of course, which tells you the position from which the author of the linked post is arguing.

tcfhgj 12 hours ago | parent | prev [-]

Mentioning facts is not constructive, interesting.

How constructive are ad hominem arguments?

nikcub 12 hours ago | parent | prev [-]

* real programmers write assembly, not FORTRAN

* real programmers manage memory, it's a craft

* real programmers don't drag and drop

* real programmers don't use intellisense

* real programmers don't need stack overflow

* real programmers don't tab-complete

* real programmers don't need copilot

* real programmers don't use llms <- you are here

2ndorderthought 12 hours ago | parent | next [-]

That's also not what he is saying. I don't see how that is what everyone is taking from this.
