tptacek 13 hours ago
"I do not and will not use LLMs, in any form, for any purpose. Although LLMs are fascinating from a purely technical perspective, I refuse to participate in or contribute to such systems that are built on massive exploitation of human labor and make profligate use of scarce resources. I also don't think they are actually very good for a lot of the applications people seem excited about. Even in cases where LLMs are technically good at a task, that does not necessarily mean their use for that task contributes positively to human flourishing. A good way to describe myself is as a generative AI vegetarian. You can find a fuller explanation—and many, many links—at the above essay by Sean Boots, which I agree with almost 100%."
simonw 13 hours ago
I remain hopeful that some day someone will train an LLM which is tolerable to people who take this stance (which I respect, much like I respect food vegetarians despite not being one myself). I've been tracking models trained entirely on out-of-copyright data, for example. I've not yet seen one of those which appears generally useful and didn't chuck in a scrape of the web or get fine-tuned on examples generated by a non-vegetarian model. Andrej Karpathy can train a GPT-2 class model for less than $80 now, so at least the environmental cost of training may drop to a point that it's acceptable to LLM vegetarians: https://twitter.com/karpathy/status/2017703360393318587

Why do I care? This post is a great example. If you're a professor of computer science I really want you to be able to tinker with this fascinating class of models without violating your principles.

UPDATE: Huh, speaking of potentially vegetarian models, I just saw https://talkie-lm.com/introducing-talkie on the HN homepage https://news.ycombinator.com/item?id=47927903

I've explored a different out-of-copyright-trained model, Mr Chatterbox, before, but found it to have been mildly corrupted through the use of synthetic conversation pairs from Haiku and GPT-4o-mini - https://simonwillison.net/2026/Mar/30/mr-chatterbox/

Talkie isn't entirely pure either though: "Finally, we did another round of supervised fine-tuning, this time on rejection-sampled multi-turn synthetic chats between Claude Opus 4.6 and talkie, to smooth out persistent rough edges in its conversational abilities."
infotainment 13 hours ago
> built on massive exploitation of human labor and make profligate use of scarce resources

This kind of hyperbole, repeated ad infinitum by haters online, is not constructive, IMO. I am quite certain that the manufacture of whatever computing device the author is accessing the internet on used far more resources and exploited far more human labor than training an ML model ever did.
13 hours ago

[deleted]
nikcub 12 hours ago
* real programmers write assembly, not FORTRAN
* real programmers manage memory, it's a craft
* real programmers don't drag and drop
* real programmers don't use intellisense
* real programmers don't need stack overflow
* real programmers don't tab-complete
* real programmers don't need copilot
* real programmers don't use llms <- you are here