devnullbrain 12 hours ago

I don't see why this wouldn't just lead to model collapse:

https://www.nature.com/articles/s41586-024-07566-y

If you've spent any time using LLMs to write documentation, you'll have seen this for yourself: the compounding effect is just rewriting valid information into less terse information.

I find it concerning Karpathy doesn't see this. But I'm not surprised, because AI maximalists seem to find it really difficult to be... "normal"?

Rule of thumb: if you find yourself needing to broadcast the special LLM sauce you came up with instead of what it helped you produce, ask yourself why.

gojomo 9 hours ago | parent | next [-]

Here in 2026, many forms of training LLMs on (well-chosen) outputs of themselves, or other LLMs, have delivered gigantic wins. So 2024 & earlier fears of 'model collapse' will lead your intuition astray about what's productive.

It is unlikely you are accurately perceiving some limitation that Karpathy does not.

ChrisGreenHeur 10 hours ago | parent | prev | next [-]

The article is not about training LLMs; it is about using LLMs to write a wiki for personal use. It assumes a fully trained LLM such as ChatGPT or Claude already exists to be used.

khalic 5 hours ago | parent [-]

Don't even try. After vibe coding, people seem to be adopting vibe thinking: "Model collapse sounds cool, I'm gonna use it without looking it up."

mikkupikku 2 hours ago | parent [-]

Vibe thinking... that's an interesting premise. I'll have to build up my new llm-wiki before I'll know what to think about "vibe thinking."

mikkupikku 19 minutes ago | parent [-]

I was joking, but also not joking: this llm-wiki idea is fun. I fed it its own llm-wiki.md, Foucault's Pendulum, randomly collected published papers on the philosophy of GiTS, several CCRU essays, and Manufacturing Consent. It drew fun red yarn between all of them, on the topic of red yarn itself (e.g. schizos drawing connections out of nothing, particularly through the use of computers, and how this relates to the wiki doing literally that as it does it).

I'll spare you most of the slop, but: "The Case That I Am Abulafia: The parallel is uncomfortable and precise. [...]"

Yeah... It's fun though.

jahala 6 hours ago | parent | prev | next [-]

I did a proof of concept for self-updating HTML files (polyglot bash/html) some weeks ago. It actually works quite well; with simple prompting, it seems not to just go in circles (https://github.com/jahala/o-o)
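For readers wondering how a single file can be both a bash script and an HTML page: here's a minimal sketch of one common polyglot trick. This is an illustration only, not what the linked o-o repo actually does (I haven't read it); the filename, messages, and marker variable are all made up.

```shell
<!-- 2>/dev/null || true
# To bash, the line above is just a pair of redirections (the missing file
# "!--" produces a harmless warning, swallowed by "|| true"); to a browser,
# it opens an HTML comment that hides this entire script section.
echo "hello from the bash half"   # a real version would regenerate and rewrite this file here
BASH_HALF_RAN=1                   # marker so we can check the bash half executed
# A quoted heredoc makes bash skip over the HTML markup that follows:
: <<'HTML_BODY'
-->
<!DOCTYPE html>
<html><body><p>hello from the HTML half</p></body></html>
<!-- this trailing unclosed comment hides the heredoc terminator from the browser:
HTML_BODY
```

Saved as e.g. `page.html`, running `bash page.html` executes only the top half, while opening the same file in a browser renders only the `<p>`. The "self-updating" part would go where the `echo` is: the script can regenerate its own HTML body and rewrite the file in place.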

kwar13 10 hours ago | parent | prev | next [-]

Also my experience. It can't even keep up with a simple claude.md, let alone a whole wiki...

sebmellen 10 hours ago | parent | prev | next [-]

Edit for context: the sibling comment from karpathy is gone after being flagged to oblivion. Not sure if he deleted it or if it was just removed based on the number of flags. He had copy-pasted a few snarky responses from Claude, essentially saying "Claude has this to say to you:" followed by a super long run-on paragraph of slop.

————

Wow, I respect karpathy so much and have learned a ton from him. But WTF is the sibling comment he wrote as a response to you? Just pasting a Claude-written slop retort… it’s sad.

Maybe we need to update that old maxim about “if you don’t have something nice to say, don’t say it” to “if you don’t have something human to say, don’t say it.”

So many really smart people I know have seen the ‘ghost in the machine’ and as a result have slowly lost their human faculties. Ezra Klein, of all people, had a great article about this recently titled “I Saw Something New in San Francisco” (gift link if you want to read it): https://www.nytimes.com/2026/03/29/opinion/ai-claude-chatgpt...

prodigycorp 8 hours ago | parent | next [-]

It's not sad. He's a person like you and me. devnullbrain's comment is snarky: he invoked model collapse, which has nothing to do with the topic of a wiki/kb; he wrote that Karpathy is not normal; and he seemed to imply that the idea was useless. I'd be pretty in my feels too, and the fact that he wrote it and then deleted it seems like a +1 normal-guy thing.

girvo 6 hours ago | parent | prev | next [-]

Eh, he’s just a person. I’m not surprised he posted a rude comment haha, and it got rightfully flagged off the site for being AI slop.

Appreciate the gift link, I’ll give it a read!

moralestapia 9 hours ago | parent | prev [-]

Lol at that.

It's weird how some people cover the whole range: sometimes putting out really good stuff, other times the complete opposite.

Feels as if they were two different people ... or three, or four.

iamflimflam1 5 hours ago | parent [-]

Emotional state, tiredness, drunkenness, a good night's sleep… the number of factors that drive our responses is ridiculous.
