evdubs 9 hours ago
> When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.

It's not about reading. It's about output. When you start producing output in line with Rob's work that is confidently incorrect and sloppy, people will feel just as they do when LLMs produce output that is confidently incorrect and sloppy. No one is threatened if someone trains an LLM and does nothing with it.