throw4847285 3 hours ago
There are two major mistakes here. The first is equating human and LLM intelligence. Note that I am not saying that humans are smarter than LLMs. But I do believe that LLMs represent an alien intelligence with a linguistic layer that obscures the differences. The thought processes are very different. At top AI firms, they have the equivalent of Asimov's Susan Calvin trying to understand how these programs think, because it does not resemble human cognition despite the similar outputs.

The second and more important mistake is the feedback loop. What makes gambling gambling is that you can smash that lever over and over again and immediately learn whether you lost or hit a jackpot. The slowness and imprecision of human communication creates a totally different dynamic.

To reiterate, I am not saying interns are superior to LLMs. I'm just saying they are fundamentally different. And, if we're being honest, the way people talk about interns is weirdly dehumanizing, and the fact that they are always trotted out in these AI debates is depressing.
simonw 3 hours ago | parent
> And, if we're being honest, the way people talk about interns is weirdly dehumanizing, and the fact that they are always trotted out in these AI debates is depressing.

Yeah, I agree with that. The thought crossed my mind as I was posting this comment, but I decided to go with it anyway because this is one of those cases where the comparison is genuinely useful. We delegate work to humans all the time without thinking "this is gambling, these collaborators are unreliable and non-deterministic".