snowwrestler 17 hours ago
If a human cares about the work, they can often outperform an LLM, because they will keep at it until the work meets their standard of quality, whereas the LLM will guess and then wait to be corrected. As a recent tweet I saw put it: it's amazing how fast the software bottleneck went from writing code to reviewing code.

I think we'll need to split the concept of intelligence into the capacity to accomplish a task and the capacity to conceive and prompt a task. If the former is called "intelligence," then LLMs are intelligent. But what then do we call the latter?

I think the idea of an AI that can independently accomplish great things is what people mean when they talk about "general" intelligence. But I think we need a more specific label, one that covers the idea that successful humans are not just good at doing things: they originate what should be done, and they are not easily dissuaded.
fluoridation 9 hours ago | parent
> I think we'll need to split the concept of intelligence into the capacity to accomplish a task and the capacity to conceive and prompt a task. If the former is called "intelligence" then LLMs are intelligent.

Huh? No. "The capacity to accomplish a task" is not intelligence. By that definition, a washing machine is intelligent.