solumunus 2 days ago

If the LLM output is more effective than a human at problem solving, which I think we can all agree requires intelligence, how would one describe this? The LLM is just pretending to be more intelligent? At a certain point saying that will just seem incredibly silly. It's either doing the thing or it's not, and it's already doing a lot.
emp17344 2 days ago

LLM output is in no way more effective than human output.
lo_zamoyski 21 hours ago

> If the LLM output is more effective than a human at problem solving, which I think we can all agree requires intelligence

Your premise is wrong, unless you want to claim that the distant cause, by way of the training data, is us; but that's exactly the conclusion you're trying to avoid. After all, we put the patterns into the training data, which means we already did the upfront intellectual work for the LLM.