| ▲ | chpatrick 4 days ago |
| I think plenty of people have problems with the second one, but you wouldn't say that means they can't think.
|
| ▲ | bluefirebrand 4 days ago | parent [-] |
| We don't need to prove all humans are capable of this. We can demonstrate that some humans are; therefore humans must be capable, broadly speaking.

Until we see an LLM that is capable of this, they aren't capable of it, period.
| ▲ | chpatrick 4 days ago | parent [-]
Sometimes LLMs hallucinate or bullshit and sometimes they don't; the same goes for humans. It's not like you can tell a human to stop being delusional on command either. I'm not really seeing the argument.
| ▲ | bluefirebrand 4 days ago | parent [-]
If a human hallucinates or bullshits in a way that harms you or your company, you can take action against them. That's the difference.

AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, therefore it cannot be trusted.

Simple as that.
|
|