elbasti 4 days ago

It's not some "magical way"--the ways in which a human thinks that an LLM doesn't are pretty obvious, and I dare say self-evidently part of what we think constitutes human intelligence:

- We have a sense of time (i.e., try asking an LLM to follow up in 2 minutes)

- We can follow negative instructions ("don't hallucinate; if you don't know the answer, say so")

int_19h 4 days ago | parent | next

We only have a sense of time in the presence of inputs. Stick a human into a sensory deprivation tank for a few hours and then ask them how much time has passed afterwards. They wouldn't know unless they managed to maintain a running count throughout, but that's a trick an LLM can also do (so long as it knows its generation speed).

The general notion of the passage of time (i.e. the arrow of time) is the only thing that appears to be intrinsic, but it is also intrinsic for LLMs in the sense that there are "earlier" and "later" tokens in their input.

chpatrick 4 days ago | parent | prev

I think plenty of people have problems with the second one, but you wouldn't say that means they can't think.

bluefirebrand 4 days ago | parent

We don't need to prove all humans are capable of this. We can demonstrate that some humans are; therefore humans must be capable of it, broadly speaking.

Until we see an LLM that is capable of this, they aren't capable of it, period.

chpatrick 4 days ago | parent

Sometimes LLMs hallucinate or bullshit and sometimes they don't; sometimes humans hallucinate or bullshit and sometimes they don't. It's not like you can tell a human to stop being delusional on command either. I'm not really seeing the argument.

bluefirebrand 4 days ago | parent

If a human hallucinates or bullshits in a way that harms you or your company, you can take action against them.

That's the difference. AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, and therefore it cannot be trusted.

Simple as that.

chpatrick 4 days ago | parent

The question wasn't whether it can be trusted; it was whether it thinks.