bigstrat2003 2 days ago

Because it very obviously isn't. For example (though this is a year or so ago), look at when people hooked Claude up to Pokemon. It got stuck on things that no human, even a small child, would get stuck on (such as going in and out of a building over and over). I'm sure we could train an LLM to play Pokemon, but you don't need to train a child to play. You hand them the game and they figure it out with no prior experience. That is because the human is intelligent, and the LLM is not.

suzzer99 2 days ago | parent [-]

100%. Slack does this annoying thing where clicking a chat gives the window focus, but I have to click again to actually switch to the chat I want. Every now and then I slack the wrong person; fortunately, no disastrous consequences yet.

If I had a moderately intelligent human who never loses focus looking over my shoulder, they might say something like "Hey, you're typing a Tailwind CSS issue in the DevOps group chat. Did you mean that for one of the front-end devs?"

Similarly, about once or twice a year, I set the alarm on my phone and then accidentally scroll the wheel to PM w/o noticing. A non-brain-dead human would see that and say, "Are you sure you want to set your alarm for 8:35 PM Saturday?"

When we have a digital assistant that can do these things, and not because it's been specifically trained on these or similar issues, then I'll start to believe we're closing in on AGI.

At the very least I'd like to be able to tell a digital assistant to help me with things like this as they come up, and have it a) remember forever and b) realize that stuff like Zoom chat has the same potential for screw-ups as Slack chat (albeit w/o the weird focus thing).

davnicwil 2 days ago | parent [-]

A recent example I came across was losing a single AirPod (dropped on the street) and getting a Find My notification only when I was already several blocks away. By the time I went back, 30 mins had passed, and it was nowhere to be found.

This is the kind of thing that makes it really clear how far our devices actually are, in the details, from 'real world' intelligence, or what might better be described as common sense.

Obviously, the intelligent thing to do there would have been to spam me with notifications the instant my devices noticed my airpods were separated by > 10 metres, one was moving away from the other, and the stationary one was in a street or at least some place that was not home.
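The rule described there is simple enough to sketch. Here's a minimal, purely hypothetical Python version; the function name, threshold, and location labels are all made up for illustration and have nothing to do with any real Find My API:

```python
# Hypothetical sketch of the separation heuristic described above.
# All names and thresholds are illustrative assumptions, not a real API.

SEPARATION_THRESHOLD_M = 10  # alert once the devices drift this far apart

def should_alert(separation_m: float,
                 owner_moving_away: bool,
                 stationary_device_location: str) -> bool:
    """Return True when the lost-device alert should fire immediately.

    separation_m: current distance between the phone and the AirPod.
    owner_moving_away: whether the phone's position is diverging from the AirPod's.
    stationary_device_location: coarse label for where the AirPod is sitting,
        e.g. "home", "street", "office".
    """
    return (separation_m > SEPARATION_THRESHOLD_M
            and owner_moving_away
            and stationary_device_location != "home")

# An AirPod dropped on the street while its owner walks away: alert right away.
print(should_alert(12.0, True, "street"))
# AirPods left on the nightstand at home: stay quiet.
print(should_alert(500.0, True, "home"))
```

The point isn't that this logic is hard to write. It's trivial, which is exactly why it's striking that nothing ships it.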

But although AI can search really well, and do all sorts of other interesting stuff, I think we all have to admit that it's still really hard to imagine it taking 'initiative', so to speak, even in that super simple situation: making a good decision and acting on it in the sensible way any human would, unless it was specifically programmed to do so.

And I think that's fundamentally the problem, at least for now. There's just too much randomness and too many situations that can occur in the real world, and there are too many integration points for LLMs to handle them all, even supposing they would handle them well.

In theory it seems like it could be done, but in practice it isn't being done even after years of the tech being available, and by the most well funded companies.

That's the kind of thing that makes me think the long tail of usefulness of LLMs on the ground is still really far away.