bigstrat2003 | 2 days ago
Because it very obviously isn't. For example (though this was a year or so ago), look at what happened when people hooked Claude up to Pokemon. It got stuck on things that no human, not even a small child, would get stuck on, such as walking in and out of a building over and over. I'm sure we could train an LLM to play Pokemon, but you don't need to train a child to play. You hand them the game and they figure it out with no prior experience. That is because the human is intelligent, and the LLM is not.
suzzer99 | 2 days ago | parent
100%. Slack does this annoying thing where I click a chat, which gains focus, but I actually have to click again to switch to the chat I want. Every now and then I Slack the wrong person, fortunately without disastrous consequences, yet. If I had a moderately intelligent human who never loses focus looking over my shoulder, they might say something like, "Hey, you're typing a Tailwind CSS issue into the DevOps group chat. Did you mean that for one of the front-end devs?"

Similarly, about once or twice a year, I set the alarm on my phone and accidentally scroll the wheel to PM without noticing. A non-brain-dead human would see that and say, "Are you sure you want to set your alarm for 8:35 PM Saturday?"

When we have a digital assistant that can do these things, and not because it's been specifically trained on these or similar issues, then I'll start to believe we're closing in on AGI. At the very least I'd like to be able to tell a digital assistant to help me with things like this as they come up, and have it a) remember forever and b) realize that Zoom chat has the same potential for screw-ups as Slack chat (albeit without the weird focus thing).
| ||||||||