I've been pretty skeptical of LLMs as the solution to AGI for a while now, mostly because the limits of what the models can do seem lower than we were hoping (glibly: I think they're pretty good at replicating what humans do when we're running on autopilot, so they've hit the floor of human cognition, but I don't think they're capable of hitting the ceiling). That said, I think LLMs will be a component of whatever AGI winds up being; there's too much "there" there for them to be a total dead end. But, echoing the commenter below and taking an analogy to the brain, it feels like "many well-trained models, plus some as-yet-unknown coordinator process" is where we're likely to land. In other words, to take the Kahneman & Tversky framing, I think LLMs are making a fair pass at "system 1" thinking, but we don't know what the "system 2" component is, and without something in that bucket we're not getting to AGI.