Sprotch 3 days ago

He thinks "AI" "may be capable of taking over cognition", which shows he doesn't understand how LLMs work...

ozten 3 days ago | parent | next [-]

Why is AI limited to just a raw LLM? Scaffolding, RL, multi-modality... there are so many techniques that can be applied. METR has shown AI's time horizon for staying on task is doubling every 7 months or less.

https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-...
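
For a rough sense of what that doubling rate implies, here's a back-of-the-envelope extrapolation; the one-hour starting horizon and the time points are illustrative assumptions, not METR's figures:

    # Back-of-the-envelope extrapolation of a time horizon that doubles every 7 months.
    # The 1-hour starting horizon is an illustrative assumption, not a METR figure.
    def horizon_hours(months_elapsed, start_hours=1.0, doubling_months=7.0):
        return start_hours * 2 ** (months_elapsed / doubling_months)

    for months in (0, 12, 24, 36):
        print(f"{months:2d} months -> ~{horizon_hours(months):.1f} hour task horizon")
    # 0 -> 1.0, 12 -> ~3.3, 24 -> ~10.8, 36 -> ~35.3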

marcosdumay 3 days ago | parent | next [-]

Because all the money has been going into LLMs and "inference machines" (what a non-descriptive name). So when an investor says "AI", that's what they mean.

Night_Thastus 3 days ago | parent | prev [-]

Because LLMs are just about all that actually exists as a product, even if an inconsistent one.

Maybe some day a completely different approach could actually make AI, but that's vapor at the moment. IF it happens, there will be something to talk about.

simianwords 3 days ago | parent | prev [-]

Why are you so sure it is not capable of cognition?

bigstrat2003 2 days ago | parent | next [-]

Because it very obviously isn't. For example (though this is a year or so ago), look at when people hooked Claude up to Pokemon. It got stuck on things that no human, even a small child, would get stuck on (such as going in and out of a building over and over). I'm sure we could train an LLM to play Pokemon, but you don't need to train a child to play. You hand them the game and they figure it out with no prior experience. That is because the human is intelligent, and the LLM is not.

suzzer99 2 days ago | parent [-]

100%. Slack does this annoying thing where I click a chat, which gains focus, but I actually have to click again to switch to the chat I want. Every now and then I Slack the wrong person, fortunately with no disastrous consequences, yet.

If I had a moderately intelligent human who never loses focus looking over my shoulder, they might say something like "Hey, you're typing a Tailwind CSS issue in the DevOps group chat. Did you mean that for one of the front-end devs?"

Similarly, about once or twice a year, I set the alarm on my phone and then accidentally scroll the wheel to PM w/o noticing. A non-brain-dead human would see that and say, "Are you sure you want to set your alarm for 8:35 PM Saturday?"

When we have a digital assistant that can do these things, and not because it's been specifically trained on these or similar issues, then I'll start to believe we're closing in on AGI.

At the very least I'd like to be able to tell a digital assistant to help me with things like this as they come up, and have it a) remember forever and b) realize stuff like Zoom chat has the same potential for screw ups as Slack chat (albeit w/o the weird focus thing).

davnicwil 2 days ago | parent [-]

A recent example I came across was losing a single AirPod (dropped on the street) and getting a Find My notification only when I was already several blocks away. I went back, but 30 minutes had passed and it was nowhere to be found.

This is the kind of thing that makes it really clear, in the details, how far away our devices actually are from 'real world' intelligence, or what might better be described as common sense.

Obviously, the intelligent thing to do there would have been to spam me with notifications the instant my devices noticed that my AirPods were separated by > 10 metres, one was moving away from the other, and the stationary one was on a street, or at least some place that was not home.
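
A minimal sketch of that rule, just to show how little 'intelligence' it actually requires (the data structure, names, and 10-metre threshold are all made up for illustration; this isn't any real Find My API):

    # Hypothetical separation-alert heuristic; names and thresholds are invented,
    # and nothing here reflects a real Apple / Find My API.
    from dataclasses import dataclass

    @dataclass
    class SeparationState:
        distance_m: float       # gap between the phone and the stray earbud
        drifting_apart: bool    # is the gap still growing?
        stray_location: str     # "home", "street", "office", ...

    def should_alert(state: SeparationState, threshold_m: float = 10.0) -> bool:
        # Notify the moment the earbud is left behind somewhere that isn't home
        # while the owner is still moving away from it.
        return (state.distance_m > threshold_m
                and state.drifting_apart
                and state.stray_location != "home")

    # Dropped on the street, owner walking away: alert immediately.
    print(should_alert(SeparationState(distance_m=25.0, drifting_apart=True, stray_location="street")))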

But although AI can search really well and do all sorts of other interesting stuff, I think we all have to admit that it still seems really hard to imagine it taking 'initiative', so to speak, even in that super simple situation, making a good decision, and acting on it in the sensible way that any human would, unless it was specifically programmed to do so.

And that, I think, is fundamentally the problem, at least for now. There's just too much randomness and too many situations that can occur in the real world, and there are too many integration points for LLMs to deal with them all, even supposing they would deal with them well.

In theory it seems like it could be done, but in practice it isn't being done, even after years of the tech being available, and even by the most well-funded companies.

That's the kind of thing that makes me think the long tail of usefulness of LLMs on the ground is still really far away.

Sprotch 2 days ago | parent | prev | next [-]

Because LLMs are language generation machines based on statistics - they do not analyse the underlying data, let alone understand it. They are not AI.

hagbarth 2 days ago | parent | prev | next [-]

Ah yes, proving a negative. What makes you sure a stone is not capable of cognition?

encyclopedism 2 days ago | parent [-]

An LLM is an algorithm. You can obtain the same result as a SOTA LLM via pen and paper; it will just take a lot of long, laborious effort. That's ONE reason why LLMs do not have cognition.

Also, they don't reason, or think, or do any of the other myriad things attributed to LLMs. I hate the platitudes given to LLMs: it's at PhD level, it's now able to answer math olympiad questions. It answers them by statistical pattern recognition!

dboon 2 days ago | parent [-]

A brain is an algorithm. Given an unreasonably precise graph of neurons, neurotransmitter levels at each junction, and so on and so forth, one could obtain the same result via pen and paper. It will just take a lot of long laborious effort. That’s ONE reason why brains do not have cognition.

Sprotch 2 days ago | parent [-]

There is a whole branch of AI trying to do this, but it is still at the very initial stages. LLMs are not the same thing at all.

sph 2 days ago | parent | prev [-]

Nice try. The onus is on you to prove the extraordinary claim that we have invented actual artificial cognition.

simianwords 2 days ago | parent [-]

I can do it.

My claim is that an LLM behaves the same way (or a superset of the way) that a person with only short-term memory would behave if the only mode in which they could communicate was text. Do you agree?

sph 2 days ago | parent [-]

That is not a proof; that is an opinion.

And I do not agree. LLMs are literally incapable of understanding the concept of truth, right/wrong, knowledge and not-knowledge. It seems pretty crucial to be able to tell if you know something or not for any level of human-level intelligence.

Again, this conversation has been had in many variations, constantly, since LLMs have been on the rise, and we can't rehash the same points over and over. If one believes LLMs are capable of cognition, they should offer formal proof first; otherwise we're just wasting our time.

That said, I wonder if there are major differences in cognition between humans, because there is no way I would look at how my brain works and think "oh, this LLM is capable of the same level of cognition as I am." Not because I am ineffably smart, but because LLMs are utterly simplistic in comparison to even a fruit fly.

simianwords 2 days ago | parent [-]

>And I do not agree. LLMs are literally incapable of understanding the concept of truth, right/wrong, knowledge and not-knowledge. It seems pretty crucial to be able to tell if you know something or not for any level of human-level intelligence.

How are you so sure about this?

> If one believes LLMs are capable of cognition,

Honestly asking: what formal proof is there for our own cognition?