ako 2 days ago

An LLM by itself is not thinking, just remembering and autocompleting. But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking. I've seen Claude Code debug things by adding print statements in the source, reasoning about the output, and then determining next steps. This feedback loop is what sets AI tools apart: they can all use the same LLM, but the quality of the feedback loop makes the difference.
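
Roughly, in Python (the call_llm stub and the action names here are stand-ins, not any real vendor's API):

    import subprocess

    def call_llm(transcript: str) -> dict:
        """Stub: a real agent would send the transcript to a model and parse
        its proposed next action (e.g. run a command, or stop)."""
        return {"action": "done", "argument": ""}

    transcript = "Goal: figure out why test_foo fails.\n"
    for _ in range(10):                          # cap iterations so the loop always ends
        step = call_llm(transcript)
        if step["action"] == "done":
            break
        if step["action"] == "run":              # model asks to run a command...
            result = subprocess.run(step["argument"], shell=True,
                                    capture_output=True, text=True)
            # ...and the output is appended, so the next completion is
            # conditioned on what actually happened (the feedback loop).
            transcript += f"$ {step['argument']}\n{result.stdout}{result.stderr}\n"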

DebtDeflation 2 days ago | parent | next [-]

> But if you add a feedback loop where it can use tools, investigate external files or processes, and then autocomplete on the results, you get to see something that is (close to) thinking

It's still just information retrieval. You're just dividing it into internal information (the compressed representation of the training data) and external information (web search, API calls to other systems, etc.). There is a lot of hidden knowledge embedded in language, and LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.

ako 2 days ago | parent | next [-]

No, it's more than information retrieval. The LLM is deciding what information needs to be retrieved to make progress and how to retrieve it. It is making a plan and executing on it: Plan, Do, Check, Act. No human in the loop if it has the required tools and permissions.

naasking 2 days ago | parent | prev [-]

> LLMs do a good job of teasing it out in a way that resembles reasoning/thinking but really isn't.

Given the fact that "thinking" still hasn't been defined rigorously, I don't understand how people are so confident in claiming they don't think.

notepad0x90 2 days ago | parent [-]

Reasoning might be a better term to discuss, as it's more specific?

naasking 2 days ago | parent [-]

It too isn't rigorously defined. We're very much at the hand-waving "I know it when I see it" [1] stage for all of these terms.

[1] https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

notepad0x90 18 hours ago | parent [-]

I can't speak for academic rigor, but it is very clear and specific, from my understanding at least. Reasoning, simply put, is the ability to come to a conclusion after analyzing information using a logic-derived deterministic algorithm.
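
As a toy illustration of that definition, deterministic rule-based inference (the facts and rules are made up for the example):

    # Forward chaining over simple if-then rules: a conclusion follows
    # deterministically from the given information.
    facts = {"socrates_is_human"}
    rules = [({"socrates_is_human"}, "socrates_is_mortal")]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))   # same conclusion on every run, given the same inputs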

naasking 17 hours ago | parent [-]

* Humans are not deterministic.

* Humans that make mistakes are still considered to be reasoning.

* Deterministic algorithms have limitations, like Goedel incompleteness, which humans seem able to overcome, so presumably, we expect reasoning to also be able to overcome such challenges.

notepad0x90 5 hours ago | parent [-]

1) I didn't say we were, but when someone is described as reasonable or as acting with reason, that implies deterministic/algorithmic thinking. When we're not deterministic, we're not being reasonable.

2) Yes, to reason doesn't imply being infallible. The deterministic algorithms we follow are usually flawed.

3) I can't speak much to that, but I speculate that if "AI" can do reasoning, it would be a much more complex construct, one that uses LLMs (among other things) as tools and variables, much like we do.

assimpleaspossi 2 days ago | parent | prev | next [-]

>> you get to see something that is (close to) thinking.

Isn't that still "not thinking"?

ako 2 days ago | parent [-]

Depends on who you ask and what their definition of thinking is.

lossyalgo 2 days ago | parent | prev [-]

Just ask it how many r's are in "strawberry" and you will realize there isn't a lot of reasoning going on here; it's just trickery on top of token generators.
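
To see why letter-counting is a weak spot: the model operates on subword tokens, not characters. A small sketch, assuming the tiktoken package is installed (the exact split depends on the tokenizer):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    pieces = [enc.decode_single_token_bytes(t) for t in enc.encode("strawberry")]
    print(pieces)                      # subword chunks, not individual letters
    print("strawberry".count("r"))     # a plain program answers 3, every time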

Workaccount2 2 days ago | parent | next [-]

This is akin to "Show a human an optical illusion that exploits their physiology".

LLMs be like "The dumb humans can't even see the dots" [1]

[1]https://compote.slate.com/images/bdbaa19e-2c8f-435e-95ca-a93...

lossyalgo 2 days ago | parent [-]

haha that's a great analogy!

How about non-determinism (i.e. hallucinations)? Ask a human ANY question 3 times and they will give you the same answer every time, unless you prod them or rephrase the question. Sure, the answer might be wrong 3 times, but at least you have consistency. Then again, maybe that's a disadvantage for humans!
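
For what it's worth, the run-to-run variation comes from sampling over next-token probabilities. A toy sketch with made-up numbers, not real model output:

    import random

    next_token_probs = {"3": 0.6, "2": 0.3, "4": 0.1}

    def sample(temperature: float) -> str:
        if temperature == 0:                           # greedy decoding: deterministic
            return max(next_token_probs, key=next_token_probs.get)
        weights = [p ** (1.0 / temperature) for p in next_token_probs.values()]
        return random.choices(list(next_token_probs), weights=weights)[0]

    print([sample(1.0) for _ in range(3)])   # answers can differ between runs
    print([sample(0.0) for _ in range(3)])   # always the most likely token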

adrianmonk 2 days ago | parent | prev [-]

Ask an illiterate person the same thing and they will fail badly too. Is it impossible to have intelligence without literacy? (Bonus: if so, how was writing invented?)

lossyalgo a day ago | parent [-]

Yes, but an illiterate person can be taught to read. Also, LLMs generally fail (non-deterministically) at math, whereas humans can be taught math.