Terr_ 3 days ago

Well put, and if it doesn't notice/collapse under introduced contradictions, that's evidence it's not the kind of reasoning we were hoping for. The "real thing" is actually brittle when you do it right.

czl 3 days ago | parent [-]

Human reasoning is, in practice, much closer to statistical association than to brittle rule-following. The kind of strict, formal deduction we teach in logic courses is a special, slow mode we invoke mainly when we’re trying to check or communicate something, not the default way our minds actually operate.

Everyday reasoning is full of heuristics, analogies, and pattern matches: we jump to conclusions, then backfill justification afterward. Psychologists call this “post hoc rationalization,” and there’s plenty of evidence that people form beliefs first and then search for logical scaffolding to support them. In fact, that’s how we manage to think fluidly at all; the world is too noisy and underspecified for purely deductive inference to function outside of controlled systems.

Even mathematicians, our best examples of deliberate, formal thinkers, often work this way. Many major proofs have been discovered intuitively and later found to contain errors that didn't actually invalidate the final result. The insight was right, even if the intermediate steps were shaky. When the details get repaired, the overall structure stands. That's very much like an LLM producing a chain of reasoning tokens that includes small logical missteps yet still lands on the correct conclusion: the "thinking" process is not literal step-by-step deduction, but a guided traversal through a manifold of associations shaped by prior experience (or training data, in the model's case).

So if an LLM doesn’t collapse under contradictions, that’s not necessarily a bug; it may reflect the same resilience we see in human reasoning. Our minds aren’t brittle theorem provers; they’re pattern-recognition engines that trade strict logical consistency for generalization and robustness. In that sense, the fuzziness is the strength.

Terr_ 2 days ago | parent [-]

> The kind of strict, formal deduction we teach in logic courses is a special, slow mode

Yes, but that seems like moving the goalposts.

The stricter blends of reasoning are what everybody is so desperate to evoke from LLMs, preferably along with inhuman consistency, endurance, and speed. Just imagine the repercussions if a slam-dunk paper came out tomorrow that somehow proved the architectures and investments everyone is using for LLMs are a dead end for that capability.

crazygringo 2 days ago | parent | next [-]

> The stricter blends of reasoning are what everybody is so desperate to evoke from LLMs

This is definitely not true for me. My prompts frequently contain instructions that aren't 100% clear, that suggest what I want rather than formally specifying it, plus typos, mistakes, etc. The fact that the LLM usually figures out what I meant to say, like a human would, is a feature for me.

I don't want an LLM to act like an automated theorem prover. We already have those. Their strictness makes them extremely difficult to use, so their application is extremely limited.

czl 2 days ago | parent | prev [-]

I get the worry. AFAIK most of the current capex is going into scalable parallel compute, memory, and networking. That stack is pretty model-agnostic, similar to how all that dot-com fiber was not tied to one protocol. If transformers stall, the hardware is still useful for whatever comes next.

On reasoning, I see LLMs and classic algorithms as complements. LLMs do robust manifold following and associative inference. Traditional programs do brittle rule following with guarantees. The promising path looks like a synthesis where models use tools, call code, and drive search and planning methods such as MCTS, the way AlphaGo did. Think agentic systems that can read, write, execute, and verify.
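Here is a minimal sketch of that read/write/execute/verify pattern (names like generate_candidate and agent_loop are illustrative stand-ins, not any real framework's API): the model proposes code, a real interpreter runs it, and a checker either accepts the result or feeds the failure back for another attempt.

    from typing import Callable

    def generate_candidate(task: str, feedback: str | None) -> str:
        # Stand-in for an LLM call (hypothetical); a real system would prompt the model
        # with the task plus any verifier feedback from the previous attempt.
        return "def solve(xs):\n    return sorted(xs)\n"

    def verify(solve: Callable, tests: list[tuple]) -> str | None:
        # The brittle rule-following half of the loop: run the candidate against known cases.
        for args, expected in tests:
            got = solve(*args)
            if got != expected:
                return f"solve{args} returned {got!r}, expected {expected!r}"
        return None  # all tests passed

    def agent_loop(task: str, tests: list[tuple], max_attempts: int = 3) -> Callable:
        feedback = None
        for _ in range(max_attempts):
            source = generate_candidate(task, feedback)
            namespace: dict = {}
            exec(source, namespace)          # execute the model-written code for real
            feedback = verify(namespace["solve"], tests)
            if feedback is None:
                return namespace["solve"]    # accepted only after actual verification
        raise RuntimeError(f"no verified solution after {max_attempts} attempts: {feedback}")

    solver = agent_loop("sort a list ascending", [(([3, 1, 2],), [1, 2, 3])])
    print(solver([9, 4, 6]))                 # -> [4, 6, 9]

The associative part (generating a plausible candidate) and the brittle part (executing and checking it) stay separate, which is the whole point of the synthesis.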

LLMs are strongest where the problem is language. Language co-evolved with cognition as a way to model the world, not just to chat. We already use languages to describe circuits, specify algorithms, and even generate other languages. That makes LLMs very handy for specification, coordination, and explanation.

LLMs can also statistically simulate algorithms, which is useful when you want them to reason about those algorithms. But when you actually need the algorithm, it is far more efficient to run the real thing in software or on purpose-built hardware. Let the model write the code, compose the tools, and verify the output, rather than pretend to be a CPU.
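To make that concrete, a small sketch (the tool names and dispatch table are illustrative, not any particular framework's API): the model emits a structured tool call, and ordinary code produces the exact answer instead of the model simulating the computation token by token.

    # Illustrative tool registry: exact arithmetic and sorting are delegated to real code.
    TOOLS = {
        "exact_pow": lambda base, exp: pow(int(base), int(exp)),  # arbitrary-precision integers
        "sort": lambda xs: sorted(xs),                            # guaranteed-correct O(n log n) sort
    }

    def run_tool_call(call: dict):
        # `call` is the kind of structured output a model might emit instead of
        # "pretending to be a CPU", e.g. {"tool": "exact_pow", "args": [12345, 67]}.
        return TOOLS[call["tool"]](*call["args"])

    print(run_tool_call({"tool": "exact_pow", "args": [12345, 67]}))  # exact multi-hundred-digit integer
    print(run_tool_call({"tool": "sort", "args": [[3, 1, 2]]}))       # [1, 2, 3]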

To me the risk is not that LLMs are a dead end, but that people who do not understand them have unreasonable expectations. Real progress looks like building systems that use language to invent and implement better tools and route work to the right place. If a paper lands tomorrow showing that pure next-token prediction is not enough for formal reasoning, that would expose a misunderstanding of what LLMs are for, not a stop sign. We already saw something similar when Minsky and Papert showed that single-layer perceptrons cannot represent XOR, and the field later moved past that with multilayer networks. Hopefully we remember that and learn the right lesson this time.
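For what it's worth, the XOR point is easy to check directly. A self-contained sketch (hand-picked weights, nothing learned): brute force over a coarse weight grid finds no single threshold unit that computes XOR, while a two-layer network with fixed weights does.

    import itertools

    def step(x):
        return 1 if x >= 0 else 0   # Heaviside threshold unit

    # Single-layer perceptron: out = step(w1*x1 + w2*x2 + b). A brute-force search over a
    # coarse weight grid finds no setting that computes XOR, matching Minsky and Papert's
    # observation that XOR is not linearly separable.
    def single_layer_solves_xor():
        grid = [v / 2 for v in range(-4, 5)]  # weights and bias in {-2.0, -1.5, ..., 2.0}
        inputs = list(itertools.product([0, 1], repeat=2))
        for w1, w2, b in itertools.product(grid, repeat=3):
            if all(step(w1 * x1 + w2 * x2 + b) == (x1 ^ x2) for x1, x2 in inputs):
                return True
        return False

    # Two-layer network with hand-picked weights: hidden units compute OR and AND,
    # and the output unit fires when OR is on and AND is off, i.e. XOR.
    def two_layer_xor(x1, x2):
        h_or = step(x1 + x2 - 0.5)
        h_and = step(x1 + x2 - 1.5)
        return step(h_or - h_and - 0.5)

    print(single_layer_solves_xor())                                              # False
    print([two_layer_xor(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]

The limitation was real but specific to the one-layer architecture; adding a layer removed it, which is the lesson about reading such results narrowly.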