dopadelic 2 days ago

You're pointing out a real class of hard problems — modeling sparse, nonlinear, spatiotemporal systems — but there’s a fundamental mischaracterization in lumping all transformer-based models under “LLMs” and using that to dismiss the possibility of spatial reasoning.

Yes, classic LLMs (like GPT) operate as sequence predictors with no inductive bias for space, causality, or continuity. They're optimized for language fluency, not physical grounding. But vision and multimodal transformers like ViT, Flamingo, and Perceiver IO are a completely different lineage, even if they use transformers under the hood. They tokenize images (or video, or point clouds) into spatially aware embeddings and preserve positional structure in ways that make them far better suited to spatial reasoning than pure text LLMs.
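
To make that concrete, here is a minimal, illustrative sketch of ViT-style patch tokenization: the image is cut into non-overlapping patches, each patch is projected to a vector, and a learned positional embedding records where in the 2D grid each token came from. The class name, sizes, and shapes are assumptions for illustration, not any specific model's implementation.

    # Minimal ViT-style patch tokenizer (illustrative sketch, not any model's actual code).
    import torch
    import torch.nn as nn

    class PatchEmbed(nn.Module):
        def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
            super().__init__()
            self.num_patches = (img_size // patch_size) ** 2
            # A non-overlapping convolution slices the image into patches and projects each to a vector.
            self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
            # Learned positional embeddings preserve each patch's location in the 2D grid.
            self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))

        def forward(self, x):                  # x: (B, 3, 224, 224)
            x = self.proj(x)                   # (B, 768, 14, 14)
            x = x.flatten(2).transpose(1, 2)   # (B, 196, 768) -- one token per patch
            return x + self.pos_embed          # spatial position is baked into every token

    tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # torch.Size([1, 196, 768])

The point is that spatial structure isn't thrown away the way it is when text alone is tokenized; it's carried explicitly in every token.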

The supposed “impedance mismatch” is real for language-only models, but that’s not the frontier anymore. The field has already moved into architectures that integrate vision, text, and action. Look at Flamingo's vision-language fusion, or GPT-4o’s real-time audio-visual grounding — these are not mere LLMs with pictures bolted on. These are spatiotemporal attention systems with architectural mechanisms for cross-modal alignment.
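
By "architectural mechanisms for cross-modal alignment" I mean things like cross-attention from the language stream into the vision stream. The sketch below is a generic, assumed example of that pattern, not Flamingo's or GPT-4o's actual architecture (Flamingo, for instance, uses gated cross-attention interleaved with a frozen LM); the class and tensor names are illustrative.

    # Generic cross-modal attention sketch: text tokens query visual tokens.
    # Illustrative only; real systems add gating, interleaving, and many more layers.
    import torch
    import torch.nn as nn

    class CrossModalBlock(nn.Module):
        def __init__(self, dim=768, num_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, text_tokens, visual_tokens):
            # Queries come from the language stream; keys/values from the vision stream,
            # so each text token can pull in spatially localized visual evidence.
            fused, _ = self.attn(query=text_tokens, key=visual_tokens, value=visual_tokens)
            return self.norm(text_tokens + fused)  # residual keeps the language features intact

    text = torch.randn(1, 32, 768)     # 32 text tokens
    vision = torch.randn(1, 196, 768)  # 196 image-patch tokens, e.g. from the sketch above
    print(CrossModalBlock()(text, vision).shape)  # torch.Size([1, 32, 768])

That's a qualitatively different computation from next-token prediction over text alone.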

You're also asserting that "no general-purpose representations of space exist" — but this neglects decades of work in computational geometry, graphics, physics engines, and more recently, neural fields and geometric deep learning. Sure, no universal solution exists (nor should we expect one), but practical approximations exist: voxel grids, implicit neural representations, object-centric scene graphs, graph neural networks, etc. These aren't perfect, but dismissing them as non-existent isn’t accurate.
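
As one example of what "implicit neural representation" means in practice: geometry can be stored in the weights of a small network that maps a coordinate to occupancy, rather than in an explicit grid. This is a toy, assumed sketch; real neural fields (NeRF, SDF networks) add positional encodings, view dependence, and far more capacity.

    # Toy implicit neural representation (neural field): a coordinate-to-occupancy MLP.
    import torch
    import torch.nn as nn

    field = nn.Sequential(
        nn.Linear(3, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1), nn.Sigmoid(),   # occupancy probability in [0, 1]
    )

    points = torch.rand(1024, 3) * 2 - 1   # query points in [-1, 1]^3
    occupancy = field(points)              # (1024, 1): "is this point inside the object?"
    print(occupancy.shape)

Imperfect, yes, but a far cry from "no representations of space exist."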

Finally, your concern about who on the team understands these deep theoretical issues is valid. But the fact is: theoretical CS isn’t the bottleneck here — it’s scalable implementation, multimodal pretraining, and architectural experimentation. If anything, what we need isn’t more Solomonoff-style induction or clever data structures — it’s models grounded in perception and action.

The real mistake isn’t that people are trying to cram physical reasoning into LLMs. The mistake is in acting like all transformer models are LLMs, and ignoring the very active (and promising) space of multimodal models that already tackle spatial, embodied, and dynamical reasoning problems — albeit imperfectly.

mumbisChungo 2 days ago

Claude, is that you?

calf 2 days ago

How do we prove a trained LLM has no inductive bias for space, causality, etc.? We can't assume this is true by construction, can we?

dopadelic 2 days ago

Why would we need to prove such a thing? Human vision has strong inductive biases, which is why you can perceive objects in abstract patterns. It's why you can lie down at a park and see a duck in a cloud, and why we can create abstracted representations of things with graphics. Having similar inductive biases makes a model's representations more relatable to the way we perceive.

And you're using the term LLMs again, when vision-based transformers in multimodal models aren't simply LLMs.

calf a day ago

You said that classic LLMs have no inductive bias for causality. So I am simply asking whether any computer scientist has actually proved that. Otherwise it is just a fancy way of saying "LLMs can't reason, they are just stochastic parrots." AFAIK that is not a settled consensus among computer scientists, so using the claim potentially smuggles in an assumption that is not scientifically established. That's why I specifically asked about this claim, which you made a few paragraphs into your response to the parent commenter.