calf 2 days ago

How do we prove a trained LLM has no inductive bias for space, causality, etc.? We can't assume this is true by construction, can we?

dopadelic 2 days ago | parent [-]

Why would we need to prove such a thing? Human vision has strong inductive biases, which is why you can perceive objects in abstract patterns. It's why you can lie down in a park and see a duck in a cloud, and why we can create abstracted graphical representations of things. Giving a model similar inductive biases makes its behavior more relatable to the way we work.
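As a concrete aside on what an architectural inductive bias means (a minimal NumPy sketch, not from this thread): a circular convolution is translation-equivariant by construction — shift the input and the output shifts the same way — whereas a generic dense layer has no such built-in bias.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)

# A local kernel, applied as circular convolution via the FFT.
k = np.zeros(8)
k[:3] = rng.normal(size=3)
conv = lambda v: np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(k)))
shift = lambda v: np.roll(v, 1)

# Convolution commutes with shifts: the translation bias is baked in.
assert np.allclose(conv(shift(x)), shift(conv(x)))

# A random dense layer has no such constraint, so it does not commute.
W = rng.normal(size=(8, 8))
assert not np.allclose(W @ shift(x), shift(W @ x))
```

The point is only that the convolutional architecture enforces the bias regardless of training, while the dense layer would have to learn it from data.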

And you're using the term LLMs again, when the vision transformers in multimodal models aren't simply LLMs.

calf a day ago | parent [-]

You said that classic LLMs have no inductive bias for causality. I am simply asking whether any computer scientist has actually proved that. Otherwise it is just a fancy way of saying "LLMs can't reason, they are just stochastic parrots," and AFAIK not every computer scientist shares that consensus. To rely on that claim is to potentially smuggle in an assumption that is not scientifically settled. That's why I asked specifically about this claim, which you made a few paragraphs into your response to the parent commenter.