calf | 2 days ago
How do we prove a trained LLM has no inductive bias for space, causality, etc.? We can't assume this is true by construction, can we?
dopadelic | 2 days ago | parent
Why would we need to prove such a thing? Human vision has strong inductive biases, which is why you can perceive objects in abstract patterns. It's why you can lie down at a park and see a duck in a cloud, and why we can create abstracted representations of things with graphics. Having inductive biases makes a model more relatable to the way we work. And you're using the term LLMs again, when the vision transformers in multimodal models aren't simply LLMs.
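To make "inductive bias by construction" concrete (my own illustration, not from either commenter): a convolution applies the same kernel at every spatial location, so shifting the input shifts the feature map — translation equivariance is baked into the architecture rather than learned. A minimal NumPy sketch:

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Naive 'valid' 2D cross-correlation: the same kernel slides over
    # every location. That weight sharing is the inductive bias.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
kernel = rng.normal(size=(3, 3))

# Shift the input one pixel to the right; the feature map shifts too.
shifted = np.roll(img, 1, axis=1)
out = conv2d_valid(img, kernel)
out_shifted = conv2d_valid(shifted, kernel)

# Interior columns match up to the shift (edge columns differ due to
# cropping and wrap-around from np.roll).
print(np.allclose(out[:, :-1], out_shifted[:, 1:]))  # True
```

A plain fully-connected layer has no such constraint: it can learn a different response at every pixel, so this property only emerges from training data, if at all. That's the kind of "bias for space" the parent question is asking about.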