jabron 3 hours ago
I'd argue that "assumptions", i.e. the statistical models it uses to predict text, is basically what makes LLMs useful. The problem here is that its assumptions are naive. It only takes the distance into account, as that's what usually determines the correct response to such a question. | ||||||||
jnovek 2 hours ago | parent
I think that’s still anthropomorphization. The point I’m making is that these aren’t “assumptions” as we normally characterize them, at least not from the model’s perspective. We use “assumption” as an analogy, but the analogy gets leaky at the edges (as in this situation).