Lerc 2 days ago
> but I don't think that LLMs can create synthetic a priori knowledge.

Do you think that an LLM has the ability to identify new a priori knowledge? That seems like a lower threshold to meet, but if you combine it with a stochastic process, then it seems inevitable that it could ruminate until it came up with new a priori knowledge.
viccis a day ago | parent
I've said this in another comment, but an example would be to train an LLM on a corpus with ALL mathematical content removed, nothing at all, and then ask it what the shortest distance between two points is. That would be an example of synthetic a priori knowledge.