anon84873628 an hour ago:
I'm skeptical of LLM "reasoning," but they sure as hell know a lot. That's what the embeddings are: a giant map of semantic relationships between concepts.
WorldMaker 21 minutes ago:
Embeddings are still mostly just vectors into n-dimensional clusters. It isn't "knowing" that two things are related, with evidence to back it up; it's guessing that two things are statistically likely to be related, based on trained patterns, and running with that guess without evidence. There's no "semantic understanding" as we would define it. The models are just increasingly good at winning cluster lotteries because we've pushed the amount of training data to incredible heights.
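The "vectors in n-dimensional space" picture above can be made concrete with a toy sketch. The vectors and words below are made up for illustration (real embeddings have hundreds or thousands of dimensions learned from data); the point is only that "relatedness" is a geometric measure like cosine similarity, not an evidenced claim:

```python
import math

# Hypothetical toy 3-dimensional "embeddings" -- purely illustrative,
# not from any real model.
embeddings = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "stock": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "cat" and "dog" point in nearly the same direction, so they score
# as "related"; "cat" and "stock" do not.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
print(cosine_similarity(embeddings["cat"], embeddings["stock"]))
```

Nothing in that computation carries evidence about *why* two concepts are related; it only reports that their learned vectors happen to point in similar directions.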
wiseowise 35 minutes ago:
Encyclopedias and Wikipedia know a lot too. Knowledge isn't much use on its own; what matters is how you use it.
koonsolo an hour ago:
I agree with you, but a big drawback is that the accuracy or confidence of their output can't be estimated. So they surely know a lot, but you can never be sure whether the info is correct.