andy99 3 hours ago:
No, this describes the common understanding of LLMs and adds little beyond just calling it AI. Search is the more accurate model when considering their actual capabilities and their weaknesses in understanding. "Lossy compression of human knowledge" is marketing.
XenophileJKO 3 hours ago | parent:
It is fundamentally and provably different from search, because it captures things along two dimensions that can be combined to infer desired behavior for unobserved examples:

1. Conceptual distillation - research has shown we can find weights/directions that capture and influence outputs aligned with higher-level concepts.

2. Conceptual relations - the internal relationships capture how those concepts relate to each other.

This is how the model can perform tasks and infer information well outside its training data: if the details map onto concepts, then the conceptual relations can be used to infer desirable output.

(The conceptual distillation also appears to include meta-cognitive behavior, as evidenced by Anthropic's interpretability research. Which makes sense to me: what is the most efficient way to replicate irony and humor for an arbitrary subject? Compressing some spectrum of meta-cognitive behavior...)
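A toy sketch of the argument above, in the spirit of the classic word-vector analogy result (not Anthropic's actual method, and with hand-invented 4-dimensional "embeddings" rather than learned ones): if concepts are directions in a vector space, combining those directions yields an answer for a pairing the system never stored explicitly.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented 4-d concept vectors; dims roughly = [royal, male, female, young].
# Real models learn such directions in high-dimensional activation space.
concepts = {
    "king":  [1.0, 1.0, 0.0, 0.0],
    "man":   [0.0, 1.0, 0.0, 0.0],
    "woman": [0.0, 0.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0, 0.0],
}

# Infer "queen" combinatorially, without any stored king->queen lookup:
inferred = [k - m + w for k, m, w in
            zip(concepts["king"], concepts["man"], concepts["woman"])]

best = max(concepts, key=lambda name: cosine(inferred, concepts[name]))
print(best)  # -> queen
```

A pure lookup/search system would need the king-to-queen mapping in its index; here it falls out of how the concept directions relate, which is the distinction the comment is drawing.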