havealight 12 hours ago
The most mind-blowing thing LLMs proved to me was that knowledge is only prediction at scale. If you see “The quick brown fox jumped over the lazy dog” 80,000 times - and enough other sentences involving those words to see how they generally fit together - then the system KNOWS what jumped over the lazy dog. It “knows” it was none other than the quick brown fox.

Any system that models things (stores the relations) in a way that makes it easier and easier to output a prediction about them each time is developing “knowledge” of them, is “learning” about them, and, if it can reliably articulate abstract truths about them, is “intelligent” in that particular way.

When I say somebody knows C better than anyone else, I mean they can reliably articulate all the right things in the right order. They “know” it, they have intelligence, they’ve modeled it enough to make their own reproductions, whether partial or derivative and so on. Being consciously aware of it might be an entirely separate system.
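A minimal sketch of that intuition, in Python: a toy bigram counter (just frequency counts, nothing like a real transformer) that, after enough exposures, reliably answers who jumped over the lazy dog. All the names here are made up for illustration.

    from collections import Counter, defaultdict

    # Toy model: "knowledge" is nothing but relative frequency here.
    # After enough repetitions, relational questions about the sentence
    # get answered correctly from counts alone.
    corpus = ["the quick brown fox jumped over the lazy dog"] * 80_000

    after = defaultdict(Counter)   # word -> what tends to follow it
    before = defaultdict(Counter)  # word -> what tends to precede it

    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            after[prev][nxt] += 1
            before[nxt][prev] += 1

    def most_likely(counter):
        # Most frequent candidate, or None if the word was never seen.
        return counter.most_common(1)[0][0] if counter else None

    # "What jumped over the lazy dog?" - walk backwards from "jumped":
    print(most_likely(before["jumped"]))  # fox
    print(most_likely(before["fox"]))     # brown
    print(most_likely(after["lazy"]))     # dog

Nothing in there stores the answer explicitly; the model just gets better at predicting the next (or previous) word every time it sees the sentence, and the “knowledge” falls out of that.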