shubhamjain a day ago
> The primary counterargument can be framed in terms of Rich Sutton's famous essay, "The Bitter Lesson," which argues that the entire history of AI has taught us that attempts to build in human-like cognitive structures (like embodiment) are always eventually outperformed by general methods that just leverage massive-scale computation.

This reminds me of Douglas Hofstadter, of Gödel, Escher, Bach fame. He rejected all of these statistical approaches to creating intelligence and dug deep into the workings of the human mind [1], often in the most eccentric ways possible.

> ... he has bookshelves full of these notebooks. He pulls one down—it’s from the late 1950s. It’s full of speech errors. Ever since he was a teenager, he has captured some 10,000 examples of swapped syllables (“hypodeemic nerdle”), malapropisms (“runs the gambit”), “malaphors” (“easy-go-lucky”), and so on, about half of them committed by Hofstadter himself.
>
> For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.”

I don't know when, where, or how the next leap toward AGI will come, but it seems very likely it will come through brute-force computation (unfortunately). So much for fifty years of observing Freudian slips.

[1]: https://www.theatlantic.com/magazine/archive/2013/11/the-man...
CuriouslyC a day ago | parent
Brute force will always be part of the story, but it isn't the whole solution. It just lets us take an approach that already works and make it better.