ainch 3 days ago
On that note, Terence Tao gave a good interview to Dwarkesh Patel in which he talked about Kepler. He pointed out that the earlier geocentric models were actually more accurate than Kepler's at the time, in part because so much complexity had been piled onto them to patch minor errors. Kepler's theory was more elegant, but at the time it wasn't necessarily a better model. I think important paradigm shifts can often look like this — there's no reason to expect them to be instantly optimal. Deep learning vs. 'good old-fashioned AI' is another example of this dichotomy; it took a long time for deep learning to establish itself.
Nevermark a day ago | parent
I like this a lot. It's the Innovator's Dilemma for science. The new, simpler tool always has to compete with highly adapted, complex tools to reach a region of value generation. It starts where its greater simplicity, despite fewer complementary adaptations, is a real advantage. Then it slowly accumulates its own set of practical complements that let it excel overall.