Nevermark 2 days ago
> It's only the mechanism that's unprecedented

I think this is correct, and also the point. Neural networks (deep learning models) have been reliably improving year to year for a very long time. Even in the '90s on CPUs, the combination of CPU improvements and training-algorithm improvements translated into a noticeable upward arc. However, they were not yet suitable for anything but small boutique problems. The computing power, speed, RAM, etc. just wasn't there until GPU computing took off.

Since then, compounding GPU power and relatively simple changes in architecture have let deep learning rapidly become relevant in... well, every field where data is relevant. And progress has not just been reliable, but has noticeably accelerated every few years over the last two decades.

So while you are right that today's AI ranges from interesting to moderately helpful but not earth-shattering in many areas, that is what happens when a new wave of technology crosses the threshold of usability. Past example: "Cars are really not much better than horses, and very finicky." But cars were on a completely different arc of improvement.

The limitations of current AI models aside, their generality of expertise (flawed as it might be) is unprecedented. Multi-modal systems, longer context windows, and systems for improving glitchy behavior are a given, and will make big quality differences. Those are obvious requirements with relatively obvious means. We are going to get more than that going forward, just as these models have often been surprisingly useful (at much lower levels and in narrower contexts) in the far and recent past.

This train has been accelerating for over three and a half decades. It isn't going to suddenly slow down because it just passed "Go". The opposite.