modeless · 5 hours ago:
There still seem to be a lot of people who look at results like this and evaluate them purely on the current state. I don't know how you can look at this and not realize that it represents a huge improvement over just a few months ago, that there have been continuous improvements for many years now, and that there is no reason to believe progress stops here. If you project out just one year, even assuming progress stops after that, the implications are staggering.
|
zamadatix · 3 hours ago:
The improvements in tool use and agentic loops have been fast and furious lately, and they've delivered great results. Model growth itself feels more "slow and linear" these days, but what you can do with models as part of an overall system keeps accelerating, and that has been delivering a lot of value. It matters less whether the model can natively keep infinite context or figure things out in one shot, so long as it can orchestrate external tools to achieve the same result over time.
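
(As an aside, here is a minimal sketch of the "agentic loop" pattern being described; call_model and run_tool are hypothetical stand-ins, not a real API, just to make "orchestrate external tools over time" concrete:)

    # Minimal agentic loop sketch: the model doesn't need infinite context
    # or a perfect one-shot answer; it just keeps requesting tools until
    # it declares itself done. call_model and run_tool are hypothetical.

    def call_model(messages: list[dict]) -> dict:
        """Ask the model for its next action, given the conversation so far."""
        raise NotImplementedError("wire up your model API here")

    def run_tool(name: str, args: dict) -> str:
        """Execute an external tool (search, file read, shell, ...)."""
        raise NotImplementedError("wire up your tools here")

    def agent_loop(task: str, max_steps: int = 20) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            action = call_model(messages)
            if action.get("type") == "final_answer":
                return action["content"]
            # The model asked for a tool: run it and feed the result back,
            # accumulating state across turns instead of relying on one
            # giant context window or a single shot.
            result = run_tool(action["tool"], action.get("args", {}))
            messages.append({"role": "tool", "content": result})
        return "gave up after max_steps"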
|
chasd00 · 2 hours ago:
I have to admit, even if model and tooling progress stopped dead today, the world of software development has forever changed and will never go back.
|
uywykjdskn · 25 minutes ago:
Yeah, the software engineering profession is over, even if all improvements stop now.
|
nozzlegear · 3 hours ago:
Every S-curve looks like an exponential until you hit the bend.
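
(nozzlegear's one-liner is easy to check numerically: a logistic curve L / (1 + e^(-k(t - t0))) is nearly indistinguishable from the pure exponential L * e^(k(t - t0)) while t is well below the midpoint t0, and only diverges near the bend. A minimal Python sketch, with arbitrary illustrative parameters:)

    import math

    # Logistic (S-curve) with ceiling L, growth rate k, and midpoint t0.
    L, k, t0 = 100.0, 1.0, 10.0

    def logistic(t):
        return L / (1.0 + math.exp(-k * (t - t0)))

    def exponential(t):
        # The exponential that the logistic resembles before the bend.
        return L * math.exp(k * (t - t0))

    for t in [2, 5, 8, 10, 12]:
        s, e = logistic(t), exponential(t)
        print(f"t={t:>2}  logistic={s:8.3f}  exponential={e:8.3f}  ratio={s / e:.3f}")

    # The ratio stays near 1.0 until t approaches the midpoint t0 = 10,
    # then collapses (0.500 at the midpoint, ~0.119 shortly after): the bend.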
NitpickLawyer · 3 hours ago:
We've been hearing this for 3 years now. And especially 25 was full of "they've hit a wall, no more data, running out of data, plateau this, saturated that". And yet, here we are. Models keep getting better, at broader tasks, and more useful by the month.

kelnos · an hour ago:
Yes, and Moore's law took decades to start failing. Three years of history isn't even close to enough to predict whether we'll see continued exponential improvement or an insurmountable plateau. We could hit it in 6 months or 10 years; who knows. And at least with Moore's law we had some understanding of the physical realities as transistors got smaller and smaller, and could reasonably predict when we'd start to hit limitations. With LLMs, we just have no idea. And that could go either way.

nozzlegear · 2 hours ago:
> We've been hearing this for 3 years now

Not from me you haven't!

> "they've hit a wall, no more data, running out of data, plateau this, saturated that"

Everyone thought Moore's Law was infallible too, right until they hit that bend. What hubris to think these AI models are different! But you've probably been hearing that for 3 years too (though not from me).

> Models keep getting better, at broader tasks, and more useful by the month.

If you say so; I'll take your word for it.

Cyphase · 2 hours ago:
25 is 2025.

nozzlegear · 2 hours ago:
Oh, my bad; the way it was worded made me read it as the name of somebody's model or something.

torginus · 2 hours ago:
Except that with Moore's law, everyone knew decades ahead what the limits of Dennard scaling were (shrinking geometry through smaller optical feature sizes), and roughly when we would reach them. Since then, all improvements have come with tradeoffs, and there has been a definite flattening of progress.

nozzlegear · 2 hours ago:
> Since then, all improvements have come with tradeoffs, and there has been a definite flattening of progress.

Idk, that sounds remarkably similar to these AI models to me.

fmbb · 2 hours ago:
> And yet, here we are.

I dunno. To me it doesn't even look exponential any more. We are at most on the straight part of the incline.

raincole · 3 hours ago:
This quote would be more impactful if people hadn't been repeating it since the GPT-4 days.

kimixa · 2 hours ago:
People have also been saying we'd see 100x quality improvements in software, with a corresponding decrease in cost, since the GPT-4 days. So where is that?

nozzlegear · 2 hours ago:
I agree; I've been informed that people have been repeating it for three years. Sadly, I'm not involved in the AI hype bubble, so I wasn't aware. What an embarrassing faux pas.