| ▲ | alex_duf 6 hours ago |
| What does "fully caught up" mean in the context of an ever evolving technology?
I think I'm in support of open weight models (though there are safety implications), but these things aren't cheap to train and run. This fact alone gives no incentive for leading labs to release cutting edge open weight models. Why spend the money then give the product for free? Now if "fully caught up" means today's level of intelligence is available for free in two years, by then that level of intelligence means very little |
|
| ▲ | vorticalbox 5 hours ago | parent | next [-] |
| It’s never free; you’re shifting costs from paying a company for API use to the power costs of running it locally. |
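A rough back-of-the-envelope sketch of that trade-off, in Python (every number below is an illustrative assumption, not real vendor pricing or hardware data):

    # Break-even sketch: API token pricing vs. local electricity cost.
    # All figures are assumptions for illustration only.
    api_cost_per_mtok = 3.00      # assumed $ per million tokens via an API
    gpu_power_kw = 0.45           # assumed draw of one local GPU, in kW
    electricity_per_kwh = 0.15    # assumed $ per kWh
    local_tok_per_s = 40          # assumed local generation speed, tokens/second

    # Hours of GPU time needed to generate one million tokens locally
    hours_per_mtok = 1_000_000 / local_tok_per_s / 3600
    local_cost_per_mtok = hours_per_mtok * gpu_power_kw * electricity_per_kwh

    print(f"API:   ${api_cost_per_mtok:.2f} / 1M tokens")
    print(f"Local: ${local_cost_per_mtok:.2f} / 1M tokens (power only; hardware not amortized)")

Under these assumed numbers the local power bill comes out well under a dollar per million tokens; the real cost shift is the upfront hardware, which this sketch deliberately leaves out.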
|
| ▲ | stavros 5 hours ago | parent | prev [-] |
| Yeah, I don't understand it: it's a marathon with three companies perpetually a minute ahead, and people keep saying "I expect the stragglers to catch up". The only thing I can see them meaning is what you said, "in a minute the stragglers will be where the leaders were a minute ago", which, yeah, sure. |
| |
| ▲ | lelanthran 12 minutes ago | parent | next [-] |
That's fine. I can afford to wait a minute if it means I pay $10/m instead of $5k/m. |
|
| ▲ | ReliantGuyZ 3 hours ago | parent | prev | next [-] |
By my estimation, there is a point where these models are "good enough" for the vast majority of appropriate tasks, after which further investment by the major labs hits diminishing returns. While they might stay ahead by some measure, the open models will be good enough too, and I assume significantly cheaper, as they are now. Or AGI hits and this theory collapses, but that's feeling less likely every day. |
|
| ▲ | patrickmcnamara 5 hours ago | parent | prev | next [-] |
It's not a marathon, or any race. There is no finish line. It doesn't matter that much that someone is a minute ahead. |
|
| ▲ | mrbombastic 5 hours ago | parent | prev [-] |
It makes perfect sense if you think things cannot improve indefinitely. |
|
| ▲ | PunchyHamster an hour ago | parent | next [-] |
Also, there is a "good enough" point past which improvements for a given use case show heavily diminishing returns. |
|
| ▲ | inciampati 5 hours ago | parent | prev [-] |
They do approximate any function... within the range they're trained on. And that range is human-limited, at least today. |
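To make the "within the range they're trained on" point concrete, here is a minimal sketch (assuming numpy and scikit-learn are installed; the architecture, training interval, and test points are arbitrary choices) of a network that fits sin(x) well on its training interval and degrades outside it:

    # Fit a small MLP on sin(x) over [-pi, pi], then query far outside that range.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(-np.pi, np.pi, size=(2000, 1))
    y_train = np.sin(X_train).ravel()

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    # The first two points lie inside the training range, the last two far outside;
    # a ReLU network extrapolates piecewise-linearly, so the fit breaks down there.
    for x in [0.5, 3.0, 10.0, 30.0]:
        pred = model.predict(np.array([[x]]))[0]
        print(f"x={x:5.1f}  true={np.sin(x):+.3f}  predicted={pred:+.3f}")

The universal approximation results guarantee expressive capacity on a compact domain; they say nothing about behavior beyond the training data, which is the limit this comment is pointing at.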
|