mvkel 6 hours ago

Open weight LLMs aren't supposed to "beat" closed models, and they never will. That isn't their purpose. Their value is as a structural check on the power of proprietary systems; they guarantee a competitive floor. They're essential to the ecosystem, but they're not chasing SOTA.
barrell 6 hours ago

I can attest to Mistral beating OpenAI in my use cases pretty definitively :)
pants2 5 hours ago

> Their value is as a structural check on the power of proprietary systems

Unfortunately, that doesn't pay the electricity bill.
cmrdporcupine 5 hours ago

This may be the case, but DeepSeek 3.2 is "good enough" that it competes well with Sonnet 4 -- maybe 4.5 -- for about 80% of my use cases, at a fraction of the cost. I feel we're only a year or two from a plateau, with the frontier closed models showing diminishing returns versus what's "open".
re-thc 6 hours ago

> Open weight LLMs aren't supposed to "beat" closed models, and they never will. That isn't their purpose.

Do things ever work that way? What if Google did open-source Gemini? Would you say the same? You never know. There's no fixed "supposed to" or "purpose" like that.