byzantinegene 6 hours ago
I would argue we don't need anything near Opus to be productive; Sonnet is plenty.
root_axis 6 hours ago
I use Opus 4.6 as an example because it's the LLM that has been widely recognized by the public as reliably capable of doing real work across many domains. However, the same logic applies to Opus 4.5 and even previous generations. These models have huge parameter counts and large context windows; no training technique can compensate for the lack of those qualities in small, quantized models.
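The point about parameter counts comes down to simple arithmetic: weight memory scales with parameter count times bytes per weight, so quantization shrinks a model's footprint but cannot add parameters (or context capacity) back. A rough sketch, where the 70B figure is purely an illustrative assumption and not any published model size:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (ignores activations and KV cache)."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A hypothetical 70B-parameter model at fp16 vs. int4 quantization:
print(weight_memory_gb(70, 16))  # 140.0 GB
print(weight_memory_gb(70, 4))   # 35.0  GB
```

Quantizing 16-bit weights down to 4 bits cuts memory by 4x, but the model still has the same number of parameters; it doesn't gain the capacity of a larger model.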
JumpCrisscross 6 hours ago
> we don't need anything near Opus to be productive. Sonnet is plenty productive enough

For niche applications, sure. For general use, I think the tendency to use the best model for everything will, to the model publishers' delight, continue. It's just much easier to get a feel for Opus and then do everything with it than to switch back and forth and keep track of the novel ways Haiku found to dumbfuck this Sunday evening.