ausbah (4 hours ago):

Is it really unthinkable that another OSS/local model will be released by DeepSeek, Alibaba, or even Meta that once again gives these companies a run for their money?
zozbot234 (4 hours ago):

> is it really unthinkable that another oss/local model will be released by deepseek, alibaba, or even meta that once again gives these companies a run for their money

Plenty of OSS models have been released as of late, with GLM and Kimi arguably the most interesting for the near-SOTA case ("give these companies a run for their money"). Of course, actually running them locally for anything other than very slow Q&A is hard.
rectang (4 hours ago):

For my working style (fine-grained instructions to the agent), Opus 4.5 is basically ideal. Opus 4.6 and 4.7 seem optimized for longer-running tasks with less back and forth between human and agent; for me, Opus 4.6 was a regression, and it seems Opus 4.7 will be another. This gives me hope that even if future versions of Opus continue to target long-running tasks and get more and more expensive while becoming less and less appropriate for my style, a competitor can build a model akin to Opus 4.5 that suits my workflow, optimizing for other factors like cost.
| |||||||||||||||||||||||
amelius (4 hours ago):

I'm betting on a company like Taalas making a model that is perhaps less capable but 100x as fast, so you could have dozens of agents looking at your problem from different angles simultaneously, and still get better results, faster.
| |||||||||||||||||||||||
embedding-shape (4 hours ago):

Nothing is unthinkable. I could imagine a Transformers.V2 that looks completely different, iterations on Mamba turning out fruitful, or countless other scenarios.
pitched (4 hours ago):

Now that Anthropic has started hiding the chain-of-thought tokens, it will be a lot harder for them.
| |||||||||||||||||||||||
casey2 (2 hours ago):

This regression actually put Anthropic behind the Chinese models.
slowmovintarget (4 hours ago):

Qwen released a new model (3.6) the same day, but the headline was largely buried by Anthropic's release.