ausbah 4 hours ago

is it really unthinkable that another oss/local model will be released by deepseek, alibaba, or even meta that once again gives these companies a run for their money

zozbot234 4 hours ago | parent | next [-]

> is it really unthinkable that another oss/local model will be released by deepseek, alibaba, or even meta that once again gives these companies a run for their money

Plenty of OSS models have been released of late, with GLM and Kimi arguably the most interesting for the near-SOTA case ("give these companies a run for their money"). Of course, actually running them locally for anything other than very slow Q&A is hard.

rectang 4 hours ago | parent | prev | next [-]

For my working style (fine-grained instructions to the agent), Opus 4.5 is basically ideal. Opus 4.6 and 4.7 seem optimized for more long-running tasks with less back and forth between human and agent; but for me Opus 4.6 was a regression, and it seems like Opus 4.7 will be another.

This gives me hope that even if future versions of Opus continue to target long-running tasks and get more and more expensive while becoming less and less appropriate for my style, a competitor could build a model akin to Opus 4.5 that suits my workflow while optimizing for other factors like cost.

DeathArrow an hour ago | parent [-]

Have you tried GLM 5.1?

amelius 4 hours ago | parent | prev | next [-]

I'm betting on a company like Taalas making a model that is perhaps less capable but 100x as fast, where you could have dozens of agents looking at your problem from all different angles simultaneously, and so still get better results, faster.

100ms an hour ago | parent | next [-]

I'm excited for Taalas, but the worry with that suggestion is that it would blow out energy per net unit of work, which kills a lot of Taalas' buzz. Still, it's inevitable that if you make something an order of magnitude faster, folks will just come along and feed it an order of magnitude more work. I hope the middle ground with Taalas is a cottage industry of LLM hosts with small-to-mid-sized budgets hosting last-gen models quite cheaply. Although if they're packed to max utilisation with all the new workloads they enable, latency might not be much better than what we already have today.

andai 4 hours ago | parent | prev [-]

Yeah, it's a search problem. When verification is cheap, reducing success rate in exchange for massively reducing cost and runtime is the right approach.
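The tradeoff can be made concrete with a toy expected-cost calculation (all numbers here are made up for illustration, not from the thread): if verification is cheap and you retry until a verified success, expected cost is cost-per-attempt divided by success rate, so a much cheaper model can win even with a much lower success rate.

```python
def expected_cost(cost_per_attempt: float, success_rate: float,
                  verify_cost: float = 0.0) -> float:
    """Expected cost to reach one verified success, retrying until success.

    Attempts follow a geometric distribution, so the expected number of
    attempts is 1 / success_rate.
    """
    return (cost_per_attempt + verify_cost) / success_rate

# Hypothetical numbers: a "big" model at 100 units/attempt, 90% success,
# vs a "fast" model at 1 unit/attempt, only 30% success.
big_model = expected_cost(cost_per_attempt=100.0, success_rate=0.9)
fast_model = expected_cost(cost_per_attempt=1.0, success_rate=0.3)

print(f"big model:  {big_model:.1f} units per verified success")
print(f"fast model: {fast_model:.1f} units per verified success")
```

Under these assumptions the cheap model is roughly 30x cheaper per verified success, even after tripling its attempt count; the calculation only holds when verification is genuinely cheap and reliable.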

never_inline 4 hours ago | parent [-]

You're underestimating the algorithmic complexity of such brute forcing, and the indirect cost of the brittle code produced by inferior models.

embedding-shape 4 hours ago | parent | prev | next [-]

Nothing is unthinkable: I could imagine a Transformers v2 that looks completely different, iterations on Mamba turning out fruitful, or countless other scenarios.

pitched 4 hours ago | parent | prev | next [-]

Now that Anthropic has started hiding the chain-of-thought tokens, it will be a lot harder for them.

zozbot234 4 hours ago | parent [-]

Anthropic and OpenAI never showed the true chain of thought tokens. Ironically, that's something you only get from local models.

casey2 2 hours ago | parent | prev | next [-]

This regression actually puts Anthropic behind the Chinese models.

slowmovintarget 4 hours ago | parent | prev [-]

Qwen released a new model the same day (3.6). The headline was kind of buried by Anthropic's release, though.

https://news.ycombinator.com/item?id=47792764