amunozo 7 hours ago
I want to believe it's gonna be good, but after trying GPT-5.5 even the most advanced Chinese models seem depressing.
r0b05 6 hours ago
This is a French model, sir.
| |||||||||||||||||||||||
ako 6 hours ago
Then you’ll be happy to learn it’s not Chinese.
| |||||||||||||||||||||||
manishsharan 6 hours ago
I am not following this obsession with SOTA and benchmark rankings. I have been using DeepSeek and GLM models with OpenCode, Codex, and Claude side by side, and I have not found the Chinese models lacking. I enjoy coding, like to maintain full control of my codebase, and deeply care about the GoF patterns, so I am very stringent about what I want the LLM to code and how to code it. From my perspective, they are all about the same.
| |||||||||||||||||||||||
lava_pidgeon 6 hours ago
Honestly, it depends on the context in which this performance matters. Mistral is quite cheap.