anana_ 2 hours ago

It's rather surprising that a solo dev with fairly humble resources can squeeze more performance out of a model than a frontier lab. I'm skeptical of claims that such a fine-tuned model is "better" -- maybe on certain benchmarks, but overall? FYI, the latest iteration of that finetune is here: https://huggingface.co/Jackrong/Qwopus3.5-27B-v3
1dom an hour ago | parent

I feel that's a little misleading. That link doesn't have much affiliation with Qwen or anyone who produces/trained the Qwen models. That doesn't mean it's not good or safe, but it seems quite subjective to call it the latest or greatest Qwen iteration. I can see Hugging Face turning into the same poisoned watering hole as NPM if people fall into the same habit of dropping links without context like that.