neal_ 6 days ago
The better the benchmarks, the worse the model. Subjectively, for me the more advanced models don't follow instructions and are less capable of implementing features or building stuff. I could not tell a difference in blind testing the SOTA models from Gemini, Claude, OpenAI, and DeepSeek. There have been no major improvements in the LLM space since the original models gained popularity. Each release claims to be much better than the last, and every time I have been disappointed and thought it was worse.

First the models stopped putting in effort and felt lazy: tell one to do something and it would tell you to do it yourself. Now it's the opposite and the models go ham changing everything they see; instead of changing one line, SOTA models would rather rewrite the whole project and still not fix the issue.

Two years back I totally thought these models were amazing. I always tested out the newest models and got hyped up about them. For every problem I had, I thought if I just prompted it differently I could get it solved. Oftentimes I spent hours prompting, starting new chats, adding more context. Now I realize that's kinda useless, and it's better to just accept the models as they are rather than try to make them a one-stop shop or stretch their capabilities.

I think this release I won't even test it out; I'm not interested anymore. I'll probably just continue using DeepSeek free and Gemini free. I canceled my OpenAI subscription like 6 months ago, and canceled Claude after the 3.7 disappointment.