Moosdijk 5 hours ago

I wonder why GLM is viewed so positively. Every time I try to build something with it, the output is worse than that of the other models I use (Gemini, Claude), it takes longer to reach an answer, and it frequently gets stuck in a loop.
pkulak 4 hours ago

I've been running Opus and GLM side by side for a couple of weeks now, and I've been impressed with GLM. I will absolutely agree that it's slow, but if you let it cook, it can be really impressive and absolutely on the level of Opus. Keep in mind, I don't really use AI to build entire services; I'm mostly using it to make small changes or help me find bugs, so the slowness doesn't bother me. Maybe if I set it to make a whole web app and it took two days, that would be different. The big kicker for GLM for me is that I can use it in Pi, or whatever harness I like. Even if it were _slightly_ below Opus, and even though it's slower, I'd prefer it. Maybe Mythos will change everything, but who knows.
Mashimo 4 hours ago

I have used GLM 4.7, 5, and 5.1 for about three months now via the OpenCode harness, and I don't remember it ever getting stuck in a loop. You do have to keep it below ~100,000 tokens, or it gets funny in the head. I only use it for hobby projects, though. I paid 3 EUR per month, but that plan is no longer available :( Not sure what I will choose at the end of the month. Maybe OpenCode Go.
spaceman_2020 2 hours ago

I think it offers a very good tradeoff of cost vs. competency. 4.7 is better, but it's also wildly expensive.
Akira1364 5 hours ago

IDK about GLM, but GPT 5.4 Extra High has been great when I've used it in the VS Code Copilot extension. I see no actual reason Opus should consume 3x more quota than it the way it does.
slopinthebag 4 hours ago

You're probably just holding it wrong.