DeathArrow 5 hours ago

> This model? You can run it at Q4 with 70GB of VRAM.

> This beats the latest Sonnet while running locally

Not sure it will beat Sonnet at Q4.

>This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).

For $3500 I can get 7-8 years of GLM using coding plans, have a faster model and much better code quality.
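A quick sanity check on that claim: spreading the ~$3500 Mac Studio price over 7-8 years gives the monthly plan budget it would have to compete with (this is just the arithmetic implied above, not actual plan pricing):

```python
# Back-of-envelope: the $3500 upfront cost expressed as an
# equivalent monthly subscription budget over 7 and 8 years.
upfront = 3500
for years in (7, 8):
    monthly = upfront / (years * 12)
    print(years, round(monthly, 2))  # 7 -> 41.67, 8 -> 36.46
```

So the comparison assumes a coding plan in the rough ballpark of $36-42/month.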

simjnd 4 hours ago | parent | next [-]

> Not sure it will beat Sonnet at Q4.

Very valid. Importance-weighted quantization and TurboQuant on model weights can reduce quantization loss a lot compared to "traditional" Q4, so one can be hopeful.
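To illustrate the idea (a toy sketch, not AWQ or TurboQuant themselves): naive round-to-nearest Q4 calibrates the scale from the max weight magnitude, while importance-aware methods pick the scale that minimizes error on the weights that matter most (e.g. those seeing large activations). The importance vector and grid search below are illustrative assumptions:

```python
import numpy as np

def quantize_q4(w, scale):
    """Symmetric 4-bit round-to-nearest: 16 integer levels in [-8, 7]."""
    return np.clip(np.round(w / scale), -8, 7) * scale

def naive_scale(w):
    """Classic max-abs calibration."""
    return np.abs(w).max() / 7

def weighted_scale(w, importance, n_grid=80):
    """Grid-search the scale minimizing importance-weighted squared error."""
    best_s, best_err = naive_scale(w), np.inf
    for s in np.linspace(0.2, 1.0, n_grid) * naive_scale(w):
        err = np.sum(importance * (w - quantize_q4(w, s)) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

rng = np.random.default_rng(0)
w = rng.normal(size=4096)                # one weight row
importance = rng.exponential(size=4096)  # stand-in for activation magnitudes

err_naive = np.sum(importance * (w - quantize_q4(w, naive_scale(w))) ** 2)
err_aware = np.sum(importance * (w - quantize_q4(w, weighted_scale(w, importance))) ** 2)
print(err_naive, err_aware)  # weighted calibration gives lower (or equal) weighted error
```

Real methods are much more sophisticated (per-group scales, mixed precision, rotation tricks), but this is the basic reason importance-aware Q4 can land closer to full precision than the naive version.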

> For $3500 I can get 7-8 years of GLM using coding plans, have a faster model and much better code quality

But then you own no computer, and that's also assuming prices stay where they are. Anyway, my point was not whether it makes financial sense for everyone. A lot of people are perfectly happy not owning their movies, software, games, cars, or houses. I'm just happy there is a future where people can own and locally run the tech that was trained on their stolen data.

kobalsky 4 hours ago | parent | prev [-]

> For $3500 I can get 7-8 years of GLM

mind sharing what the go-to place is to pay for open models?

simjnd 4 hours ago | parent | next [-]

I recommend OpenRouter (openrouter.ai). It's basically a broker between you and inference providers that lets you pick, try, and switch models from a massive catalog, and it's extremely transparent about usage and pricing.
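Switching models really is just changing one string, since OpenRouter exposes an OpenAI-compatible chat completions endpoint. A minimal sketch using only the standard library (the model ID here is illustrative; check the catalog on openrouter.ai for real IDs, and set `OPENROUTER_API_KEY` to actually send the request):

```python
import json
import os
import urllib.request

payload = {
    "model": "z-ai/glm-4.6",  # swap for any model ID from the catalog
    "messages": [{"role": "user", "content": "Write a haiku about VRAM."}],
}
req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)
# Only fire the request if a key is actually configured:
if os.environ.get("OPENROUTER_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```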

DeathArrow 4 hours ago | parent | prev [-]

You can get GLM coding plans from Z.ai, Ollama Cloud, and OpenCode Go.