jimbob45 4 hours ago
Would having a locally-hosted model offset any of these costs?
kennywinker 3 hours ago
Yes, but that comes at the cost of using a dumber LLM. The state-of-the-art models are only available via commercial APIs, and the best self-hostable ones require $10,000+ GPUs. That matters for coding, where the smarter models make a real difference, but there are so many tasks that an 8B model running on a $200 GPU can handle nicely. Scrape this page and dump JSON? Yeah, that's going to be fine. This is my conclusion after a week or so of running Ollama + qwen3.5:3b self-hosted on a ~10-year-old Dell OptiPlex with only the built-in GPU. You don't need state of the art for simple tasks.
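For concreteness, here's roughly what that kind of task looks like as a script. A minimal Python sketch, assuming Ollama is running on its default port (11434) with the qwen3.5:3b tag pulled; the page URL and extraction prompt are placeholders, not anything from my actual setup:

    # Sketch: fetch a page, ask a local Ollama model to extract structured
    # data, and print the JSON it returns. Assumes Ollama's default HTTP API
    # on localhost:11434 and a pulled model tag (here qwen3.5:3b).
    import json
    import urllib.request

    PAGE_URL = "https://example.com"  # hypothetical page to scrape
    OLLAMA_URL = "http://localhost:11434/api/generate"

    # Grab the raw HTML.
    html = urllib.request.urlopen(PAGE_URL).read().decode("utf-8", "replace")

    # Ask the local model to pull out fields as JSON. format="json" tells
    # Ollama to constrain the output to valid JSON.
    payload = {
        "model": "qwen3.5:3b",
        "prompt": "Extract the page title and all link URLs as JSON "
                  "with keys 'title' and 'links'.\n\n" + html[:8000],
        "format": "json",
        "stream": False,
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())

    print(result["response"])  # the model's JSON output as a string

A small model will get this wrong sometimes, but for "dump this page as JSON" tasks it's usually good enough, and it costs nothing per call.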
robthompson2018 2 hours ago
Our starter plan gives you a machine with 2GB of RAM, which is not enough to run a local LLM. OpenRouter has free models (e.g., Z.ai's GLM 4.5 Air); I'd recommend those.
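If you go the OpenRouter route, the call is just their OpenAI-compatible chat completions endpoint. A minimal Python sketch, assuming an OPENROUTER_API_KEY in the environment; the free GLM 4.5 Air model slug shown is my guess, so check OpenRouter's model list for the exact name:

    # Sketch: call a free OpenRouter model from a small box using the
    # OpenAI-compatible chat completions endpoint.
    import json
    import os
    import urllib.request

    API_KEY = os.environ["OPENROUTER_API_KEY"]  # your OpenRouter key
    URL = "https://openrouter.ai/api/v1/chat/completions"

    body = {
        "model": "z-ai/glm-4.5-air:free",  # assumed slug for the free GLM 4.5 Air
        "messages": [
            {"role": "user", "content": "Summarize this page in one sentence: ..."}
        ],
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())

    print(reply["choices"][0]["message"]["content"])

Since it's all remote, this runs fine on a 2GB machine; the only local cost is the HTTP call.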