colonCapitalDee | 5 days ago
You should consider self-hosting in the cloud. When you start coding, run a script that spins up a new VM and loads the LLM of your choice, then run another script to spin it back down when you're done. For intermittent use this works great and is much cheaper than buying your own hardware, plus it's future-proof. It does admittedly lack the cool factor of truly running locally, though.
menaerus | 3 days ago | parent
Too expensive from what I have seen. The price for reasonably large GPU rigs that can host medium to large models is anywhere between ~$5/hr and ~$9/hr. That's ~$40-$72 for an 8-hour working day, or ~$800-$1,500 for ~20 working days in a month. That's ~$1,000 a month on average. The math doesn't work out for me.
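The back-of-envelope math above can be checked with a few lines. Note the hourly rates, hours per day, and days per month are all the commenter's assumptions, not current market prices:

```python
# Back-of-envelope cloud GPU cost estimate using the figures from the
# comment above (assumed rates, not current prices).
HOURS_PER_DAY = 8
DAYS_PER_MONTH = 20

def monthly_cost(rate_per_hour: float) -> float:
    """Monthly rental cost at a given hourly GPU rate."""
    return rate_per_hour * HOURS_PER_DAY * DAYS_PER_MONTH

low, high = monthly_cost(5), monthly_cost(9)
print(f"${low:.0f}-${high:.0f} per month")  # prints "$800-$1440 per month"
```

At the quoted rates the range comes out to roughly $800-$1,440/month, which matches the ~$1,000 average in the comment.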
flashgordon | 5 days ago | parent
Yeah, this is the setup I am considering for now, as it gives all the "freedom" with only a hardware dependence. Weirdly enough, I noticed Qwen3 (coder) was also almost the same price as Opus 4, which was strange.