▲ | tick_tock_tick 2 days ago |
You need 100+ GB of RAM and a top-of-the-line GPU to run legacy models at home. Maybe if you push it, that setup will let you handle 2, maybe 3 people. You think anyone is going to make money on that vs $20 a month to Anthropic?
▲ | lelanthran 2 days ago | parent | next [-]
> You need a 100+gigs ram and a top of the line GPU to run legacy models at home. Maybe if you push it that setup will let you handle 2 people maybe 3 people.

This doesn't seem correct. I run legacy models with only slightly reduced performance on 32GB RAM and a 12GB VRAM GPU right now. BTW, that's not an expensive setup.

> You think anyone is going to make money on that vs $20 a month to anthropic?

Why does it have to be run as a profit-making machine for other users? When running at home, it can serve as a useful service for the entire household. After all, we're not talking about specialised coding agents using this[1], just normal user requests.

====================================

[1] For an outlay of $1k for a new GPU I can run a reduced-performance coding LLM. Once again, when it's only myself using it, the economics work out. I don't need the agent to be fully autonomous because I'm not vibe coding - I can take the reduced-performance output, fix it, and use it.
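(For context, the 12GB-VRAM claim holds up to back-of-envelope arithmetic: at 4-bit quantization a model weight takes roughly half a byte. A minimal sketch; the parameter counts, bytes-per-weight, and overhead factor are illustrative assumptions, not figures from this thread:)

```python
# Rough estimate of whether a quantized model fits in a 12 GB GPU.
# Assumes ~0.5 bytes per weight (4-bit quantization) plus ~15%
# overhead for KV cache and runtime buffers -- both assumed figures.

def vram_needed_gb(params_billions: float,
                   bytes_per_weight: float = 0.5,
                   overhead: float = 1.15) -> float:
    """Estimated VRAM in GB for a quantized model of the given size."""
    return params_billions * 1e9 * bytes_per_weight * overhead / 1e9

for params in (7, 13, 34):
    need = vram_needed_gb(params)
    verdict = "fits" if need <= 12 else "spills to system RAM"
    print(f"{params}B params: ~{need:.1f} GB -> {verdict} on a 12 GB GPU")
```

By this estimate a 7B or 13B model fits comfortably in 12 GB, while a ~34B model would need to offload layers to system RAM, which matches the "only slightly reduced performance" experience described above.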
▲ | jayd16 2 days ago | parent | prev | next [-]
Can you explain to me where Anthropic (or its investors) expect to be making money, if that's what it actually costs to run this stuff?
▲ | 2 days ago | parent | prev [-]
[deleted]