▲ barrkel 4 hours ago
People spend a lot more than that on a car they use less, especially if they're in tech. The RAM choice was because I have never regretted buying more RAM — wait, rephrasing: I have never regretted buying more RAM; it's practically always a better trade than a slightly faster CPU, and 96GB DIMMs were at a sweet spot in price compared to 128GB DIMMs. That, and the ability to hold big LLMs in memory for some local inference, even if it's slow mixed CPU/GPU inference or paged on demand. And if not for big LLMs, then to keep models cached for quick swapping.
▲ sfn42 2 hours ago | parent
I bought a 4-year-old car for significantly less than that. And I can get a computer that does 99% of what your monster can do for like 10% of the price. If I want LLM inference, I can get that for like $20 a month or whatever. I don't mean to judge; it's your money, but to me it seems like an enormous waste. Just like spending $100k on a car when you can get one for $15k that does pretty much exactly the same job.