| ▲ | mifreewil 6 hours ago |
| You'd want to get something like an RTX Pro 6000 (~$8,500 - $10,000) or at least an RTX 5090 (~$3,000). That's the easiest thing to do, or a cluster of some lower-end GPUs. Or a DGX Spark (~$3,000; there are some better options from manufacturers other than Nvidia). |
|
| ▲ | mitjam 6 hours ago | parent [-] |
| Yes, I also considered the RTX 6000 Pro Max-Q, but it’s quite expensive and probably only makes sense if I can use it for other workloads as well. Interestingly, its price hasn’t gone up since last summer, here in Germany. |
| ▲ | storus 6 hours ago | parent [-] |
| I have a Mac Studio with 512GB RAM, 2x DGX Spark, and an RTX 6000 Pro WS (planning to buy a few more in the Max-Q version next). I'm wondering if we'll ever see local inference as "cheap" as it is right now, given RAM/SSD price trends. |
| ▲ | clusterhacks 5 hours ago | parent [-] |
| Good grief. I'm here cautiously telling my workplace to buy a couple of DGX Sparks for dev/prototyping, and you have better hardware in hand than my entire org. What kind of experiments are you doing? Did you try out exo with a DGX doing prefill and the Mac doing decode? I'm also totally interested in hearing what you've learned working with all this gear. Did you buy all this stuff out of pocket? |
| ▲ | storus 3 hours ago | parent [-] |
| Yeah, exo was one of the first things I tried. The Mac Studio has decent throughput, roughly at the level of a 3080, which makes it great for token generation, and the Sparks have decent compute, good either for prefill or for running non-LLM models that need compute (Segment Anything, Stable Diffusion, etc.). The RTX 6000 Pro just crushes them all (it's essentially like having 4x 3090s in a single GPU). I bought the 2 Sparks partly to play with Nvidia's networking stack and learn their ecosystem, though they're a bit of a mixed bag, as they don't expose some Blackwell-specific features that make a difference. I bought it all to be able to run local agents (I write AI agents for a living) and develop my own ideas fully. I was also wrapping up grad studies at Stanford, so they came in handy for some projects there. I paid out of pocket but can amortize it on taxes. |
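[Editor's note: a back-of-envelope sketch of the "Mac Studio decodes at about a 3080's level" claim above. Autoregressive decode is memory-bandwidth bound, so tokens/s is roughly bandwidth divided by the bytes streamed per token (about the model's weight size). All numbers below are illustrative assumptions, not measurements from the thread.]

```python
# Rough estimate: decode reads the full weights once per generated token,
# so tokens/s <= memory_bandwidth / model_size. Bandwidth and model-size
# figures here are assumed round numbers, not benchmarks.

def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode throughput for a bandwidth-bound workload."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical ~70 GB of weights (e.g. a ~70B model at 8-bit).
model_gb = 70.0

mac_studio = decode_tokens_per_sec(819.0, model_gb)  # M-Ultra-class bandwidth (assumed)
gpu_3080 = decode_tokens_per_sec(760.0, model_gb)    # 3080-class bandwidth (assumed)

print(f"Mac Studio: ~{mac_studio:.1f} tok/s, 3080-class: ~{gpu_3080:.1f} tok/s")
# → Mac Studio: ~11.7 tok/s, 3080-class: ~10.9 tok/s
```

The similar bandwidth figures are why the two land so close for decode, while prefill (compute-bound) favors the Sparks' GPUs.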
| ▲ | clusterhacks an hour ago | parent [-] |
| Very cool, thanks for the info. That you write AI agents for a living is fascinating to hear. We aren't even really looking at how to use agents internally yet; local agents are completely off the radar at my org, despite some apps where they'd make really good supplemental resources. What does deployment look like for your agents? You're clearly exploring a lot of different approaches... |
|