noosphr, 6 hours ago:
Home rigs like that are no longer cost effective. You're better off buying an RTX Pro 6000 outright. This holds for the sticker price, the supporting hardware, the electricity to run it, and the cost of cooling the room you run it in.
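Back-of-the-envelope in Python (the VRAM and TDP figures are the published specs; the prices are illustrative assumptions, not quotes):

    # Rough comparison: 3x RTX 5090 vs. a single RTX Pro 6000.
    # price_usd values are assumed street prices; adjust to taste.
    rigs = {
        "3x RTX 5090":     {"cards": 3, "vram_gb": 32, "tdp_w": 575, "price_usd": 3000},
        "1x RTX Pro 6000": {"cards": 1, "vram_gb": 96, "tdp_w": 600, "price_usd": 9000},
    }

    for name, r in rigs.items():
        vram = r["cards"] * r["vram_gb"]
        tdp = r["cards"] * r["tdp_w"]
        cost = r["cards"] * r["price_usd"]
        print(f"{name}: {vram} GB VRAM, {tdp} W TDP, ~${cost}")

    # Same 96 GB either way, but the single card draws about a third of
    # the power and needs no multi-GPU PSU, cooling, or parallelism tricks.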
|
torginus, 6 hours ago:
I was just watching this video about a piece of Chinese industrial equipment designed for replacing BGA chips such as flash or RAM with a good deal of precision: https://www.youtube.com/watch?v=zwHqO1mnMsA

I wonder how well the aftermarket memory-surgery business on consumer GPUs is doing.

dotancohen, 3 hours ago:
I wonder how well the ophthalmologist is doing. These guys are going to be paying him a visit, playing around with those lasers and no PPE.

CamperBob2, 2 hours ago:
Eh, I don't see the risk, no pun intended. The laser isn't collimated, and it's not going to be in focus anywhere but on-target. It's also probably in the long-wave range, well above 1000 nm, which the eye doesn't focus. At the end of the day it's no different from any other source of spot heating. I get more nervous around some of the LED flashlights you can buy these days. I want one. Hot air blows.

ThrowawayTestr, 4 hours ago:
LTT recently did a video on upgrading a 5090 to 96 GB of VRAM.
|
|
throw4039, 5 hours ago:
Yeah, the pricing for the RTX Pro 6000 is surprisingly competitive with the gamer cards (at actual street prices, not MSRP). A 3x5090 rig would need significant tuning/downclocking to run from a single North American 15 A outlet, and the cost of the beefier supporting equipment (cooling, PSU, UPS, etc.) would eat up the price difference, to say nothing of future expansion possibilities.
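A quick sketch of the 15 A problem (Python; the 80% figure is the usual NEC derating for continuous loads, and the 250 W system overhead is an assumption):

    # Can three stock 575 W RTX 5090s run from one 15 A / 120 V outlet?
    volts, amps = 120, 15
    breaker_w = volts * amps              # 1800 W peak
    continuous_w = int(breaker_w * 0.8)   # 1440 W usable for continuous loads

    cards, stock_limit_w = 3, 575
    system_overhead_w = 250               # assumed: CPU, RAM, fans, PSU losses

    per_card_w = (continuous_w - system_overhead_w) // cards
    print(f"Stock draw: {cards * stock_limit_w + system_overhead_w} W vs {continuous_w} W budget")
    print(f"Per-card cap needed: ~{per_card_w} W (vs {stock_limit_w} W stock)")
    # ~396 W per card, i.e. roughly a 30% power-limit cut (e.g. via nvidia-smi -pl).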
|
mikae1, 6 hours ago:
Or perhaps a 512 GB Mac Studio. The 671B R1 runs on it at Q4 quantization.
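The napkin math on why it fits (Python; the 4.5 bits/weight is an assumed effective rate, since Q4 quants keep some tensors at higher precision):

    # Does DeepSeek R1 (671B parameters) at Q4 fit in 512 GB of unified memory?
    params = 671e9
    bits_per_weight = 4.5                 # assumed effective rate for a Q4 quant
    weights_gb = params * bits_per_weight / 8 / 1e9
    print(f"Weights: ~{weights_gb:.0f} GB of 512 GB")  # ~377 GB, leaving headroom
    # The remainder goes to KV cache, the OS, and everything else on the machine.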
redrove, 5 hours ago:
I wouldn't say runs. More of a gentle stroll.

storus, 5 hours ago:
I run it all the time; token generation is pretty good. Just large contexts are slow, but you can hook up a DGX Spark via the Exo Labs stack and outsource token prefill to it. The upcoming M5 Ultra should be faster than the Spark at token prefill as well.

embedding-shape, 4 hours ago:
> I run it all the time; token generation is pretty good.

Since you didn't actually give numbers for prompt processing or generation speed, you aren't really giving the whole picture here. What are the prompt-processing and token-generation rates actually like?

storus, 4 hours ago:
I addressed both points: I mentioned you can offload token prefill (the slow part, 9 t/s) to a DGX Spark. Token generation is at 6 t/s, which is acceptable.
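To put those rates in perspective, a minimal Python sketch of end-to-end wall time at the quoted 9 t/s prefill and 6 t/s generation (network overhead to the Spark ignored):

    # Wall time = prefill time + generation time at the quoted rates.
    def wall_time_s(prompt_tokens: int, output_tokens: int,
                    prefill_tps: float = 9.0, gen_tps: float = 6.0) -> float:
        return prompt_tokens / prefill_tps + output_tokens / gen_tps

    for prompt, out in [(500, 500), (4000, 1000), (16000, 1000)]:
        print(f"{prompt}-token prompt, {out}-token reply: ~{wall_time_s(prompt, out)/60:.1f} min")
    # 500/500 is ~2.3 min; a 16k-token prompt alone is ~30 min of prefill,
    # which is why large contexts hurt.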
hasperdi, 5 hours ago:
With quantization, converting it to an MoE model... it can be a fast walk.