refulgentis | 2 days ago
It's been such a mind-boggling decline in intellect, combined with really odd and intense conspiratorial behavior around crypto, that I dug into it a bit a few months ago. My weak, uncited understanding from then is that they're poorly positioned: in our set they're still the guys who write you a big check for software, but in the VC set they're a joke. They mistook carpet-bombing investment for something that scales and went all in on way too many crypto firms. Now they've embarrassed themselves with a ton of assets that need to be marked down; the firm is clearly behind the other bigs, but there's no forcing function to actually do the markdowns. So we get primal screams about politics and LLM-generated articles about how a $9K video card is the perfect blend of price and performance.

There are other comments here effusively praising their unique technical expertise. I maintain a llama.cpp client on every platform you can think of, and nothing in this article makes any sense. If you're training, you wouldn't do it on only four $9K GPUs that you own. If you're inferencing, you're not getting much more out of this than you would from a ~$2K Framework desktop.
NitpickLawyer | 2 days ago
> If you're inferencing, you're not getting much more out of this than you would from a ~$2K Framework desktop.

I was with you up until here. Come on! CPU inferencing is not it; even Macs struggle with bigger models and longer contexts (especially visible once agentic workloads go past 32k tokens). The PRO 6000 is the first GPU from their "workstation" series that actually makes sense to own.
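For a rough sense of the gap, here's a bandwidth-bound back-of-envelope sketch in Python. All the numbers are assumptions, not benchmarks: ~256 GB/s for the Framework Desktop's LPDDR5X, ~1.8 TB/s for the PRO 6000's GDDR7, and a ~40 GB 4-bit quant of a 70B-class dense model, treating single-stream decode as purely memory-bandwidth-bound:

    # Single-stream decode on a dense model streams the full set of
    # active weights from memory for every generated token, so memory
    # bandwidth sets a hard ceiling on tokens/sec.
    def decode_ceiling_tok_s(model_bytes: float, bandwidth_bytes_s: float) -> float:
        return bandwidth_bytes_s / model_bytes

    # Assumed round numbers, not measurements:
    model_q4 = 40e9        # ~70B params at ~4-bit quant
    framework_bw = 256e9   # Strix Halo LPDDR5X, ~256 GB/s
    pro_6000_bw = 1.8e12   # RTX PRO 6000 GDDR7, ~1.8 TB/s

    for name, bw in [("Framework Desktop", framework_bw),
                     ("RTX PRO 6000", pro_6000_bw)]:
        print(f"{name}: ~{decode_ceiling_tok_s(model_q4, bw):.0f} tok/s ceiling")

And that ceiling ignores prompt processing entirely, which is where long agentic contexts hurt the most on bandwidth- and compute-limited hardware.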
CamperBob2 | 2 days ago
> If you're inferencing, you're not getting much more out of this than you would from a ~$2K Framework desktop.

Well, you're getting the ability to maintain a context bigger than 8K or so, for one thing.
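For scale, a minimal KV-cache sketch, assuming a Llama-3-70B-style layout (80 layers, 8 KV heads of dimension 128, fp16 cache); the shape numbers are assumptions, not anything from the article:

    # KV cache per context token: 2 (K and V) * layers * kv_heads
    # * head_dim * bytes per element.
    def kv_cache_bytes(ctx_tokens: int, layers: int = 80, kv_heads: int = 8,
                       head_dim: int = 128, dtype_bytes: int = 2) -> int:
        return 2 * layers * kv_heads * head_dim * dtype_bytes * ctx_tokens

    for ctx in (8_192, 32_768, 131_072):
        print(f"{ctx:>7} tokens -> ~{kv_cache_bytes(ctx) / 2**30:.1f} GiB KV cache")

That sits on top of the weights themselves, so context length drives memory requirements about as hard as model size does.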