chis 2 days ago

A16Z is consistently the most embarrassing VC firm at any given point in time. I guess optimistically they might be doing “outrage marketing” but it feels more like one of those places where the CEO is just an idiot and tells his employees to jump on every trend.

The funny part is that they still make money. It seems like once you’ve got the connections, being a VC is a very easy job these days.

orionsbelt 2 days ago | parent | next [-]

VC is a marketing game. You want to be attractive to founders, so that the best founders/companies come to you and want to choose you.

aprilthird2021 2 days ago | parent [-]

But is gassing up founders something they want? Idk, maybe. But just remember these guys' crypto play; it feels like they'll just yes-man you off a cliff if you're a founder...

doctoboggan 2 days ago | parent [-]

Yes, people like that even if they think it doesn't work on them. Just like people who say advertising doesn't work on them when it really does work on us all.

obscure-enigma 2 days ago | parent | prev | next [-]

YC seems to be hopping on every trend even more than A16Z does. The latter still bets on momentum, not just heat in the game.

jryle70 2 days ago | parent | prev | next [-]

> consistently the most embarrassing VC firm at any given point in time

Based on what? Your feelings?

> being a VC is a very easy job these days.

There you go. Why hasn't everyone who has connections become a VC?

refulgentis 2 days ago | parent | prev | next [-]

It's been such a mind-boggling decline in intellect, combined with really odd and intense conspiratorial behavior around crypto, that I dug into it a bit a few months ago.

My weak, uncited understanding from then is that they're poorly positioned: in our set they're still the guys who write you a big check for software, but in the VC set they're a joke. They misunderstood carpet-bomb investing as something that scales and went all in on way too many crypto firms. Now they've embarrassed themselves with a ton of assets that need to get marked down; the firm is clearly behind the other bigs, but there's no forcing function to do the markdowns.

So we get primal screams about politics and LLM-generated articles about how a $9K video card is the perfect blend of price and performance.

There are other comments effusively praising them for their unique technical expertise. I maintain a llama.cpp client on every platform you can think of, and nothing in this article makes any sense. If you're training, you wouldn't do it on only four $9K GPUs that you own. If you're inferencing, you're not getting much more out of this than you would a ~$2K Framework desktop.

NitpickLawyer 2 days ago | parent | next [-]

> If you're inferencing, you're not getting much more out of this than you would a ~$2K Framework desktop.

I was with you up till here. Come on! CPU inferencing is not it, and even Macs struggle with bigger models and longer contexts (especially visible when agentic stuff gets past 32k tokens).

The PRO 6000 is the first GPU from their "workstation" series that actually makes sense to own.

refulgentis 2 days ago | parent [-]

Er, CPU inferencing? :) I didn't think I mentioned that!

The thing about the Framework Desktop is that it has unified memory shared with the GPU, so, much like an M-series Mac, you can inference disproportionately large models.
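For a sense of scale (rough assumptions on my part, not benchmarks of any particular box), weight footprint alone with a ~10% overhead fudge factor:

    # Back-of-envelope: which model weights fit in 128 GB of unified memory.
    # Parameter counts and quantizations below are illustrative assumptions.

    def weight_gb(params_b: float, bits_per_weight: float, overhead: float = 1.1) -> float:
        """Approximate in-memory size of the weights, in GB."""
        return params_b * 1e9 * bits_per_weight / 8 / 1e9 * overhead

    for params_b in (20, 70, 120):
        for bits in (16, 8, 4):
            print(f"{params_b}B @ {bits}-bit ~= {weight_gb(params_b, bits):.0f} GB")

A ~120B model at 4-bit is roughly 66 GB of weights, which still leaves headroom for KV cache in 128 GB; that's the "disproportionately large" part.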

CamperBob2 2 days ago | parent | prev [-]

> If you're inferencing, you're not getting much more out of this than you would a ~$2K Framework desktop.

Well, you're getting the ability to maintain a context bigger than 8K or so, for one thing.

refulgentis 2 days ago | parent [-]

Well, no; we're off by a factor of about 64x at the very least: a 64 GB M2 Max/M4 Max tops out at about 512K context for 20B params, and the Framework desktop I'm referencing has 128 GB of unified memory.
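Back-of-envelope, with a made-up but plausible 20B-class shape (the layer/head counts below are my assumptions, not any specific model):

    # Rough KV-cache math for a hypothetical 20B-class model.
    # All shape numbers are assumptions for illustration.

    layers, kv_heads, head_dim = 48, 4, 128   # assumed GQA layout
    bytes_per_elem = 2                        # fp16 K/V entries

    kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
    print(kv_bytes_per_token // 1024, "KiB of KV cache per token")          # ~96 KiB

    for ctx in (8_192, 131_072, 524_288):
        print(f"{ctx:>7} tokens -> ~{kv_bytes_per_token * ctx / 1e9:.1f} GB of KV cache")

~512K tokens comes out around 50 GB of KV cache, which plus ~11 GB of 4-bit weights lands near the 64 GB mark, while 8K tokens is well under 1 GB. That's where the ~64x comes from.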

CamperBob2 2 days ago | parent [-]

What's the TTFT like on a GPU-poor rig, though, once you actually take advantage of large contexts?

refulgentis 2 days ago | parent [-]

I guess I'd say: why is the Framework perceived as GPU-poor? I don't have one, but I also don't know why TTFT would be significantly worse than on an M-series (it's a good GPU!)

CamperBob2 2 days ago | parent [-]

Compared to 4x RTX 6000 Blackwell boards, it's GPU poor. There has to be a reason they want to load up a tower chassis with $35K worth of GPUs, right? I'd have to assume it has strong advantages for inference as well as training, given that the GPU has more influence on TTFT with longer contexts than the CPU does.
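Back-of-envelope on the TTFT point (the throughput figures below are placeholder assumptions, not measurements of either machine): prefill is roughly compute-bound at about 2 * params FLOPs per prompt token, so time to first token scales with prompt length divided by whatever compute you can throw at it.

    # Rough time-to-first-token estimate; hardware throughput numbers are
    # placeholder assumptions, not benchmarks.

    params = 20e9            # 20B dense model (assumption)
    prompt_tokens = 32_768   # long agentic-style prompt

    prefill_flops = 2 * params * prompt_tokens   # ~1.3e15 FLOPs, ignoring the attention term

    for name, tflops in [("modest unified-memory box", 40), ("multi-GPU workstation", 800)]:
        print(f"{name}: ~{prefill_flops / (tflops * 1e12):.1f} s to first token")

Whatever the exact numbers, that's the sense in which GPU compute, not CPU, dominates TTFT at long context.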

refulgentis 2 days ago | parent [-]

Right. I'd suggest that the idea that 128 GB of GPU RAM gives you only an 8K context is itself a sign it's worth revising priors such as "it has strong advantages for inference as well as training".

As Mr. Hildebrand used to say, when you assume, you make...

(also note the article specifically frames this speccing out as about training :) not just me suggesting it)

aprilthird2021 2 days ago | parent | prev | next [-]

Sequoia is also increasingly embarrassing. A shame because it wasn't but 10 years ago that these firms seemed like they were leading the charge of world-changing innovation, etc...

the_snooze 2 days ago | parent [-]

Increasingly? This is the Sequoia that wrote thousands of words on Sam Bankman-Fried and uncritically said little more than "he's so quirky and smart! ^_^" https://web.archive.org/web/20221027181005/https://www.sequo...

aprilthird2021 2 days ago | parent [-]

A great example. This wasn't but just 3 years ago, so definitely part of their increasing slide into embarrassment...
