blitzar 9 days ago

> widely-available H100 GPUs

Just looked in the parts drawer at home and don't seem to have a $25,000 GPU for some inexplicable reason.

Kurtz79 9 days ago | parent | next [-]

Does it even make sense to call them "GPUs"? (I just checked NVIDIA's product page for the H100, and it is indeed labeled as one.)

There should be a quicker way to differentiate between "consumer-grade hardware that is mainly meant for gaming and can also run LLM inference in a limited way" and "business-grade hardware whose main purpose is AI training or running inference for LLMs".

blitzar 9 days ago | parent | next [-]

We are fast approaching the return of the math coprocessor. In fashion they say trends tend to reappear roughly every two decades; it's overdue.

egorfine 9 days ago | parent | next [-]

Yeah, I would love for Nvidia to introduce a faster update cycle for their hardware, so that we'll have models like "H201", "H220", etc.

I think it would also make sense to replace the "H" with a brand number, sort of like they already do for consumer GPUs.

So then maybe one day we'll have a math coprocessor called "Nvidia 80287".

beAbU 9 days ago | parent | prev | next [-]

I remember building high-end workstations for a summer job in the 2000s, where I had to fit Tesla cards into the machines. I don't remember what their device names were; we just called them Tesla cards.

"Accelerator card" makes a lot of sense to me.

WithinReason 9 days ago | parent | prev [-]

It's called a tensor core, and it's in most GPUs.

genewitch 9 days ago | parent | prev | next [-]

"GPGPU" was something from over a decade ago; for general purpose GPU computing

hnuser123456 8 days ago | parent [-]

Yeah, Crysis came out in 2007 and could run physics on the GPU.

AlphaSite 8 days ago | parent | prev | next [-]

I think Apple calls them NPUs and Broadcom calls them XPUs. Given they're basically the number 2 and 3 accelerator manufacturers, one of those probably works.

codedokode 9 days ago | parent | prev | next [-]

By the way, I wonder: which has more performance, a $25,000 professional GPU or a bunch of cheaper consumer GPUs costing $25,000 in total?

omneity 9 days ago | parent [-]

Consumer GPUs in theory, and by a large margin (ten 5090s will eat an H100's lunch, with roughly six times the aggregate bandwidth, three times the VRAM, and a relatively similar compute ratio), but your bottleneck is the interconnect, which is intentionally crippled to keep Beowulf-style GPU clusters from eating into Nvidia's datacenter market.

The last consumer GPU with NVLink was the RTX 3090. Even the workstation-grade GPUs lost it.

https://forums.developer.nvidia.com/t/rtx-a6000-ada-no-more-...
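
Rough back-of-envelope numbers, for what it's worth. The spec figures below are approximate list-sheet values, and the single PCIe link per card is the assumption doing the work:

  # Approximate spec-sheet figures: SXM H100 vs. GeForce RTX 5090.
  h100 = {"vram_gb": 80, "mem_bw_tbs": 3.35, "link_gbs": 900}    # NVLink
  rtx5090 = {"vram_gb": 32, "mem_bw_tbs": 1.79, "link_gbs": 64}  # PCIe 5.0 x16

  n = 10  # ten 5090s for roughly one H100's street price
  print(n * rtx5090["vram_gb"], "GB vs", h100["vram_gb"], "GB VRAM")
  print(round(n * rtx5090["mem_bw_tbs"], 1), "vs", h100["mem_bw_tbs"], "TB/s memory BW")
  # The catch: anything crossing cards moves at PCIe speed, not NVLink speed.
  print(rtx5090["link_gbs"], "vs", h100["link_gbs"], "GB/s card-to-card")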

sigbottle 9 days ago | parent [-]

H100s also have custom async WGMMA instructions, among other things. From what I understand, the async instructions at least formalize the notion of pipelining, which engineers were already using implicitly: to optimize memory accesses you effectively overlap them with compute in that kind of optimal parallel manner.
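
The pipelining idea in a hand-wavy Python sketch (every name here is made up for illustration; the real thing is warp-group-level CUDA using TMA/WGMMA):

  # Double-buffered software pipelining: kick off the next tile's load
  # while computing on the current tile, so memory latency hides behind
  # compute instead of stalling it.
  def pipelined_matmul(tiles, load_async, wait, compute):
      if not tiles:
          return
      bufs = [None, None]
      bufs[0] = load_async(tiles[0])  # prime the pipeline
      for i in range(len(tiles)):
          if i + 1 < len(tiles):
              bufs[(i + 1) % 2] = load_async(tiles[i + 1])  # prefetch next tile
          compute(wait(bufs[i % 2]))  # block only when the data is needed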

washadjeffmad 8 days ago | parent | prev | next [-]

I just specify SXM (node) when I want to differentiate from PCIe. We have H100s in both.

addandsubtract 9 days ago | parent | prev | next [-]

We could call the consumer ones GFX cards, and keep GPU for the matrix-multiplying ones.

beAbU 9 days ago | parent [-]

GPU stands for "graphics processing unit" so I'm not sure how your suggestion solves it.

Maybe renaming the device to an MPU, where the M stands for "matrix/math/mips", would make it more semantically correct?

8 days ago | parent | next [-]
[deleted]
rebolek 8 days ago | parent | prev [-]

I think that G was changed to "general", so now it's "general processing unit".

rpdillon 8 days ago | parent | next [-]

This doesn't seem to be true at all. It's a highly specialized chip for doing highly parallel operations. There's nothing general about it.

I looked around briefly and could find no evidence that it's been renamed. Do you have a source?

fouc 8 days ago | parent | prev [-]

CPU is already the general (computing) processing unit, so that wouldn't make sense.

amelius 9 days ago | parent | prev [-]

Well, does it come with graphics connectors?

OliverGuy 9 days ago | parent [-]

Nope, it doesn't have any of the hardware required to even process graphics, IIRC.

diggan 9 days ago | parent [-]

Although the RTX Pro 6000 is not consumer-grade, it does come with graphics ports (four DisplayPorts) and does render graphics like a consumer card :) So it seems the difference between the segments is becoming smaller, not bigger.

simpleintheory 9 days ago | parent [-]

That’s because it’s intended as a workstation GPU not one used in servers

diggan 9 days ago | parent [-]

Sure, but it still sits in the "business-grade hardware whose main purpose is AI training or running inference for LLMs" segment the parent mentioned, yet it has graphics connectors. So all I'm saying is that looking at that alone won't tell you which segment a GPU belongs to.

namibj 8 days ago | parent [-]

I'd like to point at the first-revision AMD MI50/MI60 cards, which were at the time the most powerful GPUs on the market, at least by memory bandwidth.

I'd define a GPU as something that "can output a contemporary display connector signal and is more than just a RAMDAC/framebuffer-to-cable translator, starting with even just some 2D blitting acceleration".

dougSF70 9 days ago | parent | prev | next [-]

With Ollama I got the 20B model running on 8 Titan X cards (2015). Ollama distributed the model so that the 15 GB of VRAM required was split evenly across the 8 cards. The tok/s were faster than reading speed.
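
The arithmetic works out comfortably, assuming the original 12 GB Titan X (Maxwell) and a roughly even layer split:

  model_vram_gb = 15  # reported footprint of the 20B model
  cards, titan_x_vram_gb = 8, 12

  per_card = model_vram_gb / cards
  print(round(per_card, 2), "GB per card")  # ~1.88 GB, well under 12 GB
  print(round(titan_x_vram_gb - per_card, 1), "GB headroom per card")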

Aurornis 8 days ago | parent [-]

For the price of 8 decade-old Titan X cards, someone could pick up a single modern GPU with 16GB or more of RAM.

Aurornis 8 days ago | parent | prev | next [-]

They’re widely available to rent.

Unless you’re running it 24/7 for multiple years, it’s not going to be cost effective to buy the GPU instead of renting a hosted one.

For personal use you wouldn’t get a recent generation data center card anyway. You’d get something like a Mac Studio or Strix Halo and deal with the slower speed.

varispeed 8 days ago | parent [-]

I rented an H100 for training a couple of times and found that it couldn't do training at all.* The same code worked fine on a Mac M1 or an RTX 5080, but on the H100 I was getting completely different results.

So I wonder what I could be doing wrong. In the end I just use the RTX 5080, as my models fit neatly in the available RAM.

* By "couldn't do training at all" I mean the scripts ran, but the results were wrong. As if the H100 couldn't do maths properly.
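
One guess, and it is only a guess: on Ampere/Hopper cards some frameworks silently run float32 matmuls as TF32, which drops ~13 bits of mantissa versus true FP32. If this is PyTorch, forcing full precision is a cheap thing to rule out:

  import torch

  # TF32 trades precision for speed on tensor cores; depending on the
  # PyTorch version it can be enabled by default for cuDNN convolutions.
  torch.backends.cuda.matmul.allow_tf32 = False
  torch.backends.cudnn.allow_tf32 = False

  # Reduced-precision reductions are another source of drift worth ruling out.
  torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False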

philipkiely 8 days ago | parent | prev | next [-]

This comment made my day, ty! Yeah, definitely speaking from a datacenter perspective -- the fastest piece of hardware I have in the parts drawer is probably my old iPhone 8.

vonneumannstan 8 days ago | parent | prev | next [-]

> Just looked in the parts drawer at home and don't seem to have a $25,000 GPU for some inexplicable reason.

It just means you CAN buy one if you want, as in they're in stock and "available", not that you can necessarily afford one.

lopuhin 9 days ago | parent | prev | next [-]

You can rent them for less than $2/h in a lot of places (maybe not in the drawer).

blueboo 8 days ago | parent | prev | next [-]

You might find $2.50 in change to use one for an hour, though.

KolmogorovComp 9 days ago | parent | prev [-]

available != cheap

blitzar 9 days ago | parent [-]

available /əˈveɪləbl/

adjective: available

able to be used or obtained; at someone's disposal

swexbe 9 days ago | parent [-]

You can rent one from most cloud providers for a few bucks an hour.

koakuma-chan 9 days ago | parent | next [-]

Might as well just use the OpenAI API.

ekianjo 9 days ago | parent [-]

That's not the same thing at all.

poly2it 9 days ago | parent [-]

That depends on your intentions.

9 days ago | parent | prev [-]
[deleted]