throwaw12 a day ago

Consumer RAM is not what's creating the shortage. Data centers don't run Electron to train models or for inference.

malfist a day ago | parent | next [-]

Sure, consumer ram isn't causing a shortage, but it's affected by the shortage.

bayindirh a day ago | parent | prev | next [-]

Every RAM producer is halting consumer-grade RAM production to supply ECC RAM and VRAM instead. Micron discontinued and shut down the Crucial brand as a whole.

As a result, getting systems with higher RAM capacity is becoming harder, from laptops to smartphones. So, for a couple of years, we need to stop leaning on Electron so much and use what we have efficiently.

Data centers, especially AI hyperscalers, don't care about efficiency for now, because they can suffocate the consumer-grade part of the hardware market and get anything and everything they want. When their bubble pops, or the capacity runs out, they will need to learn to be efficient, too.

For reference, a well-optimized cluster runs at ~90% efficiency even with thousands of users. AI hyperscalers are not there. Maybe 60% efficient, at most. They waste a lot of resources to keep their momentum.

spockz a day ago | parent [-]

I have a silent hope that because of this change we'll all get ECC RAM and that consumer CPUs will get proper support for it.

bayindirh a day ago | parent [-]

AMD's Ryzen already supports it. ASUSTOR's latest generation of NAS devices comes with AMD x86_64 processors and ECC RAM as standard, but ECC RAM in SODIMM format wasn't cheap even back when RAM in general was cheap.

spockz 20 hours ago | parent | next [-]

I understood that support for ECC RAM also depends on the motherboard, but I'm not sure. When selecting Ryzens, I only recall seeing many disclaimers about RAM support. Not sure of the causes, though.

wiredpancake 14 hours ago | parent [-]

[dead]

eggsome 15 hours ago | parent | prev [-]

As someone trying to spec out a Ryzen workstation right now, I can tell you it's actually harder, because Ryzen (unlike EPYC) uses UDIMM ECC, not RDIMM ECC. It's a niche that very few companies wanted to serve before the AI RAM madness. Now the only vendor I can find is v-color:

https://v-color.net/products/ddr5-ecc-oc-u-dimm-server-memor...

But they no longer have the 6000 MHz kits in stock (which are ideal for Ryzen due to the 1:1 speed match with the memory controller).

It's frustrating :(
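The "1:1" claim above can be sketched with some quick arithmetic. This is a back-of-envelope illustration, not an official AMD specification: DDR5 kit numbers are transfer rates (MT/s), the actual memory clock is half that because DDR transfers twice per clock, and recent Ryzen parts are commonly reported to run their memory-controller clock (UCLK) stably up to around 3000 MHz in 1:1 mode, which is why DDR5-6000 is the usual sweet spot.

```python
# Sketch (assumed, commonly cited figures): why DDR5-6000 suits Ryzen.
# DDR is double data rate, so memory clock (MCLK) = transfer rate / 2.
# Ryzen performs best when the memory-controller clock (UCLK) runs 1:1
# with MCLK; above a typical ~3000 MHz UCLK ceiling, boards fall back
# to a slower 2:1 ratio.

ASSUMED_UCLK_CEILING_MHZ = 3000  # rough, chip-dependent assumption

def mclk_mhz(mt_per_s: int) -> float:
    """DDR transfer rate in MT/s -> actual memory clock in MHz."""
    return mt_per_s / 2

for kit in (5600, 6000, 6400):
    mclk = mclk_mhz(kit)
    mode = "1:1" if mclk <= ASSUMED_UCLK_CEILING_MHZ else "2:1 fallback"
    print(f"DDR5-{kit}: MCLK {mclk:.0f} MHz -> {mode}")
```

With these assumptions, DDR5-6000 is the fastest kit that still lands exactly on the 1:1 ratio, while DDR5-6400 would push UCLK past the assumed ceiling.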

MagicMoonlight a day ago | parent | prev [-]

They effectively do. They're trained by brute-forcing 100TB of training data through them, rather than by any logical learning technique.

A human doesn’t need 100TB of books to learn the alphabet.

rkomorn a day ago | parent [-]

> A human doesn’t need 100TB of books to learn the alphabet.

A human does need 16ish hours per day of audio/video content for several years to learn the alphabet.

bayindirh a day ago | parent | next [-]

I used a single letter stencil to learn the alphabet, actually, and nobody strapped me to a chair to watch or listen to something 16 hours a day.

Living inside a normal home with my parents was enough for the audio part.

rkomorn a day ago | parent [-]

The 16 hours of audio/video per day was a reference to being alive and hearing/seeing things for years before you actually could learn the alphabet.

It was not meant as literally sitting at a screen with audio/video for 16 hours a day.

bayindirh a day ago | parent [-]

I know, but the density of the data is much lower in the human case.

IOW, humans still learn more effectively with less information, because there are innate mechanisms that process this data continuously and extract new meanings from the same data. This is part of both intelligence and consciousness.

LLMs lack both.

__turbobrew__ 20 hours ago | parent | next [-]

> I know, but the density of the data is much less in human case.

Is that really the case? How much data is it for 4K video, high-bitrate audio, spatial mapping, internal and external nervous system signals, emotions, and a dataset correlating all of these in time?
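A quick back-of-envelope estimate supports the question. All bitrates below are assumed, illustrative figures (uncompressed 4K at 30 fps, stereo 24-bit audio), not measurements of human sensory bandwidth; even counting only these two streams for 16 hours a day, the raw volume dwarfs a 100TB training corpus within a single year.

```python
# Back-of-envelope (illustrative, assumed rates): raw data volume of
# "16 hours/day of audio/video" for a year. Not a claim about actual
# human sensory bandwidth.

SECONDS_PER_DAY = 16 * 3600  # 16 waking hours

# Assumed stream rates in bits per second:
streams_bps = {
    "4k_video_raw": 3840 * 2160 * 24 * 30,  # 24-bit color, 30 fps, uncompressed
    "stereo_audio": 2 * 48_000 * 24,        # 2 channels, 48 kHz, 24-bit
}

bits_per_day = sum(streams_bps.values()) * SECONDS_PER_DAY
tb_per_year = bits_per_day * 365 / 8 / 1e12  # bits -> bytes -> terabytes
print(f"~{tb_per_year:.0f} TB/year of raw audio/video alone")
```

Under these assumptions the raw take runs to thousands of terabytes per year, so "100TB of books" is arguably the smaller number.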

rkomorn a day ago | parent | prev [-]

> humans still learn more effectively with less information

> because there are innate mechanisms which process this data continuously and extract new meanings from the same data

To me, these statements strongly contradict each other, but I also really do not care enough to debate it.

bayindirh a day ago | parent [-]

I respect your disagreement and desire to leave the debate here. So we can agree to disagree.

Have a nice day.

Betelbuddy a day ago | parent | prev [-]

[flagged]

rkomorn a day ago | parent [-]

I don't listen to Altman. Feel free to take this kind of comment somewhere else.