| ▲ | jsheard a day ago |
| > It seems we will run out of hardware by March? What happens when an unstoppable force (building everything in Electron because hardware is cheap) meets an immovable object (oh no hardware is expensive now)? |
|
| ▲ | pjmlp a day ago | parent | next [-] |
| We go back to the demoscene days, being creative with what we have instead of shipping Electron junk. |
| |
|
| ▲ | andix a day ago | parent | prev | next [-] |
| Maybe we need to let go of our auto-scaled 100 pod service mesh for a todo list app, and just deploy it bare metal on 2 servers. |
|
| ▲ | fullstop a day ago | parent | prev | next [-] |
| I guess we have to get creative again. |
| |
| ▲ | interleave a day ago | parent [-] | | I actually think you're right here. Resource constraints have often helped me come up with stuff that I'm actually proud of. |
|
|
| ▲ | throwaw12 a day ago | parent | prev | next [-] |
Consumer RAM is not what's creating the shortage. Data centers don't run Electron to train models or to serve inference.
| |
| ▲ | malfist a day ago | parent | next [-] | | Sure, consumer ram isn't causing a shortage, but it's affected by the shortage. | |
| ▲ | bayindirh a day ago | parent | prev | next [-] | | Every RAM producer is halting consumer-grade RAM production to make ECC RAM and VRAM instead. Micron discontinued the Crucial brand entirely. So getting systems with higher RAM capacity is getting harder, from laptops to smartphones, and for a couple of years we'll need to stop using Electron so much and use what we have efficiently. Data centers, especially AI hyperscalers, don't care about efficiency for now, because they can suffocate the consumer-grade part of the hardware market and get anything and everything they want. When their bubble pops, or the capacity runs out, they'll need to learn to be efficient too. For reference, a well-optimized cluster runs at ~90% efficiency even with thousands of users. AI hyperscalers are not there; maybe 60% efficient, at most. They waste a lot of resources to keep their momentum. | | |
| ▲ | spockz a day ago | parent [-] | | I have a silent hope that because of this change we all will get ECC ram and that consumer CPUs will get proper support for them. | | |
| ▲ | bayindirh a day ago | parent [-] | | AMD's Ryzen already supports it. ASUStor's latest generation of NAS devices come with AMD x86_64 processors and ECC RAM as standard, but ECC RAM in SODIMM format wasn't cheap even back when RAM in general was. | | |
| ▲ | spockz 20 hours ago | parent | next [-] | | My understanding was that ECC support also depends on the motherboard, but I'm not sure. When looking at Ryzen builds, I only recall seeing lots of disclaimers about RAM support; I'm not sure of the causes, though. | | | |
| ▲ | eggsome 15 hours ago | parent | prev [-] | | As someone trying to spec out a Ryzen workstation right now, I can tell you it's actually harder because Ryzen (unlike EPYC) uses UDIMM ECC, not RDIMM ECC.
It's a niche that very few companies wanted to serve before the AI RAM madness. Now the only vendor I can find is v-color: https://v-color.net/products/ddr5-ecc-oc-u-dimm-server-memor... But they no longer have the 6000 MHz parts in stock (which are ideal for Ryzen due to the 1:1 ratio with the memory controller). It's frustrating :( |
|
|
| |
| ▲ | MagicMoonlight a day ago | parent | prev [-] | | They effectively do. They're trained by brute-forcing 100TB of training data through them, rather than by any logical learning technique. A human doesn't need 100TB of books to learn the alphabet. | | |
| ▲ | rkomorn a day ago | parent [-] | | > A human doesn’t need 100TB of books to learn the alphabet. A human does need 16ish hours per day of audio/video content for several years to learn the alphabet. | | |
| ▲ | bayindirh a day ago | parent | next [-] | | I used a single-letter stencil to learn the alphabet, actually, and nobody strapped me to a chair to watch or listen to something 16 hours a day. Living in a normal home with my parents was enough for the audio part. | | |
| ▲ | rkomorn a day ago | parent [-] | | The 16 hours of audio/video per day was a reference to being alive and hearing/seeing things for years before you actually could learn the alphabet. It was not meant as literally sitting at a screen with audio/video for 16 hours a day. | | |
| ▲ | bayindirh a day ago | parent [-] | | I know, but the density of the data is much lower in the human case. IOW, humans still learn more effectively with less information, because there are innate mechanisms that process this data continuously and extract new meanings from the same data. This is part of both intelligence and consciousness; LLMs lack both. | | |
| ▲ | __turbobrew__ 20 hours ago | parent | next [-] | | > I know, but the density of the data is much less in human case. Is that really the case? How much data is it for 4K video, high-bitrate audio, spatial mapping, the internal and external nervous system, emotions, and a dataset to correlate all of these in time? | |
| ▲ | rkomorn a day ago | parent | prev [-] | | > humans still learn more effectively with less information > because there are innate mechanisms which process this data continuously and extract new meanings from the same data To me, these statements strongly contradict each other, but I also really do not care enough to debate it. | | |
| ▲ | bayindirh a day ago | parent [-] | | I respect your disagreement and desire to leave the debate here. So we can agree to disagree. Have a nice day. |
|
|
|
| |
| ▲ | Betelbuddy a day ago | parent | prev [-] | | [flagged] | | |
| ▲ | rkomorn a day ago | parent [-] | | I don't listen to Altman. Feel free to take this kind of comment somewhere else. |
|
|
|
|
|
| ▲ | UltraSane a day ago | parent | prev | next [-] |
| Stop using Electron to save massive amounts of RAM. |
|
| ▲ | Betelbuddy a day ago | parent | prev | next [-] |
| 2026 will be the year of Rust... |
| |
| ▲ | NoiseBert69 a day ago | parent [-] | | Because the lack of memory leaks will stop RAM prices from increasing? | | |
| ▲ | nicoburns a day ago | parent | next [-] | | Because it's more memory-efficient than most other languages, so you can achieve the same result with less RAM. | |
| ▲ | Betelbuddy a day ago | parent | prev | next [-] | | The efficiency... https://users.rust-lang.org/t/energy-consumption-in-programm... | | |
| ▲ | xnorswap a day ago | parent | next [-] | | I see that's from almost 10 years ago; it would be interesting to see how that's changed with the improvements to V8, Python, and C# since. Also, TypeScript 5 times worse than JavaScript? That doesn't really make sense, since they share the same runtime. | | |
| ▲ | embedding-shape a day ago | parent [-] | | Why is that so unbelievable? TypeScript isn't JavaScript, and while they share the same runtime, compiled TypeScript often doesn't look like how you'd solve the same problem in vanilla JS, where you'd leverage dynamic typing rather than trying to work around it. See this example as one demonstration: https://www.typescriptlang.org/play/?q=8#example/enums The TS code looks very different from the JS code (which obviously is the point), but given that, it's not hard to imagine they have different runtime characteristics, especially for people who don't understand the ins and outs of JavaScript itself and have only learned TypeScript. | | |
| ▲ | xnorswap a day ago | parent [-] | | Enums are one of only a few places where there's significant deviation; I don't believe that makes it 400% less efficient. | | |
| ▲ | embedding-shape a day ago | parent [-] | | Maybe read the paper and see if you can figure out their reasoning/motivation :) https://dl.acm.org/doi/10.1145/3136014.3136031 One thing to consider is that with JavaScript you put the code in a .js file, point an HTML page at it, and that's it. TypeScript involves a lot more tooling than that, which would impact energy usage too, not to mention everything running the package registries and whatnot. Not sure if this is why the difference is bigger, as I haven't read the paper myself :) But if you do, please share what you find out about their methodology. |
|
|
| |
| ▲ | Zababa a day ago | parent | prev [-] | | This image comes from running the different versions of the Benchmarks Game programs. Some of the differences between languages may actually just be algorithmic, and those programs are generally not representative of most of the software that actually runs. |
| |
| ▲ | gck1 a day ago | parent | prev [-] | | That, and also because the Rust compiler is a very good guardrail and feedback mechanism for AI. I made three little tools that I use myself without knowing how to write a single line of Rust. | | |
| ▲ | Imustaskforhelp a day ago | parent [-] | | I can see that being a reality, but I'm more comfortable using Go than Rust, given its fast compile times, and I've found it much easier to create projects with no or few dependencies. Even though I wouldn't consider myself a master at Go (mediocre, maybe), I find it much easier to play with than Rust. The difference in resource consumption between Rust and Go would be pretty minimal for most use cases, IMHO. |
|
|
|
|
| ▲ | ieie3366 a day ago | parent | prev [-] |
| [dead] |