thfuran 8 hours ago

This is coming from an insane demand spike, not some nefarious plot by the RAM manufacturers.

sekai 5 hours ago | parent | next [-]

> This is coming from an insane demand spike, not some nefarious plot by the RAM manufacturers.

Something something, 2000 dot-com bubble, something

pipes 5 hours ago | parent | prev | next [-]

I can never understand why so many people resort to conspiracy theories when the obvious answer is supply and demand. I know well-educated people who do this when they talk about the residential property market. (Including an accountant.)

inigyou 2 hours ago | parent [-]

Supply and demand can be caused by a conspiracy. OpenAI secretly bought 40% of the world's RAM on purpose. It's only a conspiracy if Anthropic and Google did something similar, though.

jonathanlydall 3 hours ago | parent | prev | next [-]

Which is in large part due to hoarding by OpenAI.

Although their stated reason for hoarding is that they "really need it", I think it was a strategic move to make their competitors' lives more difficult with little regard for the collateral consequences to non-competitors, such as regular people or companies needing new computers.

cyanydeez 8 hours ago | parent | prev [-]

Yes, it's a nefarious plot of AI producers to attempt a monopoly with a product that no one seems capable of demonstrating has the exponential value they're betting on.

adornKey 6 hours ago | parent | next [-]

Once everybody has a decent amount of VRAM they can just run local AIs, and the need to mess with ad-laden search results will fizzle. So of course they are desperate to grab a new monopoly. People haven't realised yet that local AIs are fast and produce good results - on pretty average hardware. If they don't manage to grab a new monopoly, Google will be history.

But it doesn't really need a nefarious plot for the price spikes. There is a serious lack of VRAM deployed out there. Filling that gap will take quite some time. Add to that the nefarious plot, and the situation will most likely get even worse.

mrob 6 hours ago | parent [-]

LLM inference is mostly read-only, so high-bandwidth flash looks like it could provide huge cost savings over VRAM. It's not yet in commercial products, but there are working prototypes already. Previous HN discussion:

https://news.ycombinator.com/item?id=46700384
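
A rough way to see why bandwidth (not read/write capability) is the bottleneck: each generated token requires streaming roughly all model weights through the compute units, so decode speed is bounded by memory bandwidth divided by model size. A back-of-envelope sketch (the model size and bandwidth figures below are illustrative assumptions, not measurements from any specific product):

```python
# Back-of-envelope: decode throughput for a memory-bandwidth-bound LLM.
# Each generated token reads ~all weights once, so:
#   tokens/sec ~= memory bandwidth / model size in bytes
def tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Rough upper bound on single-stream decode speed."""
    return bandwidth_bytes_per_sec / model_bytes

GB = 1e9
weights = 35 * GB  # hypothetical ~70B-parameter model at 4-bit quantization

# HBM-class GPU memory (~3.35 TB/s, illustrative):
print(round(tokens_per_sec(weights, 3350 * GB), 1))  # ~95.7 tok/s

# A single fast PCIe 5.0 NVMe drive (~14 GB/s, illustrative):
print(round(tokens_per_sec(weights, 14 * GB), 2))    # ~0.4 tok/s
```

Which is why plain NVMe flash isn't a drop-in replacement, and the prototypes above need purpose-built high-bandwidth flash to close that gap.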

jug 3 hours ago | parent | prev [-]

AI companies yes, RAM manufacturers no.