noosphr 8 hours ago

AI demand isn't going away. It will just move from the data center to the local machine. On-device AI is much better for the customer than AI in the cloud. Expecting people to stick with a few dozen GB of HBM is going to be the 'no one needs more than 640KB' of the 2030s.

bilekas 6 hours ago | parent | next [-]

Local AI is being delayed by AI companies, specifically by making the cost of entry for consumer-grade machines too expensive. OpenAI buys 40% of wafers to ensure the price of memory stays high.

GCUMstlyHarmls 6 hours ago | parent [-]

Hmm, I'd never considered a targeted squeeze on consumer-run models by way of slowing hardware proliferation. It "made sense" to try to box out other AI companies, but I guess they also have a pretty strong vested interest in keeping VRAM low and preventing some kind of high-memory PCIe ASIC from getting cheap, broad adoption.

Another thread suggested that OpenAI's primary play is to get big enough that it's too big to fail. Funny to think that the moat isn't a funding runway or an algorithm, just a hardware vault; the longer you can stop boats crossing it, the better your chance of getting your fingers in all the pies.

bandrami 8 hours ago | parent | prev | next [-]

> On device AI is much better for the customer than it being in the cloud

Which is exactly how you know it will always be nerfed. The last thing these guys want is to take their claws out of our data.

egorfine 4 hours ago | parent | prev [-]

> AI demand isn't going away.

I'm not sure about that. When was the last time you used the Copilot prompt in the Run dialog or in Notepad?

noosphr 3 hours ago | parent [-]

About 10 minutes ago in Emacs.

egorfine an hour ago | parent [-]

That's not fair. Anybody could've done that.

Now try to really, sincerely use the Copilot prompt in the Run dialog.