freakynit 6 hours ago

A related argument I raised a few days back on HN:

What's the moat with these giant data centers being built for hundreds of billions of dollars on Nvidia chips?

If such chips can be built so easily, and offer this insane level of performance at 10x efficiency, then one thing is 100% sure: more such startups are coming... and with that, an entire new ecosystem.

mlboss 24 minutes ago | parent | next [-]

If I am not mistaken, this chip was built specifically for the Llama 8B model. Nvidia chips are general purpose.

jzymbaluk an hour ago | parent | prev | next [-]

You'd still need those giant data centers for training new frontier models. These Taalas chips, if they work, seem to do the job of inference well, but training will still require general-purpose GPU compute.

bee_rider 4 hours ago | parent | prev | next [-]

I think their hope is that they’ll have the “brand name” and expertise to have a good head start when real inference hardware comes out. It does seem very strange, though, to make all this massive infrastructure investment in what is ultimately going to be useless prototyping hardware.

elictronic 3 hours ago | parent [-]

Tools like openclaw are starting to make the models a commodity.

I need some smarts to route my question to the correct model; I won't care which one it is. Selling commodities is notorious for slow, steady growth.
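
A minimal sketch of what that routing layer might look like. The model names and the length/keyword heuristic are made up purely for illustration, not any real router or vendor API:

    # Hypothetical router: cheap model for easy prompts,
    # frontier model for hard ones. Names are placeholders.
    def route(prompt: str) -> str:
        hard_signals = ("prove", "derive", "refactor", "design")
        if len(prompt) > 2000 or any(w in prompt.lower() for w in hard_signals):
            return "frontier-model"    # expensive, strong reasoning
        return "small-cheap-model"     # commodity inference

    print(route("What's the capital of France?"))    # small-cheap-model
    print(route("Derive the gradient of softmax"))   # frontier-model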

codebje 5 hours ago | parent | prev [-]

RAM hoarding is, AFAICT, the moat.

freakynit 5 hours ago | parent [-]

lol... true that for now though

Windchaser an hour ago | parent [-]

Yeah, just because Cisco had a huge market lead in telecom in the late '90s, it doesn't mean they kept it.

(And people nowadays: "Who's Cisco?")