jeffbee 4 days ago

It's quite interesting. Basically Nitro on a stick. For the "repatriation" crowd this seems appealing. But would you invest in the software necessary to exploit this, knowing that Intel could lose interest or just go bankrupt with little warning?

pwarner 4 days ago | parent | next [-]

Presumably every hyperscaler that isn't Amazon could be a customer for this? One of them might be enough to keep it viable. See the sibling comment about Google being a customer for, presumably, the previous generation.

lenerdenator 4 days ago | parent | prev | next [-]

I think at this point, it's clear that the US government will not let Intel go bankrupt without a serious effort to put the company in healthy financial standing first.

Whether or not that's a good thing, well, people have their opinions, but Intel is considered a national security necessity.

wmf 4 days ago | parent | prev | next [-]

I wouldn't be surprised if Google buys the IP since they're the only customer.

pyvpx 4 days ago | parent [-]

How, though? Does the TPU team (literally or logically) map to owning IPU h/w successfully?

(I miss having these kinds of convos on twitter as networkservice ;)

pclmulqdq 4 days ago | parent | next [-]

There's a lot more silicon at Google aside from the TPU team, including their own previous NICs.

pyvpx 4 days ago | parent [-]

Not that my memory is ironclad, but I don’t recall any custom IP or even FPGA attempts at Google re: host networking or NICs. Any good search terms I should try to enlighten myself? thanks!

jsnell 4 days ago | parent | next [-]

https://news.ycombinator.com/item?id=30757889

numpad0 4 days ago | parent | prev [-]

https://web.archive.org/web/20230711042824/https://www.wired...

https://static.googleusercontent.com/media/research.google.c...

pwarner 4 days ago | parent | prev [-]

I believe they have other custom silicon beyond TPUs, so it wouldn't be crazy to take this in-house if Intel really cans it.

jiggawatts 4 days ago | parent | prev [-]

That raises the question: how would one go about using this thing in their own deployment?

redok 4 days ago | parent | next [-]

The primary customers for this would be infrastructure providers that want to give the host full control of the hardware (bare metal, no hypervisor) while still keeping control of the I/O (network-attached storage and network isolation).

Conventionally this is done in software: a hypervisor emulates network devices for the VMs (virtio/vmxnet3, etc.) and applies some sort of network encapsulation (VLAN, VXLAN, etc.). Similar things are done for virtual block storage (virtio-blk, NVMe, etc.) to attach to remote drives.
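To make that concrete, here's a minimal sketch of the conventional host-side plumbing, assuming a Linux/KVM host that terminates a VXLAN overlay and a remote NBD volume and hands the VM emulated virtio devices. The interface names, VNI, addresses, and the NBD export are made-up placeholders, not anything specific to this product.

    #!/usr/bin/env python3
    # Sketch of the conventional host-side plumbing: the hypervisor host runs the
    # overlay (VXLAN) and terminates the remote storage, and exposes emulated
    # virtio devices to the guest. All names, IDs, and addresses are placeholders.
    import subprocess

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. Network encapsulation lives on the host: a VXLAN tunnel bridged to the
    #    guest's tap device.
    sh("ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789 remote 192.0.2.10")
    sh("ip link add br-tenant type bridge")
    sh("ip tuntap add tap-vm0 mode tap")
    for dev in ("vxlan100", "tap-vm0"):
        sh(f"ip link set {dev} master br-tenant up")
    sh("ip link set br-tenant up")

    # 2. The guest only sees emulated virtio devices; the host terminates the
    #    overlay and the remote block storage (NBD here; could be iSCSI/NVMe-oF).
    subprocess.run([
        "qemu-system-x86_64", "-enable-kvm", "-m", "4096",
        "-netdev", "tap,id=net0,ifname=tap-vm0,script=no,downscript=no,vhost=on",
        "-device", "virtio-net-pci,netdev=net0",
        "-drive", "file=nbd://198.51.100.7:10809/vol0,format=raw,if=virtio",
    ], check=True)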

If the IaaS clients need high bandwidth or are running their own virtualization stack, the infrastructure provider has nowhere to put this software. You can do the network and storage isolation on the network switches with extra work, but then the termination of the networking and storage has to be done in cooperation with the clients (and you can't trust them to do it right).

Here, the host just sees PCI-attached network interfaces and directly attached NVMe devices, which pop up as defined by the infrastructure. These cards are the compromise where you let everyone have bare metal but keep your software-defined network and storage. In advanced cases you could even dynamically shape traffic to rebalance bandwidth between networking and storage.
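For contrast with the sketch above, here's a rough illustration of what the bare-metal tenant sees behind one of these cards: just ordinary PCI functions. This only reads standard Linux sysfs paths and standard PCI class codes (0x0200 Ethernet, 0x0108 NVMe); which specific devices actually show up would be whatever the provider has defined on the card.

    #!/usr/bin/env python3
    # Sketch: enumerate PCI functions from Linux sysfs, to show what a bare-metal
    # tenant behind an IPU would see -- ordinary-looking Ethernet and NVMe
    # functions, defined by the provider on the card, with no agent on the host.
    from pathlib import Path

    # PCI class codes (upper 16 bits of the 24-bit class register).
    CLASSES = {0x0200: "Ethernet controller", 0x0108: "NVMe controller"}

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        pci_class = int((dev / "class").read_text(), 16) >> 8
        if pci_class in CLASSES:
            vendor = (dev / "vendor").read_text().strip()
            device = (dev / "device").read_text().strip()
            print(f"{dev.name}  {vendor}:{device}  {CLASSES[pci_class]}")

Everything behind those functions (the overlay, the remote storage target, any per-tenant limits) is configured by the provider on the card, not by software on the host.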

wmf 4 days ago | parent | prev | next [-]

Here are some examples: https://ipdk.io/documentation/Recipes/ (keep in mind IPU = E2200 when you read this)

pwarner 4 days ago | parent | prev [-]

Presumably first hire a few developers to program it.