heavyset_go a day ago

The silicon in most laptop NPUs is sitting idle. In my experience, embedded NPUs are very efficient, so there are real gains to be made, at least in theory, if the cores were actually used.

martinald a day ago

Yes, but you could use that die space for GPU cores instead.

heavyset_go 14 hours ago

At least on the embedded platforms I'm familiar with, dedicating silicon to an NPU is both faster and more power efficient than offloading the same work to GPU cores.

If you're going to be doing ML at the edge, NPUs still seem like the most efficient use of die space to me.
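Not from the thread, but as a rough illustration of what "actually using the cores" can look like in software: a minimal sketch using ONNX Runtime execution providers to prefer an NPU backend when one is exposed, falling back to GPU and then CPU. The provider names (QNNExecutionProvider for Qualcomm NPUs, DmlExecutionProvider for DirectML on GPU) and the model path are assumptions; what actually shows up depends on the hardware and on how onnxruntime was built.

    import onnxruntime as ort

    # Providers exposed by this particular onnxruntime build/platform.
    available = ort.get_available_providers()
    print("Available providers:", available)

    # Preference order (illustrative): NPU first, then GPU, then CPU.
    preferred = [
        "QNNExecutionProvider",   # Qualcomm NPU backend, if present
        "DmlExecutionProvider",   # DirectML GPU backend, if present
        "CPUExecutionProvider",   # always available fallback
    ]
    providers = [p for p in preferred if p in available]

    # "model.onnx" is a placeholder path for whatever edge model you run.
    session = ort.InferenceSession("model.onnx", providers=providers)
    print("Session is using:", session.get_providers())

Whether the NPU path ends up faster or more power efficient than the GPU path still has to be measured per model and per platform; the sketch only shows how the dispatch choice is made.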