| ▲ | kenjackson a day ago |
| I'd love to see a thorough breakdown of what these local NPUs can really do. I've had friends ask me about this (as the resident computer expert) and I really have no idea. Everything I see them advertised for (blurring, speech to text, etc.) is stuff I never felt my non-NPU machine struggled with. Is there a single remotely killer application for local client NPUs? |
|
| ▲ | throwaway293892 a day ago | parent | next [-] |
| I used to work at Intel until recently. Pat Gelsinger (the prior CEO) had made marketing the "AI PC" one of the top goals for 2024. Every quarter he would hold an all-company meeting, people would post questions on a site, and the top-voted questions would be picked to answer. I posted mine: "We're well into the year, and I still don't know what an AI PC is and why anyone would want it instead of a CPU+GPU combo. What is an AI PC and why should I want it?" I then pointed out that if a tech guy like me, along with all the other Intel employees I spoke to, cannot answer these basic questions, why would anyone out there want one? It was one of the top-voted questions and got asked. He answered factually, but it still wasn't clear why anyone would want one. |
| |
| ▲ | TitaRusell 16 hours ago | parent | next [-] | | The only people who are actually paying good money for a PC nowadays are gamers, and they sure as hell aren't paying 3k so that they can use Copilot. | | |
| ▲ | nextaccountic 8 hours ago | parent [-] | | Also professionals who need powerful computers ("workstations") for their jobs, like video editing. A lot of them are incorporating AI into their workflow, so making local AI better would be a plus. Unfortunately I don't see this happening unless GPUs come with more VRAM (and AI companies don't want that, and are willing to spend top dollar to hoard RAM). |
| |
| ▲ | skrebbel a day ago | parent | prev [-] | | So... what was the answer? | | |
| ▲ | throwaway293892 a day ago | parent | next [-] | | Pretty much the same as what you see in the comments here. For certain workloads, an NPU is faster than a CPU by quite a bit, and I think he gave some detailed examples at the low level (what types of computations are faster, etc.). But nothing that translated to real-world end-user experience (other than things like live transcription). I recall that in my question I specifically asked, "Will Stable Diffusion be much faster than a CPU?" He did say that the vendors and Microsoft were trying to come up with "killer applications". In other words, "We'll build it, and others will figure out great ways to use it." On the one hand, this makes sense: end-user applications are far from Intel's expertise, and it makes sense to delegate them to others. But I got the sense Microsoft + OEMs were not good at this either. | | |
| ▲ | hulitu 10 hours ago | parent [-] | | > For certain workloads, an NPU is faster than a CPU by quite a bit WTF is an NPU? What kind of instructions does it support? Can it add 3 and 5? Can it compute matrices? |
| |
| ▲ | Mistletoe a day ago | parent | prev [-] | | Probably a lot of jargon AI word salad that boiled down to “I’m leaving in Dec. 2024, you guys have fun.” |
|
|
|
| ▲ | martinald a day ago | parent | prev | next [-] |
| The problem is essentially memory bandwidth, AFAIK. I'm simplifying a lot here, but most NPUs (all?) do not have faster memory bandwidth than the GPU. They were originally designed when ML models were megabytes, not gigabytes. They have a small amount of very fast SRAM (4MB, I want to say?). LLM models _do not_ fit into 4MB of SRAM :). And LLM inference is heavily memory bandwidth bound (reading input tokens isn't, though, so it _could_ be useful for that in theory, but usually on-device prompts are very short). So if you are memory bandwidth bound anyway and the NPU doesn't provide any speedup on that front, it's going to be no faster. Plus it has loads of other gotchas, and there's no real "SDK" format for them. Note the idea isn't bad per se; it has real efficiencies once you do start getting compute bound (e.g. doing multiple parallel batches of inference at once). This is basically what TPUs do (but with far higher memory bandwidth). |
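A rough back-of-envelope sketch of why generation is bandwidth-bound (all numbers below are illustrative assumptions, not measurements):

    # Token generation has to read every weight once per token, so
    # tokens/sec is capped by memory bandwidth, not compute.
    model_params = 7e9        # assumed 7B-parameter model
    bytes_per_param = 2       # fp16/bf16 weights
    bandwidth = 120e9         # assumed ~120 GB/s of shared LPDDR bandwidth

    bytes_per_token = model_params * bytes_per_param
    print(bandwidth / bytes_per_token)  # ~8.6 tokens/s upper bound

    # A faster matmul engine (an NPU) can't raise this ceiling; only more
    # bandwidth or smaller (quantized) weights can.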
| |
| ▲ | zozbot234 a day ago | parent [-] | | NPUs are still useful for LLM pre-processing and other compute-bound tasks. They will waste memory bandwidth during the LLM generation phase (even in the best-case scenario where they aren't physically bottlenecked on bandwidth to begin with, compared to the iGPU), since they generally have to read padded/dequantized data from main memory and compute directly on it, as opposed to being able to unpack it into local registers like iGPUs can. > usually on-device prompts are very short Sure, but that might change with better NPU support, making time-to-first-token quicker with larger prompts. | | |
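A rough way to see why prefill is compute-bound while generation isn't (again with illustrative, assumed numbers):

    # Arithmetic intensity = FLOPs performed per byte of weights read.
    # During prefill, a whole batch of prompt tokens shares one read of
    # the weights; during generation, each token pays for its own read.
    model_params = 7e9
    bytes_per_param = 2
    flops_per_token = 2 * model_params  # ~2 FLOPs per parameter per token

    for n_tokens in (1, 512):  # one generation step vs. a 512-token prefill
        intensity = (flops_per_token * n_tokens) / (model_params * bytes_per_param)
        print(n_tokens, "tokens:", intensity, "FLOPs/byte")
    # 1 FLOP/byte: bandwidth-bound, the compute units sit idle.
    # 512 FLOPs/byte: compute-bound, where an NPU can actually help.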
| ▲ | martinald a day ago | parent | next [-] | | Yes, I said that in my comment. They might be useful for that, but by the time prompts are long enough to have any significant compute time, you are going to need far more RAM than these devices have. Obviously in the future this might change, but as things stand now, dedicated silicon for _just_ LLM prefill doesn't make a lot of sense, IMO. | | |
| ▲ | zozbot234 a day ago | parent [-] | | You don't need much on-device RAM for compute-bound tasks, though. You just shuffle the data in and out, trading a bit of latency for an overall gain in power efficiency, which helps whenever your computation is ultimately limited by power and/or thermals. |
| |
| ▲ | observationist a day ago | parent | prev [-] | | The idea that tokenization is what they're for is absurd: you're talking a tenth of a thousandth of a millionth of a percent of efficiency gain in real-world usage, if that, and only if someone bothers to implement it in software that actually gets used. NPUs are racing stripes, nothing more. No killer features or utility; they probably just had stock and a good deal they could market and tap into the AI wave with. | | |
| ▲ | adastra22 a day ago | parent | next [-] | | NPUs aren't meant for LLMs. There's a lot more neural net tech out there than LLMs. | | |
| ▲ | aleph_minus_one a day ago | parent [-] | | > NPUs aren't meant for LLMs. There's a lot more neural net tech out there than LLMs. OK, but where can I find demo applications of these that will blow my mind (and make me want to buy a PC with an NPU)? | | |
| ▲ | adastra22 a day ago | parent | next [-] | | Apple demonstrates this far better. I use their Photos app to manage my family pictures. I can search my images by visible text, by facial recognition, or by description (vector search). It automatically composes "memories", which are little thematic video slideshows. The FaceTime camera automatically keeps my head in frame, and does software panning and zooming as necessary. Automatic caption generation. This is normal, standard, expected behavior, not blow-your-mind stuff. Everyone is used to having it. But where do you think the computation is happening? There's a reason that a few years back Apple pushed to deprecate older systems that didn't have the NPU. | | |
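That description search is essentially on-device embedding plus nearest-neighbor lookup. A minimal sketch of the idea; the stub encoders, filenames, and query here are made up for illustration (the real forward passes, e.g. a CLIP-style model, are what the NPU would run):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for the on-device image/text encoders.
    def embed_image(path):
        return rng.standard_normal(512)

    def embed_text(query):
        return rng.standard_normal(512)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Index the library once, then rank photos against a text query.
    photo_vecs = {p: embed_image(p) for p in ["beach.jpg", "birthday.jpg"]}
    query = embed_text("kids at the beach")
    best = max(photo_vecs, key=lambda p: cosine(photo_vecs[p], query))
    print(best)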
| ▲ | adgjlsfhk1 a day ago | parent [-] | | I've yet to see any convincing benchmarks showing that NPUs are more efficient than normal GPUs (that don't ignore the possibility of downclocking the GPU to make it run slower but more efficient) | | |
| |
| ▲ | jychang a day ago | parent | prev [-] | | Best NPU app so far is Trex for Mac. |
|
| |
| ▲ | microtonal a day ago | parent | prev [-] | | I think they were talking about prefill, which is typically compute-bound. |
|
|
|
|
| ▲ | sosodev a day ago | parent | prev | next [-] |
| In theory NPUs are a cheap, efficient alternative to the GPU for getting good speeds out of larger neural nets. In practice they're rarely used, because simple tasks like blurring, speech to text, noise cancellation, etc. can usually be done on the CPU just fine, and power users doing really hefty stuff usually have a GPU anyway, so that gets used because it's typically much faster. That's exactly what happens with my AMD AI Max 395+ board. I thought maybe the GPU and NPU could work in parallel, but memory limitations mean that's often slower than just using the GPU alone. I think I read that the intended use case for the NPU is background tasks when the GPU is already loaded, but that seems very niche. |
| |
| ▲ | zozbot234 a day ago | parent [-] | | If the NPU happens to use less power for any given amount of TOPS, it's still a win, since compute-heavy workloads are most often limited by power and thermals, especially on mobile hardware. That frees up headroom for the iGPU. You're right about memory limitations, but those are generally relevant for token generation, not prefill. |
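As a toy illustration of the headroom argument (both efficiency figures are assumptions, not vendor numbers):

    # Toy perf-per-watt comparison under a shared SoC power budget.
    npu_tops_per_watt = 10.0      # hypothetical NPU efficiency
    gpu_tops_per_watt = 3.0       # hypothetical iGPU efficiency
    sustained_tops_needed = 20.0  # assumed background inference demand

    npu_watts = sustained_tops_needed / npu_tops_per_watt  # 2.0 W
    gpu_watts = sustained_tops_needed / gpu_tops_per_watt  # ~6.7 W

    # On a thermally limited laptop, running this on the NPU leaves ~4.7 W
    # of the shared budget free for the iGPU (or for staying cool).
    print(f"headroom freed: {gpu_watts - npu_watts:.1f} W")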
|
|
| ▲ | Someone a day ago | parent | prev | next [-] |
| > Everything I see advertised for (blurring, speech to text, etc...) are all things that I never felt like my non-NPU machine struggled with. I don’t know how good these neural engines are, but transistors are dead-cheap nowadays. That makes adding specialized hardware a valuable option, even if it doesn’t speed up things but ‘only’ decreases latency or power usage. |
|
| ▲ | rcxdude a day ago | parent | prev | next [-] |
| I think a lot of it is just power savings on those features, since the dedicated silicon can be a lot more energy efficient even if it's not much more powerful. |
|
| ▲ | bitwize a day ago | parent | prev [-] |
| "WHAT IS MY PURPOSE?" "You multiply matrices of INT8s." "OH... MY... GOD" NPUs really just accelerate low-precision matmuls. A lot of them are based on systolic arrays, which are like a configurable pipeline through which data is "pumped" rather than a general purpose CPU or GPU with random memory access. So they're a bit like the "synergistic" processors in the Cell, in the respect that they accelerate some operations really quickly, provided you feed them the right way with the CPU and even then they don't have the oomph that a good GPU will get you. |
| |
| ▲ | cookiengineer a day ago | parent | next [-] | | My question is: Isn't this exactly what SIMD has done before? Well, or SSE2 instructions? To me, an NPU as described just looks like a pretty shitty and useless FPGA that any alternative FPGA from Xilinx could easily replace. | | |
| ▲ | recursivecaveat a day ago | parent | next [-] | | You definitely would use SIMD if you were doing this sort of thing on the CPU directly. The NPU is just a large dedicated construct for linear algebra. You wouldn't really want to deploy FPGAs to user devices for this purpose because that would mean paying the reconfigurability tax in terms of both power-draw and throughput. | |
| ▲ | imtringued 17 hours ago | parent | prev [-] | | Yes, but your CPUs have energy-inefficient things like caches and out-of-order execution that do not help with fixed workloads like matrix multiplication. AMD gives you 32 AI Engines in the space of 3 regular Ryzen cores with full cache, and each AI Engine is more powerful than a Ryzen core at matrix multiplication. |
| |
| ▲ | mjevans a day ago | parent | prev | next [-] | | So it's a higher power DSP style device. Small transformers for flows. Sounds good for audio and maybe tailored video flow processing. | |
| ▲ | fragmede a day ago | parent | prev | next [-] | | Do compilers know how to take advantage of that, or do programs need code that specifically takes advantage of that? | | |
| ▲ | bfrog a day ago | parent | next [-] | | It’s more like you need to program a dataflow rather than a program with instructions, as with VLIW-type processors. They still have operations, but, for example, I don’t think Ethos has any branch operations. |
| ▲ | blep-arsh a day ago | parent | prev [-] | | There are specialized computation kernels compiled for NPUs. A high-level program (one that uses ONNX or CoreML, for example) can decide whether to run the computation using CPU code, a GPU kernel, or an NPU kernel, or maybe use multiple devices in parallel for different parts of the task, but the low-level code is compiled separately for each kind of hardware. So it's somewhat abstracted and automated by wrapper libraries, but ultimately still up to the program. |
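For example, with ONNX Runtime the program picks hardware by listing execution providers in priority order. A minimal sketch; the model path, input name, and shape are placeholders, and QNNExecutionProvider assumes a Qualcomm NPU with a matching onnxruntime build:

    import numpy as np
    import onnxruntime as ort

    # Ask for the NPU first; any op without an NPU kernel falls back to CPU.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    )
    outputs = session.run(None, {"input": np.zeros((1, 3, 224, 224), np.float32)})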
|