sercand 6 days ago

Where did you see the matmul acceleration support? I couldn't find this detail online.

aurareturn 6 days ago | parent [-]

Apple calls it "Neural Accelerators". It's all over their A19 marketing.

kridsdale3 6 days ago | parent | next [-]

What a ridiculous way to market "linear algebra transistor array".

jacquesm 6 days ago | parent | next [-]

Hey man, it helps you think different. You just never knew your neurons needed accelerating.

kridsdale1 6 days ago | parent [-]

I accelerate them every morning with an Americano.

liamwire 6 days ago | parent [-]

I have to ask out of curiosity, why is your first comment made with one account, and the reply with a similarly-named alt?

butlike 5 days ago | parent | prev | next [-]

I really hope someone got fired for this blunder

jimbokun 5 days ago | parent | prev | next [-]

Which means what, exactly, to someone who's not a machine learning researcher?

kamranjon 6 days ago | parent | prev | next [-]

Don’t all of the M series chips contain neural cores?

aurareturn 6 days ago | parent [-]

Yes, they do. They're called the Neural Engine, aka an NPU. They aren't used for local LLMs on Macs because they're optimized for running much smaller models with high power efficiency, not LLM-scale workloads.

Meanwhile, the GPU is powerful enough for LLMs but has lacked dedicated matrix multiplication hardware. This changes that.
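
For a concrete sense of the workload in question: on current Macs a GPU matmul typically goes through Metal Performance Shaders, which runs it as ordinary shader arithmetic, and that is the kind of GEMM dedicated matmul units are meant to speed up. A minimal sketch of such a dispatch (the 1024x1024 sizes and zero-filled buffers are my own illustrative choices, not anything from Apple's announcement):

    import Metal
    import MetalPerformanceShaders

    // Sketch: dispatch a single-precision matrix multiply on the Apple GPU
    // via Metal Performance Shaders. Sizes are arbitrary, chosen only to
    // illustrate the kind of GEMM an LLM layer performs.
    let n = 1024
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let cmdBuf = queue.makeCommandBuffer() else { fatalError("no Metal device") }

    let bytesPerRow = n * MemoryLayout<Float>.stride
    let desc = MPSMatrixDescriptor(rows: n, columns: n,
                                   rowBytes: bytesPerRow, dataType: .float32)

    // GPU buffers for A, B, and the result C (left zero-filled here).
    let bufA = device.makeBuffer(length: n * bytesPerRow, options: .storageModeShared)!
    let bufB = device.makeBuffer(length: n * bytesPerRow, options: .storageModeShared)!
    let bufC = device.makeBuffer(length: n * bytesPerRow, options: .storageModeShared)!

    let a = MPSMatrix(buffer: bufA, descriptor: desc)
    let b = MPSMatrix(buffer: bufB, descriptor: desc)
    let c = MPSMatrix(buffer: bufC, descriptor: desc)

    // C = 1.0 * A * B + 0.0 * C
    let matmul = MPSMatrixMultiplication(device: device,
                                         transposeLeft: false, transposeRight: false,
                                         resultRows: n, resultColumns: n,
                                         interiorColumns: n, alpha: 1.0, beta: 0.0)
    matmul.encode(commandBuffer: cmdBuf, leftMatrix: a, rightMatrix: b, resultMatrix: c)
    cmdBuf.commit()
    cmdBuf.waitUntilCompleted()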

astrange 6 days ago | parent | next [-]

The neural engine is used for the built-in LLM that does text summaries etc., just not third party LLMs.

And there's an official port of Stable Diffusion to it: https://github.com/apple/ml-stable-diffusion
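
If you want to poke at that split yourself, Core ML lets you pin a model (for example one converted with the ml-stable-diffusion repo above) to particular compute units. A rough sketch; the model path is a placeholder, not a real file:

    import CoreML

    // Sketch: load a compiled Core ML model and request the Neural Engine.
    // The path below is a placeholder; substitute your own .mlmodelc bundle.
    let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")

    let config = MLModelConfiguration()
    // .cpuAndNeuralEngine keeps work off the GPU; .all lets Core ML pick
    // between CPU, GPU, and Neural Engine per layer.
    config.computeUnits = .cpuAndNeuralEngine

    do {
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        _ = model // run predictions as usual; scheduling follows computeUnits
    } catch {
        print("Failed to load model:", error)
    }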

mrheosuper 6 days ago | parent | prev | next [-]

I thought one of the reasons we do ML on GPUs is fast matrix multiplication?

So the new engine is an accelerator for the matmul accelerator?

wtallis 6 days ago | parent [-]

From a compute perspective, GPUs are mostly about fast vector arithmetic, with which you can implement decently fast matrix multiplication. But starting with NVIDIA's Volta architecture at the end of 2017, GPUs have been gaining dedicated hardware units for matrix multiplication. The main reason for augmenting GPU architectures with matrix multiplication hardware is machine learning. The units aren't directly useful for 3D graphics rendering, but their inclusion in consumer GPUs has been justified by adding ML-based post-processing and upscaling like NVIDIA's various iterations of DLSS.
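
To make the distinction concrete: without dedicated units, a vector machine builds a matrix multiply out of many small multiply-adds, a few lanes at a time, whereas a matmul unit consumes a whole tile of the operands per instruction. A toy CPU-side sketch of the vector formulation (illustrative only; row-major [Float] matrices, and n is assumed to be a multiple of 4):

    import simd

    // Toy sketch: C = A * B expressed as repeated 4-wide vector multiply-adds,
    // the way hardware without dedicated matmul units computes it.
    func matmulViaVectorOps(_ a: [Float], _ b: [Float], n: Int) -> [Float] {
        var c = [Float](repeating: 0, count: n * n)
        for i in 0..<n {
            for k in 0..<n {
                let aik = simd_float4(repeating: a[i * n + k])   // broadcast A[i][k]
                var j = 0
                while j < n {
                    let bk = simd_float4(b[k * n + j],     b[k * n + j + 1],
                                         b[k * n + j + 2], b[k * n + j + 3])
                    var ci = simd_float4(c[i * n + j],     c[i * n + j + 1],
                                         c[i * n + j + 2], c[i * n + j + 3])
                    ci += aik * bk   // one 4-lane multiply-add
                    c[i * n + j]     = ci.x; c[i * n + j + 1] = ci.y
                    c[i * n + j + 2] = ci.z; c[i * n + j + 3] = ci.w
                    j += 4
                }
            }
        }
        return c
    }

A dedicated matmul unit replaces that inner accumulation with hardware that takes a whole tile of A and B per instruction, which is where the large throughput gains come from.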

cchance 6 days ago | parent | prev [-]

These are different: they're built into the GPU cores themselves.

emchammer 6 days ago | parent | prev [-]

Does this mean that equivalent logic for what has been called Neural Engine is now integrated into each CPU core?

rmccue 6 days ago | parent [-]

Each GPU core, but yes, this was part of what they announced today - it’s now integral rather than separate.