Nokinside 6 days ago

The first SoC to include a Neural Engine was the A11 Bionic, used in the iPhone 8, 8 Plus, and iPhone X and introduced in 2017. Since then, every Apple A-series SoC has included a Neural Engine.

aurareturn 6 days ago | parent | next [-]

The Neural Engine is its own block, and it isn't what runs local LLMs on Macs. It's optimized for power efficiency while running small models; it's not good for LARGE language models.

This change strictly adds matmul acceleration to each GPU core, and the GPU is what's actually used for LLMs.
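To see where the two blocks show up in software, here's a minimal Swift sketch using Core ML's computeUnits hint, which is how you steer a model toward the Neural Engine or the GPU. The model class name "MyModel" is hypothetical; everything else is the standard MLModelConfiguration API.

    import CoreML

    let config = MLModelConfiguration()

    // Prefer the Neural Engine: power-efficient, suited to small models.
    config.computeUnits = .cpuAndNeuralEngine

    // Or prefer the GPU, the block that gains the new per-core matmul units.
    // config.computeUnits = .cpuAndGPU

    // Hypothetical generated model class; any Core ML model works the same way.
    // let model = try MyModel(configuration: config)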

runjake 6 days ago | parent | prev [-]

The matmul stuff is part of the Neural Accelerator marketing, which is distinct from the Neural Engine you're talking about.

I don't blame you. It's confusing.

Nokinside 6 days ago | parent [-]

It's a renaming and rearrangement of the same stuff. Not a new feature.

aurareturn 6 days ago | parent | next [-]

The NPU is still there. This adds matmul acceleration directly into each GPU core. It takes roughly 10% more transistors to add these accelerators to the GPU, so it's a significant investment for Apple.
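For concreteness, here's a rough sketch of the kind of GPU-side matmul (via Metal Performance Shaders) that per-core matmul units would accelerate. The matrix size and fp16 data type are illustrative assumptions, not anything Apple has specified.

    import Metal
    import MetalPerformanceShaders

    // Plain GPU matmul via MPS; hardware matmul units in each GPU core
    // speed up exactly this kind of work.
    let device = MTLCreateSystemDefaultDevice()!
    let queue = device.makeCommandQueue()!

    let n = 1024  // illustrative size
    let rowBytes = n * MemoryLayout<Float16>.stride
    let desc = MPSMatrixDescriptor(rows: n, columns: n, rowBytes: rowBytes, dataType: .float16)

    func makeMatrix() -> MPSMatrix {
        let buffer = device.makeBuffer(length: n * rowBytes, options: .storageModeShared)!
        return MPSMatrix(buffer: buffer, descriptor: desc)
    }

    let a = makeMatrix(), b = makeMatrix(), c = makeMatrix()

    let matmul = MPSMatrixMultiplication(device: device,
                                         transposeLeft: false, transposeRight: false,
                                         resultRows: n, resultColumns: n, interiorColumns: n,
                                         alpha: 1.0, beta: 0.0)

    let commandBuffer = queue.makeCommandBuffer()!
    matmul.encode(commandBuffer: commandBuffer, leftMatrix: a, rightMatrix: b, resultMatrix: c)
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()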

runjake 6 days ago | parent | prev [-]

1. It adds new features, e.g. the matmul acceleration and other to-be-detailed-soon features.

2. It moves some stuff from the external Neural Engine to the GPU, which substantially increases speeds for those workloads. That itself is a feature.

Will any of this really matter much to the average consumer at this point? Probably not. Not until Apple Intelligence gets off the ground.