bigyabai 3 hours ago
Apple is basically in the same boat as AMD and Intel. They have a weak, raster-focused GPU architecture that doesn't scale to 100B+ inference workloads and especially struggles with large context prefill. TPUs smoke them on inference, and Nvidia hardware is far and away more efficient for training.
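For scale on the prefill point: prefill is compute-bound, so the usual ~2-FLOPs-per-parameter-per-token approximation makes the gap easy to eyeball. A minimal back-of-envelope sketch in Python; the model size, context length, and throughput constants are rough illustrative assumptions, not measured figures:

    # Back-of-envelope prefill cost for a large dense transformer.
    # All constants below are illustrative assumptions, not measurements.

    PARAMS = 100e9      # assumed 100B-parameter dense model
    CONTEXT = 32_000    # assumed prefill length in tokens

    # Standard approximation: a forward pass costs ~2 FLOPs per parameter
    # per token (ignoring the attention term, which only adds to the bill).
    prefill_flops = 2 * PARAMS * CONTEXT

    # Hypothetical sustained matmul throughput, in FLOP/s.
    ACCELERATOR_FLOPS = 1e15  # ~1 PFLOP/s-class tensor-core part
    RASTER_GPU_FLOPS = 30e12  # ~30 TFLOP/s-class raster-focused GPU

    print(f"prefill work: {prefill_flops:.2e} FLOPs")
    print(f"tensor-core part: {prefill_flops / ACCELERATOR_FLOPS:.1f} s")
    print(f"raster-focused GPU: {prefill_flops / RASTER_GPU_FLOPS:.1f} s")

At a rough 30x deficit in sustained matmul throughput, time-to-first-token on a long prompt goes from seconds to minutes, no matter how much unified memory is attached.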
brcmthrowaway 2 hours ago
This doesn't get talked about enough - the GPU is weak, weak, weak. And anyone who can fix it will go to a serious AI company (for 2-3x the salary).