CamJN 9 hours ago

It's this part: "The top SKU has a similar performance and efficiency profile to the base M5 processor along with faster graphics performance." That claim is naive. It has been the standard lie told by Intel for as long as Apple Silicon has existed: "Ignore everything we've ever done or promised before, our NEXT gen will be as fast and power efficient as Apple's! We promise this time!" It has never been true, and honestly I don't think it CAN be true when they have to give over a full third of their transistor budget just to decode the abomination that is x86_64.

dangus 7 hours ago

Proper testing and benchmarks don’t lie. I’m not sure why you think this is an impossible feat.

https://youtu.be/Xjkzb-j6nKI

At the 12:00 mark, you can see that Panther Lake performs better in Cyberpunk 2077 than the M5, with less power draw.

At 6:25, Panther Lake is barely behind the M5 in Cinebench: a slightly lower score at the same wattage.
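
To put "slightly lower score at the same wattage" in plain arithmetic terms, efficiency here is just score divided by watts. A minimal sketch in C, with placeholder numbers rather than figures taken from the video:

    #include <stdio.h>

    /* Hypothetical illustration only: the scores and wattages below are
     * placeholders, not measurements from the video. Efficiency is simply
     * benchmark score divided by package power. */
    int main(void) {
        double m5_score  = 1000.0, m5_watts  = 25.0; /* placeholder values */
        double ptl_score =  980.0, ptl_watts = 25.0; /* placeholder values */

        printf("M5:           %.1f points/W\n", m5_score  / m5_watts);
        printf("Panther Lake: %.1f points/W\n", ptl_score / ptl_watts);
        return 0;
    }

At the same wattage, a slightly lower score means slightly lower points-per-watt, i.e. comparable but not equal efficiency.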

And don’t forget, the M5 is years away from fully supporting Linux; we’re only now talking about the M3 getting decent support.

If you’re the kind of person who wants a thin-and-light laptop for productivity and also wants to fire up some light games here and there, it’s hard to argue that an M5 MacBook Air is the right system for you. Even with recent strides in game compatibility, macOS is a terrible gaming platform that really can’t hold a candle to Windows or Linux on x86, and Panther Lake graphics smokes the M5.

Obviously a Mac with macOS is a better choice for things like video editing.

bigyabai 8 hours ago

It's believable. AMD's x86 APUs were basically neck-and-neck with the M1 in performance, and when you normalize for production processes AMD was actually more efficient under load: https://www.notebookcheck.net/M1-vs-R7-4800U_12937_11681.247...

x86 is a minority of the issue compared to securing cutting-edge nodes and optimizing for big.LITTLE. And once you factor in all of the dark silicon on Apple Silicon (NPU, anyone?), they've basically butted up against the same wall of wasting transistors on specialized hardware that is obsolete within 3 years of release, minus the ability to cleanly integrate it with compiler tech for efficiency gains, à la SSE/AVX.
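
The SSE/AVX point is worth spelling out: those are general-purpose ISA extensions that an ordinary compiler can target automatically, while a fixed-function block like an NPU is only reachable through a dedicated framework. A minimal sketch in C (the function is my own illustration, not from either vendor's docs):

    /* A plain scalar loop: gcc or clang will auto-vectorize this to
     * SSE/AVX (e.g. gcc -O3 -mavx2) with no source changes, which is
     * the "clean compiler integration" referred to above. There is no
     * comparable transparent path onto an NPU. */
    void saxpy(float a, const float *x, float *y, int n) {
        for (int i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i]; /* compiled to packed vmulps/vaddps */
        }
    }

That transparency is why SIMD transistors keep paying off across software generations, whereas NPU-style blocks sit idle unless applications are rewritten against them.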