| ▲ | jsheard 11 hours ago |
| Does anyone know if M3 support is likely to lead to M4 or M5 support in relatively short order? AIUI M3 took a long time because it was a substantial departure from M1/M2, especially in the GPU architecture, but I don't know if M4 or M5 made similar leaps. |
|
| ▲ | adgjlsfhk1 11 hours ago | parent | next [-] |
| The main reason M3 took a long time isn't M3 itself, but rather that the Asahi project took on a ton of tech debt to get M1/M2 working. M3 wasn't too difficult, but before taking on additional tech debt, the Asahi team focused on getting all of their existing changes upstreamed into the Linux kernel. |
| |
| ▲ | monocasa 10 hours ago | parent | next [-] | | The main developer was also the target of a harassment campaign from a place that has pushed other targets to straight up suicide. That sapped almost all of their energy over the last year, and they ended up quitting. | | |
| ▲ | xattt 10 hours ago | parent [-] | | > The main developer was also the target of a harassment campaign from a place that has pushed other targets to straight up suicide.
Is this the Torvalds/Hector dispute that comes up in the Google AI summary, or was it the kind of three-letter-agency harassment that Aaron Swartz faced? | | |
| ▲ | gpm 10 hours ago | parent | next [-] | | Neither, actually... It was an anti-trans/Kiwi Farms brigade... The Torvalds dispute probably came about in part because of defensive behavior triggered by this brigade, but was really unrelated. | |
| ▲ | alright2565 10 hours ago | parent | prev [-] | | Anti-trans hate. | | |
| ▲ | ggljejejj 9 hours ago | parent [-] | | Trans hate. | | |
| ▲ | OJFord 8 hours ago | parent [-] | | GP definitely meant the same thing, i.e. 'hate [that is] anti-transsexualism' vs. your 'hate [against] transsexualism'. | | |
| ▲ | nrabulinski 4 hours ago | parent [-] | | FYI, transsexual is an outdated term, with transgender being generally preferred instead :) |
| |
| ▲ | tgtweak 10 hours ago | parent | prev [-] | | The prognosis, then, is that the work for M4/M5 should be relatively straightforward now that the refactoring is done? |
| ▲ | OGEnthusiast 11 hours ago | parent | prev | next [-] |
| M4 is apparently even harder because of some new hardware-level page-table protections. Source from an Asahi contributor: https://social.treehouse.systems/@sven/114278224116678776 |
| |
| ▲ | eddyg 10 hours ago | parent [-] | | Memory Integrity Enforcement, perhaps? https://security.apple.com/blog/memory-integrity-enforcement... | | |
| ▲ | worldsavior 10 hours ago | parent [-] | | It's the "Secure Page Table Monitor": https://support.apple.com/en-il/guide/security/sec8b776536b/.... The kernel requires it, so they need to emulate SPTM. | | |
| ▲ | nrabulinski 4 hours ago | parent | next [-] | | This is not exactly correct. They wouldn’t need to emulate SPTM, since SPTM is already running. To be precise, SPTM is a “process” running at a privilege level separate from the regular privilege levels found on ARM processors.
The reason it’s a pain is that, pre-M4, the bootloader gave you complete control over the CPU, including Apple-exclusive extensions like GLx, the special privilege levels that e.g. SPTM runs at. Since the M4, the bootloader handles that itself, so the Asahi team has to either cope with being dropped in after GL is already initialized and locked down, or run in a mode with all of the Apple extensions disabled.
So it’s not a problem for running Linux, but it is a problem for running macOS under a thin abstraction layer that intercepts its communication with devices like the GPU, which is what made reverse engineering significantly easier for them. | |
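To make the "thin abstraction layer" point concrete, here is a rough hypothetical sketch (Python, not the actual Asahi/m1n1 code) of the general technique it enables: leave the device's MMIO window unmapped for the guest, trap every access, log it, and forward it to the real hardware so macOS keeps running while a trace is recorded. All names, addresses and the trap hook below are invented for illustration.

    # Hypothetical illustration only -- not Asahi/m1n1 code. A shim running at
    # a higher privilege level than the guest leaves the GPU's MMIO window
    # unmapped in the guest's page tables; every access traps into a handler
    # like this one, gets logged, and is then replayed against the real hardware.

    GPU_MMIO_BASE = 0x2_0000_0000   # invented physical base of the GPU register window
    trace_log = []                  # (op, offset, value, width) tuples

    def forward_to_hardware(addr, is_write, value, width):
        # Stand-in for the privileged load/store the shim would really perform;
        # a read just returns a dummy value here.
        return None if is_write else 0xDEADBEEF

    def handle_mmio_trap(addr, is_write, value, width):
        offset = addr - GPU_MMIO_BASE
        result = forward_to_hardware(addr, is_write, value, width)
        logged = value if is_write else result
        trace_log.append(("W" if is_write else "R", offset, logged, width))
        print(f"GPU {'W' if is_write else 'R'} +0x{offset:05x} = 0x{logged:08x} ({width * 8}-bit)")
        return result

    # Fake session: the "guest" writes a command register, then reads status.
    handle_mmio_trap(GPU_MMIO_BASE + 0x100, True, 0x1, 4)
    handle_mmio_trap(GPU_MMIO_BASE + 0x104, False, 0, 4)

The whole approach only works if the shim can sit at a privilege level above the guest kernel, which is exactly the kind of control the M4 bootloader change takes away.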
| ▲ | eddyg 8 hours ago | parent | prev [-] | | Thanks! |
| ▲ | zozbot234 11 hours ago | parent | prev [-] |
| The M5 reportedly has a newer-generation GPU compared to the M3/M4. For one thing, the GPU-side Neural Accelerators are obviously new to the M5 series. Other details are harder to know for sure until the hardware gets looked at from a technical POV. |
| |
| ▲ | mananaysiempre 10 hours ago | parent [-] | | It’s not like neural accelerators on non-Apple consumer hardware get much use on Linux, either, so that does not sound like much of a dealbreaker. | | |
| ▲ | wtallis 9 hours ago | parent [-] | | The matrix/tensor math units added to GPUs do see widespread use, both for running LLMs and for the ML-based upscaling used by most video games these days (e.g. NVIDIA DLSS). The NPUs that are separate from the GPU and designed more with efficiency in mind than raw performance are a different thing, and those are what's still looking for a killer app in spite of all the marketing effort. |
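To illustrate the distinction with a hedged sketch (framework and sizes chosen arbitrarily, not tied to any particular Asahi work): the GPU's matrix units are reached through the ordinary GPU compute stack, e.g. a framework matmul like the one below, and whether dedicated matrix hardware is actually used depends on the chip and on the backend's kernel choice.

    # Rough illustration: a large reduced-precision matmul is the kind of
    # workload GPU matrix/tensor units and ML upscalers are built for.
    import torch

    if torch.backends.mps.is_available():      # Apple GPU via Metal
        device = "mps"
    elif torch.cuda.is_available():            # NVIDIA GPU (tensor cores)
        device = "cuda"
    else:
        device = "cpu"

    # fp16 on the GPU paths; fall back to fp32 on CPU for portability
    dtype = torch.float16 if device != "cpu" else torch.float32
    a = torch.randn(4096, 4096, dtype=dtype, device=device)
    b = torch.randn(4096, 4096, dtype=dtype, device=device)
    c = a @ b    # dispatched through the normal GPU compute stack
    print(c.shape, c.dtype, device)

An NPU, by contrast, is not on this code path at all; it sits behind its own driver and vendor API (e.g. Core ML / the ANE on macOS), which is part of why it is still looking for a killer app on Linux.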