steveklabnik 5 days ago
"This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today. Also, in this specific case, this ignores the history around LLVM offering itself up to the FSF. gcc could have benefitted from this fresh start too. But purely by accident, it did not. | ||||||||||||||||||||||||||||||||

FitCodIa 5 days ago
> "This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today. On my system, "dnf repoquery --whatrequires cross-gcc-common" lists 26 gcc-*-linux-gnu packages (that is, kernel / firmware cross compilers for 26 architectures). The command "dnf repoquery --whatrequires cross-binutils-common" lists 31 binutils-*-linux-gnu packages. The author writes, "LLVM and all cross compilers that follow it instead put all of the backends in one binary". Do those compilers support 25+ back-ends? And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user? My impression is that the author does not understand the modularity of gcc cross compilers / packages because he's unaware of (or doesn't care for) the scale that gcc aims at. | ||||||||||||||||||||||||||||||||

AceJohnny2 5 days ago
I'd love to learn what accident you're referring to, Steve! I vaguely recall the FSF (or maybe only Stallman) arguing against the modular nature of LLVM because a monolithic structure (like GCC's) makes it harder for anti-GPL actors (Apple!) to undermine it. Was this related?