| ▲ | James_K 2 days ago |
| Here's a thought: just distribute source code. ABI issues should be mostly fixed. Most computers can compile source code fast enough for the user not to notice, and can cache the results so that it's never a problem again. If you want optimised code, you can do a source-to-source optimisation, then zip and minify the file. You could compile such a library to approximately native speed without much user-end lag using modern JIT methods, and maybe even run LTO in a background thread so that the executables outdo dynamically linked ones. |
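A minimal sketch of that compile-once-and-cache idea, in C; everything here (app.c, the ~/.cache/srcdist path, plain cc) is an invented example rather than a real tool:

    /* Hypothetical launcher: hash the shipped source, reuse a cached native
       binary on a hit, compile once on a miss, then exec the result. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* FNV-1a over the file's bytes, used as the cache key. */
    static unsigned long long hash_file(const char *path) {
        unsigned long long h = 1469598103934665603ULL;
        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); exit(1); }
        for (int c; (c = fgetc(f)) != EOF; )
            h = (h ^ (unsigned long long)(unsigned char)c) * 1099511628211ULL;
        fclose(f);
        return h;
    }

    int main(int argc, char **argv) {
        (void)argc;
        const char *home = getenv("HOME") ? getenv("HOME") : ".";
        char dir[512], bin[1024], cmd[2048];
        snprintf(dir, sizeof dir, "%s/.cache/srcdist", home);
        snprintf(bin, sizeof bin, "%s/app-%llx", dir, hash_file("app.c"));
        if (access(bin, X_OK) != 0) {   /* cache miss: compile exactly once */
            mkdir(dir, 0755);           /* ~/.cache itself usually exists   */
            snprintf(cmd, sizeof cmd, "cc app.c -o %s", bin);
            if (system(cmd) != 0) return 1;
        }
        execv(bin, argv);               /* every later run starts natively  */
        return 1;
    }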
|
| ▲ | RedShift1 2 days ago | parent | next [-] |
| I hate compiling. 9 times out of 10, something goes wrong. Sometimes I can fix it, other times I just abandon the effort and use something else. These days I just use packages or Docker images, and if that doesn't work out I'm moving on; ain't nobody got time for this. You really can't expect people who just want to use their computers, and don't even know what a compiler is, to get involved in a process like that. |
| |
| ▲ | ses1984 2 days ago | parent | next [-] | | No, end users need not get involved, it could/should be handled by the operating system. | | |
| ▲ | HKH2 2 days ago | parent [-] | | Gentoo? Compiling big packages takes ages. | | |
| ▲ | adrian_b 2 days ago | parent | next [-] | | More than 20 years ago, with a single-core Pentium 4, it could indeed take something like 3 days of continuous compilation to build an entire Gentoo distribution, in order to have a personal computer with every application one might want. However, already after the appearance of the first dual-core AMD Athlon64, 20 years ago, that time could be reduced to not much more than half a day, while nowadays, with a decent desktop CPU from 5 years ago, most Gentoo packages can be compiled and installed in less than a minute. There are only a few packages whose compilation and installation can take noticeable time, up to tens of minutes, depending on the chosen options and the number of CPU cores, e.g. Firefox, LibreOffice, LLVM. There is only a single package whose compilation may take ages unless you have an expensive CPU and enough memory per core: Google Chromium (including its derivatives that use the same code base). | |
| ▲ | seba_dos1 2 days ago | parent | prev | next [-] | | I can effortlessly compile Debian packages on my phone. With some limits, of course. I can't compile Chromium even on my laptop. But most stuff I can. | |
| ▲ | James_K a day ago | parent | prev [-] | | Not Gentoo. The system I'm suggesting is one where you replace the linker with one that accepts a platform-agnostic assembly code with similar semantics to C, plus perhaps some additional tools for stack traversal to allow the implementation of garbage-collected languages and exceptions. Compiling would be fast because the code you distribute would already have had optimisations applied to it in a pre-compilation step. It would also be automatic in the same way the present linker system is automatic. |
|
| |
| ▲ | James_K 2 days ago | parent | prev [-] | | The idea is that software would only be shipped if it actually works, and part of this would be ensuring it compiles. I'm not suggesting we use makefiles or some other foul approach like that. Code should ideally be distributed as a single zip file with an executable header that triggers it to be unpacked into some cache directory by a program analogous to the linker, and compiled by the simple process of compiling all source files. Additionally, this could be sped up drastically by having the compiler be a single executable with uniform flags across files. Header files could be parsed once and shared between build threads in parallel, which alone would save much work on the part of the compiler. If this still proves insufficient, a language with superior semantics to C could be chosen that can be compiled faster. Most code shipped these days is JS, which is compiled on the user's computer. This would be an order-of-magnitude improvement on that from a performance perspective. |
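A rough sketch of what such a self-extracting stub could look like in C, assuming the zip is appended to the executable (Info-ZIP's unzip tolerates data prepended to an archive, which is how self-extracting zips work); /proc/self/exe is Linux-specific, and the names myapp and ~/.cache/srcdist are made up:

    /* Hypothetical stub: on first run, unpack the appended zip into a cache
       directory, compile every source file with one uniform compiler
       invocation, then exec the result; later runs skip straight to exec. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        (void)argc;
        const char *home = getenv("HOME") ? getenv("HOME") : ".";
        char dir[512], bin[1024], cmd[4096];
        snprintf(dir, sizeof dir, "%s/.cache/srcdist/myapp", home);
        snprintf(bin, sizeof bin, "%s/myapp", dir);
        if (access(bin, X_OK) != 0) {          /* first run only */
            snprintf(cmd, sizeof cmd,
                     "mkdir -p %s && unzip -oq /proc/self/exe -d %s"
                     " && cc %s/*.c -o %s", dir, dir, dir, bin);
            if (system(cmd) != 0) return 1;
        }
        execv(bin, argv);                      /* cached from now on */
        return 1;
    }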
|
|
| ▲ | m463 2 days ago | parent | prev | next [-] |
| The first launch of Firefox would take a few hours, let alone the first boot of the Linux kernel... :) |
| |
| ▲ | adrian_b 2 days ago | parent | next [-] | | This is somewhat exaggerated. The compilation of Firefox could take a few hours on some dual-core laptop Skylake CPU from 10 years ago. Nowadays, on any decent desktop CPU with many cores, the compilation of Firefox should take significantly less than an hour, though it remains one of the handful of open-source applications with a really long, non-negligible compilation time. The Linux kernel is normally compiled much faster than Firefox, except when one enables the compilation of all existing kernel modules, for all the hardware that Linux could support, even though almost all of it is not present and never will be present on the target computer system. | |
| ▲ | James_K 2 days ago | parent | prev [-] | | Just because you are used to slow compilers doesn't mean fast ones are impossible. As I said in the original post, code optimisation can be done before compilation on the developer's machine, so all that need be done on the target is a simple debug build. An example is Jonathan Blow's JAI compiler, which compiles around 250,000 lines of code per second. Even on very slow hardware this might drop to perhaps 80,000 LoC/s, about two minutes for 10 million lines of code. Most users would tolerate an "install" box with a progress bar that takes at most two minutes for only the largest application on the computer. Or perhaps the browser people would opt not to distribute their software in this way; advanced software may have advanced needs beyond those of the average application. Most code these days is written in JavaScript, meaning it begins life being interpreted and is JIT-compiled on the user's computer. Few seem to notice the compilation, but many complain about the general sluggishness. If you offered users a deal (spend one minute installing the software and it will be snappy and responsive when you use it), I suspect all would accept. | | |
| ▲ | favorited a day ago | parent [-] | | > An example is Jonathon Blow's JAI compiler I thought you were advocating "just distribute source code" – JAI is a closed-source language that, in its decade of development, has never been used for a significant project. | | |
| ▲ | James_K a day ago | parent [-] | | I fail to see how that relates to my using it as an example of a fast compiler. For that matter, my suggested approach doesn't require releasing source code, just some kind of programming-language code. You could minify it, pre-optimise it, or do any number of things that would mean it isn't source code but instead becomes derived code (as I said in my original comment). Distributing textual code is not the same as being open source. I'm really just saying you should distribute something that isn't a platform-specific blob with a fixed binary interface. You could, for instance, use LLVM IR to achieve this, which is little more than a kind of portable assembly. LLVM can apply heavy optimisations to this code on the developer's machine, then perform only the last step of code generation on the user's machine. |
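As a concrete sketch of that split (file names invented, and assuming clang is available on the user's machine): the developer ships pre-optimised bitcode, and the user-side step is only code generation:

    /* Hypothetical user-side "link" step for the LLVM IR scheme. The
       developer already ran the expensive passes and shipped app.bc,
       e.g. produced with: clang -O3 -emit-llvm -c app.c -o app.bc
       so the user's machine only lowers bitcode to native code. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* clang accepts LLVM bitcode as input; -O0 here because the
           IR-level optimisation already happened on the developer's machine. */
        if (system("clang -O0 app.bc -o app") != 0) {
            fprintf(stderr, "code generation failed\n");
            return 1;
        }
        return 0;
    }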
|
|
|
|
| ▲ | mrheosuper 2 days ago | parent | prev [-] |
| a.k.a. the Rust approach |
| |