almostgotcaught 7 hours ago
I have no idea what you're saying. I'm well aware that compilers do lots of things, but this sentence in your original comment

> compiled machine code exactly as-is, and had an instruction-updating pass

implies there should be silicon that implements the instruction updating. What else would be "executing" compiled machine code other than the machine itself?
pornel 4 hours ago | parent
I was talking about a software pass. Currently, the machine code stored in executables (such as ELF or PE) is only slightly patched by the dynamic linker and is then expected to be directly executable by the CPU. The code in the file already has to be compatible with the target CPU, otherwise you hit illegal instructions. This is a simplistic approach, dating back to when running an executable was just a matter of loading it into RAM and jumping to its start (old a.out or DOS COM).

What I'm suggesting is adding a translation/fixup step after a binary is loaded, before its code is executed, to make it more tolerant of hardware differences. It doesn't have to be full abstract portable bytecode compilation, and not even as involved as PTX to SASS, but more like a peephole optimizer for the same OS on the same general CPU architecture. For example, on a pre-AVX2 x86_64 CPU, the OS could scan for AVX2 instructions and patch them to do equivalent work using SSE or scalar instructions.

There are implementation and compatibility issues that make it tricky, but fundamentally it should be possible. Wilder things like x86_64 to aarch64 translation have been done, so let's do it for x86_64-v4 to x86_64-v1 too.
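To make the idea concrete, here is a minimal sketch of just the scanning half of such a pass, using the capstone disassembler (not anything the parent comment specifies). The byte buffer and load address are hypothetical stand-ins for a mapped .text section; the actual rewriting step is omitted, since real fixups would usually need trampolines rather than in-place patches.

    # Sketch: walk the machine code of a loaded text segment and flag
    # instructions a pre-AVX2 host CPU cannot run.
    # Assumes the capstone disassembler is installed (pip install capstone).
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64
    from capstone.x86_const import X86_GRP_AVX2

    # Hypothetical code buffer: vpaddd ymm0, ymm1, ymm2 ; add rax, rbx
    code = bytes.fromhex("c5f5fec2" "4801d8")
    base = 0x400000  # pretend load address of the segment

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    md.detail = True  # needed to query instruction groups

    for insn in md.disasm(code, base):
        if insn.group(X86_GRP_AVX2):
            # A real fixup pass would rewrite this into an SSE/scalar
            # sequence, almost certainly via a trampoline, because the
            # replacement rarely fits in the original encoding's length.
            print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}  <- needs fixup")
        else:
            print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")

The hard part this sketch skips is exactly the compatibility issue mentioned above: replacement sequences have different lengths, so the pass has to relocate code or bounce through stubs rather than overwrite bytes in place.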