crq-yml 4 days ago
I don't disagree, but I think the optimization potential tends to be limited to trivial automations in practice. The idioms we use to code at a higher level are mostly wrappers around something we already know how to do at a lower level with a macro (see the sketch below). The compiler still helps, because it guard-rails the process and puts you on the path to success as you build more complex algorithms, but you have to get into very specific niches to achieve both "faster than C" and "easier to code the idiom".

As ever, hardware is an underappreciated factor. We have make-C-go-fast boxes today: that's where the engineering effort has been thrown. They could provide other capabilities that favor different models of computing, but they don't, because the path-dependency effect is very strong. The major exceptions come from including alternate modalities like GPU, hardware DSP and FPGA in the picture, where the basic aim of the programming model is different.
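A minimal sketch of the "wrapper" point in C. FOR_EACH here is a made-up convenience macro, not from any library; the "higher-level" idiom expands to exactly the loop you'd write by hand, so there's nothing left for specialized hardware to accelerate:

    #include <stddef.h>
    #include <stdio.h>

    /* Made-up convenience macro: the higher-level idiom is a thin
       wrapper over the loop we would write manually anyway. */
    #define FOR_EACH(i, n) for (size_t i = 0; i < (n); ++i)

    int main(void) {
        double xs[4] = {1.0, 2.0, 3.0, 4.0};
        double sum = 0.0;
        FOR_EACH(i, 4)      /* expands to: for (size_t i = 0; i < 4; ++i) */
            sum += xs[i];
        printf("sum = %f\n", sum);
        return 0;
    }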
holowoodman 4 days ago
> As ever, hardware is an underappreciated factor. We have make-C-go-fast boxes today.

That is a very common misconception. There have been numerous attempts at architectures that cater to higher-level languages. Java-bytecode-interpreting CPUs have been tried, and they were slower at executing bytecode than contemporary "normal" CPUs. Phones and smartphones were supposed to be a hot market for those; it didn't fly, and native bytecode execution is dead nowadays. Object orientation in CPUs has been tried, as in Intel's iAPX 432. Type-tagged pointers and typed memory have been tried. HLLCAs were all the rage for a while (https://en.wikipedia.org/wiki/High-level_language_computer_a...). Early CISC CPUs had linked lists as a fundamental data type. Lisp machines did a ton of stuff in hardware: GC, types, tagging, polymorphism.

None of it worked out; more primitive hardware with more complex interpreters and compilers always won. Not because C was all the rage at the time, but because high-level features in hardware are slow and inflexible. What came of it was the realization that a CPU must be fast and flexible, not featureful. That's why we got RISC, and why CISC processors like x86 internally translate down to RISC-like micro-ops. The only things that added a little more complexity were SIMD and streaming architectures, but only in the sense that you could do more of the same in parallel, not that HLL constructs were implemented directly in the CPU.
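To make the "compilers won" point concrete: the type tagging that Lisp machines did in hardware is routinely done in software on stock CPUs with a couple of ordinary ALU instructions. A minimal sketch in C; the tag values, helper names and fixnum encoding are illustrative, not any particular runtime's:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* With 8-byte-aligned heap objects, the low 3 bits of a pointer
       are always zero, so a compiler can keep a type tag there instead
       of asking the hardware to track types. Tag values are made up. */
    enum { TAG_FIXNUM = 1, TAG_MASK = 7 };

    typedef uintptr_t value;

    static value make_fixnum(intptr_t n) {
        return ((uintptr_t)n << 3) | TAG_FIXNUM;  /* shift payload, OR in tag */
    }

    static intptr_t fixnum_val(value v) {
        assert((v & TAG_MASK) == TAG_FIXNUM);     /* type check: one AND, one compare */
        return (intptr_t)v >> 3;                  /* arithmetic shift (on common ABIs)
                                                     recovers the integer */
    }

    int main(void) {
        value a = make_fixnum(40), b = make_fixnum(2);
        value sum = make_fixnum(fixnum_val(a) + fixnum_val(b));
        printf("%ld\n", (long)fixnum_val(sum));   /* prints 42 */
        return 0;
    }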