noelwelsh 2 hours ago
If you have this attitude, I hope you write everything in assembly. Except assembly is decoded into micro-ops, so hopefully you avoid that by using an 8080 (according to a quick search, the last Intel CPU not to have micro-ops). In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) ok, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.
pibaker 2 minutes ago
> In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) ok, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.

Most moral panics over the Evil Big Frameworks are symptoms of mental illness, usually obsessive-compulsive disorder. It need not make sense. We don't negotiate with mental illnesses.
kens an hour ago
Did someone ask about Intel processor history? :-)

The Intel 8080 (1974) didn't use microcode, but there were many later processors that didn't use microcode either. For instance, the 8085 (1976). Intel's microcontrollers, such as the 8051 (1980), didn't use microcode either. The RISC i860 (1989) didn't use microcode (I assume). The completely unrelated i960 (1988) didn't use microcode in the base version, but the floating-point version used microcode for the math, and the bonkers MX version used microcode to implement objects, capabilities, and garbage collection. The RISC StrongARM (1997) presumably didn't use microcode.

As far as x86, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different. With microcode, the CPU executes an instruction by sequentially running a microcode routine, made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine, which runs the micro-ops in whatever order it wants, sorting things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is doing.

My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, one loses sight of what's really going on under the abstractions.
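To make the microcode vs. micro-op distinction concrete, here is a toy Python sketch (my own illustration, not a model of any real Intel design; the instruction, its micro-routine, and its micro-op split are all invented): run_microcoded steps through a fixed routine strictly in order, the way a microcode sequencer does, while run_out_of_order issues decoded micro-ops whenever their inputs are ready, the way an out-of-order engine does.

    # Toy sketch only: contrasts a microcode sequencer with micro-op issue.
    from dataclasses import dataclass

    # Microcoded style: each instruction maps to a fixed routine of
    # micro-instructions that the sequencer runs strictly in order.
    MICROCODE_ROM = {
        "ADD acc, [addr]": [
            "fetch operand from memory",
            "add operand to accumulator",
            "update flags",
        ],
    }

    def run_microcoded(instr: str) -> None:
        for step in MICROCODE_ROM[instr]:
            print(f"  microstep: {step}")

    # Micro-op style: the decoder splits the instruction into RISC-like
    # micro-ops (with renamed temporaries t0, t1), and the engine issues
    # any micro-op whose inputs are ready, in whatever order it likes.
    @dataclass
    class MicroOp:
        name: str
        reads: set
        writes: set

    def decode(instr: str) -> list[MicroOp]:
        assert instr == "ADD acc, [addr]"
        return [
            MicroOp("load t0 <- [addr]",   {"addr"},      {"t0"}),
            MicroOp("add  t1 <- acc + t0", {"acc", "t0"}, {"t1"}),
            MicroOp("set flags from t1",   {"t1"},        {"flags"}),
        ]

    def run_out_of_order(uops: list[MicroOp]) -> None:
        ready = {"addr", "acc"}          # architectural state is available up front
        pending = list(uops)
        while pending:
            for op in pending:
                if op.reads <= ready:    # all data dependencies satisfied
                    print(f"  issue: {op.name}")
                    ready |= op.writes
                    pending.remove(op)
                    break

    if __name__ == "__main__":
        print("microcoded execution:")
        run_microcoded("ADD acc, [addr]")
        print("micro-op execution:")
        run_out_of_order(decode("ADD acc, [addr]"))

The renamed temporaries t0/t1 stand in for register renaming, which is what lets a real out-of-order engine reorder work freely and still "sort things out at the end" so you get the right answer.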
sevensor 35 minutes ago
A good abstraction relieves you of concern for the particulars it abstracts away. A bad abstraction hides the particulars until the worst possible moment, at which point everything spills out in a messy heap and you have to confront all the details. Bad abstractions existed long before React and long before LLMs.
kalterdev 2 hours ago
You can learn JavaScript and code for life. You can’t learn React and code for life. Yeah, JavaScript is an illusion (to be exact, a concept). But it’s the one that we accept as fundamental. People need fundamentals to rely upon.
| ||||||||
8 minutes ago
[deleted]
pessimizer 2 hours ago
Are you seriously saying that you can't understand the concept of different abstractions having different levels of usefulness? That's the law of averages taken to cosmic proportions. If this is true, why have more than one abstraction?
| ||||||||