| ▲ | pornel 4 days ago |
| People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport. To me, skill and effort are misplaced and wasted when they're spent on manually checking invariants that a compiler could check better automatically, or on implementing clever workarounds for language warts that no longer provide any value. Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger, more ambitious problems. |
|
| ▲ | bayindirh 4 days ago | parent | next [-] |
| These types of comments always remind me that we forget, every time, where we come from in terms of computation. It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time. It's easy to publicly shame people who do hard things for a long time in the light of newer tools. However, many people who like these languages have been using them since before the languages we champion today were even ideas. I personally like Go these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but never pry C++ from my cold, dead hands. For the record, I don't like C & C++ because they are hard. I like them because they provide a more transparent window into the processor, which is a glorified, hardware-implemented PDP-11 emulator. Last, we shall not forget that all processors are C VMs, anyway. |
| |
| ▲ | steveklabnik 4 days ago | parent | next [-] | | > It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s. Plus, you are vastly overestimating the expense of borrow checking: it is very fast, and it is not the reason Rust's compile times are slow. You absolutely could have done borrow checking much earlier, even with less computing power available. 1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima... | |
| ▲ | aw1621107 4 days ago | parent | prev | next [-] | | > It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer. | |
| ▲ | nxobject 4 days ago | parent | prev | next [-] | | > It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time. I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed. EDIT: changed Cyclone's "2002" to "2006". | |
| ▲ | pjmlp 4 days ago | parent | prev | next [-] | | I remember; I was there coding in the 1980s, which is how I know C and C++ were not the only alternatives, just the ones that eventually won. | |
| ▲ | unscaled 4 days ago | parent | prev | next [-] | | > the processor, which is a glorified, hardware-implemented PDP-11 emulator. This specific claim seems like gratuitously rewriting history. I can see how you'd feel C (and certain dialects of C++) are "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But that is as far as it goes. C does not represent - by any stretch of the imagination - an accurate computational or memory model of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of the PDP-11" is just preposterous. The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing, or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on top of something that is arguably more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (and even then, the intention was to behave like a first-gen Intel Pentium). | |
| ▲ | raverbashing 4 days ago | parent | prev [-] | | > we shall not forget that all processors are C VMs This idea is some 10 years behind. And no, thinking that C is "closer to the processor" today is incorrect. It makes you think it is close, which in some sense is even worse. | | |
| ▲ | lelanthran 4 days ago | parent [-] | | > This idea is some 10 years behind. Akshually[1] ... > And no, thinking that C is "closer to the processor" today is incorrect. THIS thinking is about 5 years out of date. Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but this opinion was essentially nullified more than 5 years ago. Okay, if C is "not close to the processor", what's closer?
Assembler? After all, if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on, one that has a lower bound which none of the data gets close to? You're repeating something that was fashionable years ago. =========== [1] There's always one. Today, I am that one :-) | | |
| ▲ | steveklabnik 4 days ago | parent | next [-] | | Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do. > After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there. | | |
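A minimal C sketch of the distinction above, assuming a GCC/Clang toolchain on x86-64 (the function name `rdtsc_sketch` is illustrative): the extended-asm syntax is a compiler extension, while ISO C only lists `asm` as a common extension and defines no inline-assembly semantics of its own.

```c
#include <stdint.h>

/* GNU extended asm is a GCC/Clang extension; ISO C does not define
   inline assembly, it only mentions `asm` as a common extension. */
static inline uint64_t rdtsc_sketch(void)
{
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi)); /* x86-64 only */
    return ((uint64_t)hi << 32) | lo;
#else
    return 0; /* no portable, standard-C spelling for this */
#endif
}
```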
| ▲ | lelanthran 4 days ago | parent [-] | | > Some people literally believe that C maps directly to the machine, when it does not. Maybe they did, 5 years (or more) ago when that essay came out. It was wrong even then, but repeating it is even more wrong. > This is just a factual inaccuracy. No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware. Jumping in to destroy the argument when no one is making it is almost a textbook example of strawmanning. | |
| |
| ▲ | adgjlsfhk1 4 days ago | parent | prev | next [-] | | Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet. | | |
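One hedged C illustration of the kind of instruction meant here (function names are illustrative): population count is a single instruction on most current CPUs, but until C23's <stdbit.h> there was no standard C way to ask for it, so you either write the loop and hope the optimizer recognizes the idiom, or reach for a compiler-specific builtin.

```c
#include <stdint.h>

/* Portable ISO C: the optimizer may or may not turn this loop into a
   single POPCNT (x86) / CNT (ARM) instruction. */
static unsigned popcount_portable(uint64_t x)
{
    unsigned n = 0;
    while (x) {
        x &= x - 1; /* clear the lowest set bit */
        n++;
    }
    return n;
}

#if defined(__GNUC__) || defined(__clang__)
/* Compiler extension, not standard C: reliably maps to the hardware
   instruction when the target has one. */
static unsigned popcount_builtin(uint64_t x)
{
    return (unsigned)__builtin_popcountll(x);
}
#endif
```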
| ▲ | lelanthran 4 days ago | parent [-] | | > Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet. Well, you're talking about languages that don't have standards; they have a reference implementation. IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics. |
| |
| ▲ | raverbashing 4 days ago | parent | prev [-] | | > Okay, if C is "not close to the processor", what's closer? LLVM IR is closer. Still higher level than Assembly. The problem is this: char a,b,c; c = a+b; What that means, and what it compiles to, could not be more different between x86 and ARM. | |
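A runnable variation of the snippet above, with values picked to make the difference observable: the addition happens after integer promotion to int, and whether plain char is signed is an ABI choice (typically signed on x86-64 Linux, unsigned on ARM AAPCS targets), so the same source line carries different semantics and compiles to different instruction sequences.

```c
#include <stdio.h>

int main(void)
{
    char a = 0xFF; /* implementation-defined: -1 where plain char is signed, 255 where it is unsigned */
    char b = 1;
    int  c = a + b; /* both operands are promoted to int before the add */

    /* Typical x86-64 Linux ABI: plain char is signed, a == -1, prints 0.
       Typical ARM AAPCS ABI:    plain char is unsigned, a == 255, prints 256.
       Same source, different semantics, different generated code. */
    printf("%d\n", c);
    return 0;
}
```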
| ▲ | lelanthran 4 days ago | parent [-] | | > LLVM IR is closer. Still higher level than Assembly So your reasoning for repeating the once-fashionable statement is that "an intermediate representation that no human codes in is closer than the source code"? |
|
|
|
|
|
| ▲ | throwawaymaths 4 days ago | parent | prev [-] |
To me, a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module. |
| |
| ▲ | pornel 4 days ago | parent | next [-] | | Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics. For example, if a language has non-nullable types, then you get this information locally for free everywhere, even from 3rd party code. When the language doesn't track it, then you need a linter that can do symbolic execution, construct call graphs, data flows, find every possible assignment, and still end up with a lot of unknowns and waste your time on false positives and false negatives. Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms. And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues. | | |
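A small C sketch of the dead end being described (function names are illustrative): nothing in the signature can say "this pointer is never null", so no local check can settle the question, and a tool is pushed into exactly the caller-chasing, whole-program analysis described above.

```c
#include <stddef.h>
#include <string.h>

/* The type system cannot express "name is never NULL", so whether the
   strlen call is safe depends on every call site in the whole program,
   including third-party code a linter may never see. */
size_t display_width(const char *name)
{
    return strlen(name); /* undefined behavior if any caller passes NULL */
}

/* A defensive check doesn't recover the lost information either: it
   silently turns a possible bug into "return 0", and has to be repeated
   (or forgotten) in every function that takes a pointer. */
size_t display_width_checked(const char *name)
{
    return name ? strlen(name) : 0;
}
```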
| ▲ | throwawaymaths 2 days ago | parent [-] | | > Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart This assertion is known to be disproven. seL4 is a fully memory-safe (and with even more safety baked in) major systems-programming behemoth that is written in C + annotations, where the analysis is conducted in a sidecar. To obtain extra safety (though still not as much as seL4) in Rust, you must add a sidecar in the form of MIRI; nobody proposes folding MIRI into Rust. Now, it is true that seL4 is a pain in the ass to write, compile, and check, but there is a lot of unexplored design space in the spectrum between Rust, Rust+MIRI, and seL4. |
| |
| ▲ | fluoridation 4 days ago | parent | prev [-] | | If the compiler is not checking them, then it can't assume them, and that reduces the opportunities for optimization. If the checks don't run in the compiler, then they're not running every time; and if you do want them to run every time, then they may as well live in the compiler instead. |
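A standard C illustration of that trade-off (function names are illustrative): without aliasing information the compiler must assume `out` and `factor` may overlap and re-read the scale factor every iteration; `restrict` supplies the assumption so it can optimize, but nothing checks the claim.

```c
#include <stddef.h>

/* Without aliasing information the compiler must assume that writing
   out[i] could modify *factor, so *factor is typically reloaded on
   every iteration. */
void scale_all(float *out, const float *factor, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] *= *factor;
}

/* `restrict` hands the compiler the "no aliasing" invariant, letting it
   hoist the load of *factor out of the loop. But the claim is never
   checked: pass overlapping pointers and the behavior is undefined. */
void scale_all_restrict(float *restrict out, const float *restrict factor, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] *= *factor;
}
```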
|