WalterBright a day ago

I learned about DFA (Data Flow Analysis) optimizations back in the early 1980s. I eagerly implemented them for my C compiler, and it was released as "Optimum C". Then came the C compiler roundup benchmarks in the programming magazines. I breathlessly opened the issue, only to find the reviewer's verdict that Optimum C was a bad compiler because it deleted the code in the benchmarks. (The reviewer wrote that Optimum C was cheating by recognizing the specific benchmark code and deleting it.)

I was really, really angry that the reviewer had not attempted to contact me about this.

But the other compiler vendors knew what I'd done; by the next year the competition had implemented DFA as well, and the benchmarks were updated.

The benchmarks were things like:

    void foo() { int i, x = 1; for (i = 0; i < 1000; ++i) x += 1; }
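
Data flow analysis sees that `x` is written but never read, so every `x += 1` is a dead store; removing the stores leaves a loop with no observable effect, which gets deleted too. What survives is just:

    void foo() { }  /* the entire body is provably dead */
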
userbinator 20 hours ago | parent | next

Embedded code, where "do nothing" loops are common for timing purposes, is often marked with "do not optimise" pragmas specifically because eliminating dead code has become the norm. On the other hand, many codebases have also become dependent on this default trimming behaviour and would be disgustingly bloated with dead code if it weren't for the compiler helping to remove some of it.
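
For instance, a busy-wait delay typically needs `volatile` (or a vendor's "do not optimise" pragma) to survive. A minimal sketch, with a made-up name and iteration count:

    /* Hypothetical busy-wait delay. Without the volatile qualifier the loop
       has no observable effect and the optimizer is free to delete it. */
    void delay_loop(void)
    {
        for (volatile int i = 0; i < 100000; i = i + 1)
            ;  /* each access to i must actually happen, so the loop stays */
    }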

norir a day ago | parent | prev

That's a terrible benchmark, but the correct thing to do is not to eliminate the code but to issue an error or warning that this is a CPU-cycle-burning no-op.

WalterBright 21 hours ago | parent | next

The more high-level code is used, the more purposeless code there is to eliminate. For example, an RAII object with an empty default destructor.
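
A minimal C++ sketch of that case (the `Guard` name is made up): once the empty constructor and destructor are inlined, the object is pure dead code, and the zero-cost abstraction works precisely because the compiler deletes it without comment.

    struct Guard {
        Guard() { }    // nothing to acquire in this configuration
        ~Guard() { }   // empty default destructor: nothing to release
    };

    void work() {
        Guard g;       // after inlining, creating and destroying g is a no-op
        /* ... real work ... */
    }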

MatejKafka 20 hours ago | parent | prev | next

That's hard to implement, because constructs like this are typically the result of earlier passes (macro expansion, inlining, dead code elimination, ...); they usually aren't written by the user directly.
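
For instance, compiling out a logging macro can leave behind exactly the benchmark's no-op loop, even though nobody wrote one on purpose. A hypothetical sketch:

    #define LOG(msg) ((void)0)   /* logging disabled in this configuration */

    void scan(int n) {
        for (int i = 0; i < n; ++i)
            LOG("visited");      /* expands to ((void)0): the loop is now empty */
    }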

munificent 21 hours ago | parent | prev | next

That would make for a bad experience in the presence of macros or other compile-time configuration.

It's pretty common to have code that exists in one configuration but not others. There, you end up with some obviously pointless code that the compiler should silently discard. It would be no fun if you couldn't compile your release build because the compiler yelled at you that the removed `assert()`s turned some of the surrounding code into dead code.
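
A sketch of that situation (names made up): with `NDEBUG` defined, `assert` expands to nothing, and the value computed for it becomes dead code that the compiler should discard without a diagnostic.

    #include <assert.h>

    int checked_get(const int *table, int i, int size) {
        int in_range = (i >= 0 && i < size);  /* consumed only by the assert */
        assert(in_range);    /* under NDEBUG this expands to nothing, leaving */
        return table[i];     /* in_range as dead code to remove, not warn about */
    }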

netbioserror a day ago | parent | prev

Where do you draw the line on this logic? Should we never inline or unroll, and only warn? Should we never re-order, and only warn? The CPU is doing its own re-ordering, which can't be manually controlled; should we pick instructions to force ordering and warn about that too?

I can understand if we'd want to emit perfectly matching control flow logic with certain debug flags to verify correctness in specific scenarios. But I'd want it to be opt-in.