▲ WalterBright | a day ago
I learned about DFA (Data Flow Analysis) optimizations back in the early 1980s. I eagerly implemented them for my C compiler, and it was released as "Optimum C". Then came the C compiler roundup benchmarks in the programming magazines. I breathlessly opened the issue, and was faced with the reviewer's verdict that Optimum C was a bad compiler because it deleted the code in the benchmarks. (The reviewer wrote that Optimum C was cheating by recognizing the specific benchmark code and deleting it.) I was really, really angry that the reviewer had not attempted to contact me about this. But the other compiler vendors knew what I'd done; the competition implemented DFA as well by the next year, and the benchmarks were updated. The benchmarks were things like:
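The specific benchmark code isn't quoted above, but a minimal sketch of the kind of loop the story describes would look like this (hypothetical names; the point is that the loop's result is never used, so data flow analysis can prove it dead and delete it):

```c
/* Hypothetical reconstruction of a 1980s-style magazine benchmark:
   a loop that computes a value nobody ever reads. */
long burn_cycles(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i;            /* the "work" being timed */
    return sum;
}

void benchmark(void) {
    /* The result is discarded. After inlining, data flow analysis
       proves the whole loop dead and removes it, so the "benchmark"
       measures nothing at all. */
    (void)burn_cycles(100000000L);
}
```

A compiler without DFA runs the full loop; one with DFA emits an empty function, which is exactly why Optimum C's benchmark times looked "impossibly" good.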
▲ userbinator | 20 hours ago | parent
In embedded code, where "do nothing" loops are common for timing purposes, such loops are often marked with "do not optimise" pragmas precisely because eliminating dead code has become the norm. On the other hand, many codebases have also become dependent on this default trimming behaviour and would be disgustingly bloated with dead code if the compiler weren't helping to remove some of it.
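Besides vendor-specific "do not optimise" pragmas (which vary by toolchain), a portable way to keep a timing loop alive is a `volatile` loop counter; a minimal sketch, with a hypothetical function name:

```c
/* Crude busy-wait delay. Declaring the counter volatile forces the
   compiler to perform every load and store, so dead-code elimination
   cannot remove the loop even though it computes nothing useful. */
unsigned long delay_loop(unsigned long iterations) {
    volatile unsigned long i = 0;
    while (i < iterations)
        i++;                 /* each increment must actually happen */
    return i;                /* equals iterations on return */
}
```

Note that a plain (non-volatile) counter here would let the optimizer delete the loop entirely, reproducing exactly the behaviour the parent comment describes.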
▲ norir | a day ago | parent
That's a terrible benchmark, but the correct thing to do is not to silently eliminate the code; it is to issue an error or warning that this is a CPU-cycle-burning no-op.