norir, a day ago:
That's a terrible benchmark, but the correct thing to do is not to eliminate the code; it's to issue an error or warning that this is a CPU-cycle-burning no-op.
WalterBright, 21 hours ago:
The more high-level the code, the more purposeless code there is to eliminate. For example, an RAII object with an empty default destructor.
MatejKafka, 20 hours ago:
That's hard to implement, because constructs like this are typically the result of earlier passes (macro expansion, inlining, dead code elimination, ...), not something the user wrote directly.
munificent, 21 hours ago:
That would make for a bad experience in the presence of macros or other compile-time configuration. It's pretty common to have code that exists in one configuration but not others, which leaves some obviously pointless code that the compiler should silently discard. It would be no fun if your release build failed to compile because the compiler yelled at you that the removed `assert()`s had turned some of the surrounding code into dead code.
netbioserror, a day ago:
Where do you draw the line on this logic? Should we never inline or unroll, and only warn? Should we never reorder, and only warn? The CPU does its own reordering, which can't be manually controlled; should we pick instructions that force ordering and warn about that too? I can understand wanting to emit control flow that exactly matches the source under certain debug flags, to verify correctness in specific scenarios. But I'd want it to be opt-in.