WalterBright 12 hours ago

These sorts of things are fun and interesting. Compiler optimizations fall into two categories:

1. organized data flow analysis

2. recognizing a pattern and replacing it with a faster version

The first is very effective over a wide range of programs and styles, and is the bulk of the actual transformations. The second is a never-ending accumulation of patterns, where one reaches diminishing returns fairly quickly.

The example in the linked article is very clever and fun, but not really of much value (I've never written a loop like that in 45 years). As mentioned elsewhere, "Everyone knows the Gauss Summation formula for the sum of n integers, i.e. n(n+1)/2", and since everyone knows it, why not just write that instead of the loop!

Of course one could say that for any pattern, like replacing i*2 with i<<1, but those pattern replacements are very valuable because they are generated by high-level generic coding.

And you could say I'm just being grumpy about this because my optimizer does not do this particular optimization. Fair enough!

gizmo686 10 hours ago | parent | next [-]

It's not clear to me what optimizations the compiler actually did here. Years ago, I worked on a niche compiler and was routinely surprised by what the optimizer was able to figure out, despite having personally written most of the optimization transformations.

steveklabnik 6 hours ago | parent [-]

I can't actually speak to the specifics here, but usually this is "idiom recognition": the compiler just notices that the pattern is there and transforms it directly.

Validark 10 hours ago | parent | prev [-]

It might have more value than you think. If you look up SCEV in LLVM you'll see it's primarily used for analysis and it enables other optimizations outside of math loops that, by themselves, probably don't show up very often.

WalterBright 10 hours ago | parent [-]

You might be right.