▲ ozgrakkurt | 4 hours ago
Does this kind of thing make a noticeable difference when applied to more complicated async functions? The examples in the blog seem too simple to draw any conclusions.
▲ diondokter | 4 hours ago | parent [-]
Hi, author here. I mention in the blog that I quickly hacked two of the simplest optimizations into the compiler, and that resulted in 2%-5% binary size savings in real embedded (async) codebases. A quick (and probably deeply flawed) synthetic benchmark on desktop showed a 3% perf increase. So yes, it really does matter.

Keep in mind that optimizations stack. Right now we're preventing LLVM from doing its thing, so if we make the futures themselves smaller, LLVM will be able to optimize more. Small changes really compound.
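To make the size question concrete: here's a minimal sketch (not from the blog; the function names are made up) showing that a future's size depends on which locals are live across an `.await`. Locals held across an await point must be stored inside the compiler-generated state machine, while locals dropped before the await only live on the stack during `poll`.

```rust
use std::mem::size_of_val;

async fn noop() {}

// `buf` is live across the await, so it becomes part of the
// generated future's state and inflates its size.
async fn holds_buffer() -> u8 {
    let buf = [0u8; 1024];
    noop().await;
    buf[0]
}

// `buf` is dead before the await, so it does not need to be
// stored in the future at all.
async fn drops_buffer() -> u8 {
    let buf = [0u8; 1024];
    let first = buf[0];
    drop(buf);
    noop().await;
    first
}

fn main() {
    // The futures are just values; we can measure them without polling.
    println!("holds_buffer future: {} bytes", size_of_val(&holds_buffer()));
    println!("drops_buffer future: {} bytes", size_of_val(&drops_buffer()));
}
```

The first future is over a kilobyte while the second is only a few bytes, which is the kind of layout difference the blog's proposed optimizations target.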