| ▲ | mayoff 2 days ago |
| In Swift (Apple’s C++ successor), the normal operators (`+`, `-`, `*`) trap on overflow for integer types. If you want two’s complement wrapping, you can use `&+`, `&-`, and `&*`. Given that Apple has been making its own CPU cores for years now, I suspect overflow checking on Apple CPUs is virtually free (aside from code size). |
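|
| A minimal sketch of the difference (the `demo` function here is ours for illustration; the operators and `addingReportingOverflow` are standard Swift):
|
|     // `+` traps on overflow; `&+` wraps with two's complement semantics.
|     func demo(_ a: Int8, _ b: Int8) -> Int8 {
|         let wrapped = a &+ b     // 127 &+ 1 == -128, never traps
|         let (checked, didOverflow) = a.addingReportingOverflow(b)
|         print("wrapped:", wrapped, "checked:", checked, "overflowed:", didOverflow)
|         return a + b             // traps at runtime if the result overflows Int8
|     }
|
|     _ = demo(127, 1)   // prints "wrapped: -128 checked: -128 overflowed: true", then traps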
|
| ▲ | qayxc 2 days ago | parent | next [-] |
| > Given that Apple has been making its own CPU cores for years now, I suspect overflow checking on Apple CPUs is virtually free (aside from code size).
|
| Never make guesses based on a particular programming language. Apple's own C documentation (https://developer.apple.com/documentation/xcode/integer-over...) states that "Overflows result in undefined behavior." and that enabling wrapping behaviour "may adversely impact performance", indicating that overflow detection is in fact not "virtually free".
| |
| ▲ | debugnik 2 days ago | parent [-] |
| "Enabling wrapping behaviour" for signed integers disallows a lot of optimizations based on signed overflow being undefined behaviour, which is a matter of language and compiler design. This says nothing about the cost of checked arithmetic itself on the CPU.
| ▲ | qayxc 2 days ago | parent [-] |
| It does, though. UB and the associated optimisations wouldn't be an issue if defined behaviour had no impact on performance. If the cost were zero or negligible, the compiler wouldn't need to care, and warnings like this wouldn't need to be stated explicitly.
| ▲ | debugnik 2 days ago | parent [-] |
| And yet Swift doesn't rely on these optimizations, preferring to trap instead. Again, the guess above was about the CPU; we're conflating it with language-specific UB optimisations.
|
| ▲ | ozgrakkurt 2 days ago | parent | prev | next [-] |
| This approach isn’t good imo. Zig also has a similar approach. It is best to have ergonomic checked versions of the arithmetic functions and use them wherever possible, falling back to the debug-only checked versions elsewhere. In my experience, the performance of checked arithmetic basically never matters around things like allocation.
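|
| One possible shape for such an ergonomic checked layer, sketched in Swift (`checkedAdd` and `OverflowError` are hypothetical names of ours; `addingReportingOverflow` is the real standard-library primitive):
|
|     struct OverflowError: Error {}
|
|     // Hypothetical helper: checked addition as a throwing function,
|     // so callers handle overflow explicitly instead of trapping.
|     func checkedAdd<T: FixedWidthInteger>(_ a: T, _ b: T) throws -> T {
|         let (value, overflow) = a.addingReportingOverflow(b)
|         if overflow { throw OverflowError() }
|         return value
|     }
|
|     do {
|         let total = try checkedAdd(Int32.max, 1)
|         print(total)
|     } catch {
|         print("overflow caught, not trapped")   // this path runs for Int32.max + 1
|     }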
|
| ▲ | saagarjha 2 days ago | parent | prev [-] |
| Code size (and branch table entries) isn’t free, of course. The other thing to note is that trapping operators often need to trap precisely, which can lead to missed optimizations.
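|
| A sketch of the kind of loop where this bites, in Swift (function names ours): the trapping `+` must fault at exactly the element that overflows, so the compiler can't freely reassociate the reduction, while the wrapping version carries no such constraint.
|
|     // Trapping sum: must fault at the exact iteration that overflows,
|     // in order, which constrains reassociation and vectorization.
|     func trappingSum(_ xs: [Int32]) -> Int32 {
|         var total: Int32 = 0
|         for x in xs { total += x }
|         return total
|     }
|
|     // Wrapping sum: no precise trap point to preserve, so the compiler
|     // is free to reorder and vectorize the reduction.
|     func wrappingSum(_ xs: [Int32]) -> Int32 {
|         var total: Int32 = 0
|         for x in xs { total = total &+ x }
|         return total
|     }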
| |
| ▲ | Someone 2 days ago | parent [-] |
| One example of such an optimization is that overflow checking can prevent vectorization of code. See for example this post: https://lemire.me/blog/2016/12/06/dont-assume-that-safety-co.... It is ancient, but I don’t see a reason why it would have become outdated.
| ▲ | msichert 2 days ago | parent [-] |
| Vector instructions usually don’t have overflow flags, so a compiler can’t easily vectorize loops containing overflow checks. However, detecting overflows in integer operations requires only a bit of bitwise arithmetic. In my experiments, this led to an overhead of only 7% for vectorized additions with overflow checks: https://cedardb.com/blog/vectorized_overflows/
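|
| For reference, one standard formulation of such a bitwise check, sketched in Swift (names ours; not necessarily the exact formulation the post uses): for signed addition, overflow occurred exactly when both operands share a sign and the wrapped result's sign differs, a branch-free test each SIMD lane can evaluate without a flags register.
|
|     // Branch-free signed-add overflow check: overflow iff a and b have
|     // the same sign and the wrapped result's sign differs from theirs.
|     func addReportingOverflowBitwise(_ a: Int64, _ b: Int64) -> (result: Int64, overflow: Bool) {
|         let r = a &+ b
|         let overflow = (~(a ^ b) & (a ^ r)) < 0   // sign bit set => overflow
|         return (r, overflow)
|     }
|
|     print(addReportingOverflowBitwise(Int64.max, 1))   // (-9223372036854775808, true)
|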
| ▲ | ack_complete 2 days ago | parent [-] |
| That’s with a simple data operation, and using a recent x86 vector ISA (AVX-512) that is only available on some systems, notably excluding any current Intel desktop CPU. The real killer isn’t the data operations, though; it’s when the overflow checks interfere with converting the loop logic or data addressing to vectorizable form. Indexing with a 32-bit signed int vs. an unsigned int on a 64-bit platform in C is a classic case: with unsigned, the compiler cannot assume that addressing offsets don’t wrap, which then prevents coalescing data accesses into vector loads and stores.