norir 12 hours ago
This is true, which means that a language has to be designed from the ground up to deal with these problems, or there will always be inscrutable bugs caused by misuse of arithmetic results. A simple example: in a C-like language, the following function would not compile:
but this would:
Assuming 32-bit unsigned and int, the type of c should be computed as the range [-0xffffffff, 0xffffffff], which does not fit in int's range [-0x80000000, 0x7fffffff]. Subtle things like this are why I think it is generally a mistake to type-annotate the result of a numerical calculation when the compiler can compute it precisely for you.
pron 12 hours ago (parent)
First, your code treats unsigned types as representing non-negative values, but that is not their intent in C/C++: they represent modular-arithmetic types. Second, it's not as simple as you present. What is the type of c? Obviously it needs to be signed so that you can compare it to zero, but how many bits does it have? What if a and b are 64-bit? What if they're 128-bit? You could do it without storing the value, by instead carrying a proof that a >= b, but that is not so simple either (the compiler can add runtime checks, but languages like C don't like invisible operations).