What does -fwrapv do?

Can anyone provide some code examples that act differently when compiled with -fwrapv vs without?

The gcc documentation says that -fwrapv instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using two's-complement representation.

But whenever I try to overflow, the result is the same with or without -fwrapv.



Solution 1:[1]

Think of this function:

int f(int i) {
    return i+1 > i;
}

Mathematically speaking, i+1 should always be greater than i for any integer i. However, for a 32-bit int there is one value of i that makes that statement false: 2147483647 (i.e. 0x7FFFFFFF, i.e. INT_MAX). Adding one to that number causes an overflow, and the new value, according to the two's-complement representation, wraps around to -2147483648. Hence i+1 > i becomes -2147483648 > 2147483647, which is false.

When you compile without -fwrapv, the compiler assumes that signed overflow simply never happens, so it is free to optimize that function to always return 1 (ignoring the overflow case).

When you compile with -fwrapv, the function is not optimized that way; it keeps the logic of adding 1 and comparing the two values, because now the overflow is 'wrapping' (i.e. the overflowed number wraps around according to the two's-complement representation).

The difference is easy to see in the generated assembly: without -fwrapv, the function always returns 1 (true).

Solution 2:[2]

for (int i=0; i>=0; i++)
    printf("%d\n", i);

With -fwrapv, the loop will terminate after INT_MAX + 1 iterations, once i wraps to INT_MIN. Without it, the loop could do anything, since undefined behavior is unconditionally invoked by the evaluation of i++ when i has the value INT_MAX. In practice, an optimizing compiler will likely omit the loop condition and produce an infinite loop.

Solution 3:[3]

The ISO C working group, WG14, exists to establish the conventions that all conforming C compilers must adhere to. Some compilers may (and do) also implement extensions. According to the ISO C standard, behaviour beyond its requirements falls into one of the following categories:

  • implementation-defined, meaning the compiler devs must make a choice, document that choice and maintain it in order to be considered a compliant C implementation.
  • undefined, for which C11 §3.4.3 establishes a definition and gives an extremely familiar example, which is vastly superior to anything I could write:

    1 undefined behavior: behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements

    2 NOTE Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

    3 EXAMPLE An example of undefined behavior is the behavior on integer overflow.


There's also an unspecified behaviour, though I'll leave it as an exercise to you to read about that in the standard.

Be careful where you tread. This is one of the few generally accepted undefined behaviours where it's typically expected that LIA-style wrapping will occur upon a two's-complement representation without a trap representation. It's important to realise that there are implementations that use a trap representation corresponding to the bit pattern containing all ones.

In summary, -fwrapv and -ftrapv exist to pass a choice on to you, a choice which the developers would otherwise have had to make on your behalf, and that choice is what happens when signed integer overflow occurs. Of course, they must select a default, which in your case appears to correspond to -fwrapv rather than -ftrapv. That needn't be the case, and it needn't be the case that these compiler options change anything whatsoever.

Solution 4:[4]

When using a new version of the gcc compiler I stumbled into a problem that made the purpose of the -fwrapv flag clear.

char Data[8]={0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF};
int a=256*(256*(256*Data[3]+Data[2])+Data[1])+Data[0];

The result will be -1 whether or not you use the -fwrapv flag. But if you do:

if(a<0)
  printf("the value of a %d is negative\n",a);
else
  printf("the value of a %d is positive\n",a);

It will print "the value of a -1 is positive", because the optimiser removes the negative branch: on that platform plain char is unsigned, so every Data[i] promotes to a non-negative int, and the compiler assumes that adding and multiplying non-negative ints can never wrap to a negative result.
When you compile with the -fwrapv flag it prints the correct answer.
If you use this:

int a=(Data[3]<<24)+(Data[2]<<16)+(Data[1]<<8)+Data[0];

The code works as expected with or without the flag. (Strictly speaking, shifting 0xFF into the sign bit with Data[3]<<24 is also undefined by the standard, but gcc documents signed left shift as defined, so it is not optimized on a no-overflow assumption.)

Solution 5:[5]

-fwrapv tells the compiler that overflow of signed integer arithmetic must be treated as well-defined behavior, even though it is undefined in the C standard.

Nearly all CPU architectures in widespread use today use the two's-complement representation of signed integers, and use the same processor instructions for signed and unsigned addition, subtraction and non-widening multiplication. So at the CPU architecture level, both signed and unsigned arithmetic wrap around modulo 2^n.

The C standard says that overflow of signed integer arithmetic is undefined behavior. Undefined behavior means that "anything can happen". Anything includes "what you expected to happen", but it also includes "the rest of your program will behave in ways that are not self-consistent".

In particular, when undefined behavior is invoked on modern compilers, the optimiser's assumptions about the value in a variable can become out of step with the value actually stored in that variable.

Therefore, if you allow signed arithmetic overflow to happen in your programs and do not use the -fwrapv option, things are likely to look ok at first: your simple test programs will produce the results you expect.

But then things can go horribly wrong. In particular, a check on whether the value of a variable is nonnegative can be optimised away, because the compiler assumes the variable must be nonnegative when in fact it has overflowed and become negative.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2 R.. GitHub STOP HELPING ICE
Solution 3
Solution 4 Peter v.d. Vos
Solution 5 plugwash