Change an integral value's data type while keeping it normalized to the maximum value of said data type in C#

I want to convert a value of, say, type int to type short, while keeping the value "normalized" to the maximum of the target type - that is, int.MaxValue would convert to short.MaxValue, and vice versa.

Here's an example using floating-point math to demonstrate:

public static short Rescale(int value)
{
    float normalized = (float)value / int.MaxValue;  // normalize the value to -1.0 .. 1.0
    float rescaled = normalized * short.MaxValue;    // scale back up to the short range
    return (short)rescaled;
}

While this works, floating-point math seems inefficient for what is really a binary operation, and it can surely be improved, as we're dealing with binary data here. I tried bit-shifting, but to no avail.

Both signed and unsigned values will be processed - that isn't really an issue for the floating-point solution, but it makes bit-shifting and other bit manipulation much more difficult.

This code will be used in a performance-heavy context - it will be called 512 times every ~20 milliseconds - so performance is pretty important here.

How can I do this with bit manipulation (or plain old integer algebra, if bit manipulation isn't necessary) and avoid floating-point math when operating on integer values?
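For reference, the floating-point arithmetic above can be reproduced exactly with 64-bit integer math: widening to long before multiplying keeps the intermediate product from overflowing. This is a sketch of one such integer-only equivalent (the method name is mine, not from the question):

```csharp
using System;

class Sketch
{
    // Exact integer equivalent of the floating-point version:
    // widen to long so (value * short.MaxValue) cannot overflow an int.
    public static short RescaleExact(int value)
    {
        return (short)((long)value * short.MaxValue / int.MaxValue);
    }

    static void Main()
    {
        Console.WriteLine(RescaleExact(int.MaxValue)); // 32767
        Console.WriteLine(RescaleExact(0));            // 0
        Console.WriteLine(RescaleExact(int.MinValue)); // -32767 (division truncates toward zero)
    }
}
```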



Solution 1:[1]

You should use the shift operator - it is very fast. int is 32 bits and short is 16, so shift right by 16 bits to scale your int down to a short:

int x = 208908324;
// 32 bits vs. 16 bits
short k = (short)(x >> 16);

Just reverse the process (shift left by 16) to scale back up. Obviously, the lower 16 bits will be filled with zeros.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1: Auction God