C++: Bitshift 4 int8_t into a normal integer (32 bit)

I had already asked a question about how to get 4 int8_t values into a 32-bit int, and I was told that I have to cast each int8_t to uint8_t first in order to pack them into a 32-bit integer with bit shifting.

int8_t offsetX = -10;
int8_t offsetY = 120;
int8_t offsetZ = -60;

using U = std::uint8_t;
int toShader  = (U(offsetX) << 24) | (U(offsetY) << 16) | (U(offsetZ) << 8) | (0 << 0);

std::cout << (int)(toShader >> 24) << " "<< (int)(toShader >> 16) << " " << (int)(toShader >> 8) << std::endl;

My output is

-10 -2440 -624444

It's not what I expected, of course. Does anyone have a solution?

In the shader I want to unpack the int8_t values later, and that is only possible via a 32-bit integer because GLSL does not have smaller integer types.

 int offsetX = data[gl_InstanceID * 3 + 2] >> 24;
 int offsetY = data[gl_InstanceID * 3 + 2] >> 16;
 int offsetZ = data[gl_InstanceID * 3 + 2] >> 8;

What is written in the square brackets does not matter; this is about the correct shifting of the bits, or the casting after the bracket.



Solution 1:[1]

If any of the offsets is negative, then the shift results in undefined behaviour.

Solution: Convert the offsets to an unsigned type first.

However, this brings another potential problem: if you convert to a wide unsigned type, negative numbers will have set bits in the most significant bytes (because of sign extension), and an OR with those bits will always produce 1 there regardless of offsetX and offsetY. One solution is to convert into a small unsigned type (std::uint8_t); another is to mask the unused bytes. The former is probably simpler (a sketch of the masking variant follows the code below):

using U = std::uint8_t;
int third  = U(offsetX) << 24u
           | U(offsetY) << 16u
           | U(offsetZ) << 8u
           | 0u         << 0u;
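For completeness, a minimal self-contained sketch of the masking variant mentioned above, together with a round trip that unpacks the bytes again (names and values taken from the question; the unpack isolates one byte and converts through int8_t so the sign comes back):

#include <cstdint>
#include <iostream>

int main() {
    std::int8_t offsetX = -10, offsetY = 120, offsetZ = -60;

    // Masking variant: widen to unsigned, keep only the low byte,
    // then shift it into place, so no sign bits can leak.
    std::uint32_t packed = ((std::uint32_t(offsetX) & 0xFFu) << 24)
                         | ((std::uint32_t(offsetY) & 0xFFu) << 16)
                         | ((std::uint32_t(offsetZ) & 0xFFu) << 8);

    // Unpack: isolate one byte, then convert through int8_t to
    // restore the sign (two's complement, guaranteed since C++20).
    auto unpack = [packed](int shift) {
        return int(std::int8_t((packed >> shift) & 0xFFu));
    };

    std::cout << unpack(24) << ' ' << unpack(16) << ' '
              << unpack(8) << '\n';  // prints: -10 120 -60
}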

Solution 2:[2]

I think you're forgetting to mask the bits that you care about before shifting them.

Perhaps this is what you're looking for:

int offsetX = (data[gl_InstanceID * 3 + 2] & 0xFF000000) >> 24;
int offsetY = (data[gl_InstanceID * 3 + 2] & 0x00FF0000) >> 16;
int offsetZ = (data[gl_InstanceID * 3 + 2] & 0x0000FF00) >> 8;
if (offsetX & 0x80) offsetX |= 0xFFFFFF00;
if (offsetY & 0x80) offsetY |= 0xFFFFFF00;
if (offsetZ & 0x80) offsetZ |= 0xFFFFFF00;

Without the bit mask, the X part will end up in offsetY, and the X and Y parts in offsetZ.
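A related trick: both GLSL and C++ (guaranteed since C++20, implementation-defined but universally arithmetic before that) fill the vacated bits with the sign bit when right-shifting a signed integer, so the mask-and-fix-up pairs above can be folded into a shift-left/shift-right pair that sign-extends for free. A C++ sketch of the same extraction (the left shift goes through uint32_t to stay well-defined):

#include <cstdint>
#include <iostream>

int main() {
    // 0xF6, 0x78, 0xC4, 0x00 = -10, 120, -60, 0 packed as in the question.
    std::int32_t packed = std::int32_t(0xF678C400u);

    // Move the wanted byte into the top byte, then shift back down;
    // the arithmetic right shift drags the sign bit along.
    std::int32_t x = std::int32_t(std::uint32_t(packed) << 0)  >> 24;
    std::int32_t y = std::int32_t(std::uint32_t(packed) << 8)  >> 24;
    std::int32_t z = std::int32_t(std::uint32_t(packed) << 16) >> 24;

    std::cout << x << ' ' << y << ' ' << z << '\n';  // prints: -10 120 -60
}

In GLSL the equivalent is simply int offsetY = (data[...] << 8) >> 24;, since GLSL defines both shifts on signed int.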

Solution 3:[3]

On the CPU side you can use a union to avoid bit shifts, bit masking, and branches ...

int8_t x,y,z,w; // your 8bit ints
int32_t i;      // your 32bit int

union my_union  // just helper union for the casting
 {
 int8_t i8[4];
 int32_t i32; 
 } a;

// 4x8bit -> 32bit
a.i8[0]=x;
a.i8[1]=y;
a.i8[2]=z;
a.i8[3]=w;
i=a.i32;

// 32bit -> 4x8bit
a.i32=i;
x=a.i8[0];
y=a.i8[1];
z=a.i8[2];
w=a.i8[3];

If you do not like unions, the same can be done with pointers ...

Beware: on the GLSL side this is not possible (neither unions nor pointers), so you have to use bit shifts and masks as in the other answers ...
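One caveat on the C++ side: reading a union member other than the one last written is formally undefined behaviour in C++ (it is allowed in C, and major compilers tolerate it as an extension). std::memcpy expresses the same byte reinterpretation portably and, like the union, the result depends on the machine's byte order; a minimal sketch:

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    std::int8_t v[4] = { -10, 120, -60, 0 };  // your 8bit ints
    std::int32_t i;                           // your 32bit int

    // 4x8bit -> 32bit: copy the raw bytes; which byte lands where
    // depends on endianness, exactly like the union version.
    std::memcpy(&i, v, sizeof i);

    // 32bit -> 4x8bit
    std::int8_t back[4];
    std::memcpy(back, &i, sizeof back);

    std::cout << int(back[0]) << ' ' << int(back[1]) << ' '
              << int(back[2]) << ' ' << int(back[3]) << '\n';  // -10 120 -60 0
}

Since C++20 there is also std::bit_cast, which does the same reinterpretation in a single expression.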

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1:
Solution 2:
Solution 3: Spektre