Making a NaN on purpose in WebGL

I have a GLSL shader that's supposed to output NaNs when a condition is met. I'm having trouble actually making that happen.

Basically I want to do this:

float result = condition ? NaN : whatever;

But GLSL doesn't seem to have a constant for NaN, so that doesn't compile. How do I make a NaN?


I tried making the constant myself:

float NaN = 0.0/0.0; // doesn't work reliably

That works on one of the machines I tested, but not on another, and it also triggers warnings when compiling the shader.

Given that the obvious computation failed on one of the machines I tried, I get the feeling that doing this correctly is quite tricky and requires knowing a lot of real-world facts about the inconsistencies between various GPUs and drivers.



Solution 1:[1]

Pass it in as a uniform

Instead of trying to make the NaN in GLSL, make it in JavaScript and then pass it in:

shader = ...
    uniform float u_NaN;   // declare the uniform in the shader source
    ...

call the shader with "u_NaN" set to NaN, as in the sketch below
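
A minimal sketch of the JavaScript side, assuming "gl" is a WebGL rendering context and "program" is an already-linked WebGLProgram whose fragment shader declares the uniform above:

// Look up the uniform and feed it JavaScript's NaN before drawing.
const u_NaNLocation = gl.getUniformLocation(program, "u_NaN");
gl.useProgram(program);
gl.uniform1f(u_NaNLocation, NaN);  // JavaScript's NaN passes through as a float NaN
// ...then issue the draw call as usual, e.g. gl.drawArrays(...).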

Solution 2:[2]

Fool the Optimizer

It seems like the issue is the shader compiler performing an incorrect optimization: it constant-folds the NaN-producing expression down to 0.0. I have no idea why it would do that... but it does. Maybe the spec allows it, since GLSL ES isn't required to support NaN at all.

Based on that assumption, I tried making an obfuscated method that produces a NaN:

// Returns NaN, assuming the argument is non-negative:
// -nonneg - 1.0 is always <= -1.0, and sqrt of a negative value yields NaN.
float makeNaN(float nonneg) {
    return sqrt(-nonneg - 1.0);
}

...
    float NaN = makeNaN(some_variable_I_know_isnt_negative);

The idea is that the optimizer isn't clever enough to see through this. And, on the test machine that was failing, this works! I also tried simplifying the function to just return sqrt(-1.0), but that brought back the failure (further reinforcing my belief that the optimizer is at fault).

This is a workaround, not a solution.

  1. A sufficiently clever optimizer could see through the obfuscation and start breaking things again.
  2. I only tested it on a couple of machines, and this is clearly behavior that varies a lot between GPUs and drivers.

Solution 3:[3]

The Unity GLSL compiler converts 0.0f/0.0f to intBitsToFloat(int(0xFFC00000u)). Since intBitsToFloat is only supported from OpenGL ES 3.0 onwards, this is a solution that works in WebGL2 but not in WebGL1.
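
As a sketch, that bit pattern can be wrapped in a helper inside a WebGL2 (GLSL ES 3.00) fragment shader; the makeNaN name and the output variable are just for illustration:

#version 300 es
precision highp float;
out vec4 outColor;

// Reinterpret the quiet-NaN bit pattern 0xFFC00000 as a float.
// Requires GLSL ES 3.00, i.e. a WebGL2 context.
float makeNaN() {
    return intBitsToFloat(int(0xFFC00000u));
}

void main() {
    outColor = vec4(makeNaN());
}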

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Craig Gidney
Solution 2: Craig Gidney
Solution 3: matthias_buehlmann