Algorithm that produces a Scale2x-like result but is robust with floating-point values
Scale2x is a simple algorithm that can fill diagonals so they appear "thick" after upscaling.
This algorithm requires checking for equality:
// Read the centre pixel and its four neighbours (a01 above, a10 left, a12 right, a21 below).
auto const a01 = src(l, k - 1).value();
auto const a10 = src(l - 1, k).value();
auto const a11 = src(l, k).value();
auto const a12 = src(l + 1, k).value();
auto const a21 = src(l, k + 1).value();
// Each pixel of the 2x2 output block copies an adjacent neighbour when the two
// neighbours next to that corner match and the opposite neighbours differ;
// otherwise it keeps the centre value a11.
ret(2*l, 2*k).value() = a10 == a01 && a01 != a21 && a10 != a12 ? a10 : a11;
ret(2*l + 1, 2*k).value() = a01 == a12 && a01 != a21 && a10 != a12 ? a12 : a11;
ret(2*l, 2*k + 1).value() = a10 == a21 && a01 != a21 && a10 != a12 ? a10 : a11;
ret(2*l + 1, 2*k + 1).value() = a21 == a12 && a01 != a21 && a10 != a12 ? a12 : a11;
These equality comparisons may not be robust if the values are floats. Besides quantizing to integers (scaling the values by a factor of 2^k and rounding) or introducing tolerances, is there a way to achieve similar output more robustly, i.e. with an algorithm that does not rely on exact equality?
Solution 1:[1]
I think you touched on the only workable answer in your question: use a tolerance. It's not hard to do if you turn it into a function.
#include <cmath>

// Two values count as "equal" when they differ by no more than a fixed tolerance.
bool eq(double d1, double d2)
{
    const double tolerance = 8.0;
    return std::abs(d1 - d2) <= tolerance;
}
ret(2*l, 2*k).value() = eq(a10, a01) && !eq(a01, a21) && !eq(a10, a12) ? a10 : a11;
Choose the tolerance based on the smallest difference at which two values should still count as distinct. I chose 8 for the common 0-255 scale.
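As a concrete illustration of this approach, below is a minimal, self-contained sketch of a Scale2x-like upscaler that uses the tolerance comparison throughout. The `Image` struct, the function name `scale2x_tolerant`, the border handling (edge replication), and the tolerance parameter are assumptions made for this sketch, not part of the original question or answer.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal grayscale image type, used only to make the sketch self-contained.
struct Image {
    int w = 0, h = 0;
    std::vector<double> px;  // row-major: h rows of w pixels
    Image(int w_, int h_) : w(w_), h(h_), px(static_cast<std::size_t>(w_) * h_) {}
    double& at(int x, int y)       { return px[static_cast<std::size_t>(y) * w + x]; }
    double  at(int x, int y) const { return px[static_cast<std::size_t>(y) * w + x]; }
};

// Tolerance-based "equality", as suggested in the answer.
inline bool eq(double d1, double d2, double tolerance)
{
    return std::abs(d1 - d2) <= tolerance;
}

// Scale2x-like upscaling that replaces exact equality with eq().
// tolerance = 8.0 matches the answer's suggestion for a 0-255 value range.
Image scale2x_tolerant(const Image& src, double tolerance = 8.0)
{
    Image ret(src.w * 2, src.h * 2);
    for (int y = 0; y < src.h; ++y) {
        for (int x = 0; x < src.w; ++x) {
            // Neighbourhood of the centre pixel, clamped at the image borders.
            const double a11 = src.at(x, y);
            const double a01 = src.at(x, y > 0 ? y - 1 : y);          // above
            const double a21 = src.at(x, y < src.h - 1 ? y + 1 : y);  // below
            const double a10 = src.at(x > 0 ? x - 1 : x, y);          // left
            const double a12 = src.at(x < src.w - 1 ? x + 1 : x, y);  // right

            const bool vert_diff  = !eq(a01, a21, tolerance);  // top and bottom differ
            const bool horiz_diff = !eq(a10, a12, tolerance);  // left and right differ

            ret.at(2 * x,     2 * y)     = eq(a10, a01, tolerance) && vert_diff && horiz_diff ? a10 : a11;
            ret.at(2 * x + 1, 2 * y)     = eq(a01, a12, tolerance) && vert_diff && horiz_diff ? a12 : a11;
            ret.at(2 * x,     2 * y + 1) = eq(a10, a21, tolerance) && vert_diff && horiz_diff ? a10 : a11;
            ret.at(2 * x + 1, 2 * y + 1) = eq(a21, a12, tolerance) && vert_diff && horiz_diff ? a12 : a11;
        }
    }
    return ret;
}
```

Passing the tolerance as a parameter, rather than hard-coding it inside `eq` as in the answer above, makes it straightforward to adapt the threshold to other value ranges, e.g. roughly 8.0/255.0 for images normalised to [0, 1].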
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Mark Ransom |

