Computing the distance at which one cell of an axis-aligned grid projects to one pixel on screen

Given an axis-aligned uniform grid in the X/Y plane (world space) of a 3D scene and a virtual camera looking at this grid from some position and direction, how can I calculate the distance I need to move the camera along its line of sight so that one grid cell projects onto exactly one pixel on the screen (fills one screen pixel)? The camera's projection parameters (field of view, near and far clip planes) and the width and height of the screen in pixels are known.
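For reference, one common way to derive such a distance (assuming a symmetric perspective projection and measuring the pixel footprint at the screen centre): the world-space height covered by the viewport at distance d is 2·d·tan(fovY/2), so a single pixel covers 2·d·tan(fovY/2)/screenHeight. Setting that equal to the grid cell size and solving for d gives the base distance. The sketch below illustrates this; the names `cellSize`, `fovYRadians`, and `screenHeightPixels` are illustrative, not from the question.

```cpp
#include <cmath>

// Sketch: distance at which one grid cell of size `cellSize` covers roughly
// one pixel at the centre of the screen, assuming a symmetric perspective
// projection with vertical field of view `fovYRadians` and a viewport that is
// `screenHeightPixels` pixels tall. Off-centre pixels cover slightly more
// world space, so this is an approximation.
double baseDistance(double cellSize, double fovYRadians, int screenHeightPixels)
{
    // Frustum height at distance d is 2 * d * tan(fovY / 2), so one pixel
    // covers 2 * d * tan(fovY / 2) / screenHeight. Setting this equal to
    // cellSize and solving for d:
    return cellSize * screenHeightPixels / (2.0 * std::tan(fovYRadians / 2.0));
}
```

Note that under this derivation the near and far clip planes only affect depth precision, not the projected size of a cell, so they drop out of the formula.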

This base distance is to be used to determine the level of detail for rendering/raycasting a heightmap (a uniform grid of elevation values). The algorithm I am trying to implement is described in the paper "Maximum Mipmaps for Fast, Accurate, and Scalable Dynamic Height Field Rendering" by Tevs et al., 2008 (see Sect. 3.3). During raycasting, the distance between the camera and the ray's current intersection point is compared with the base distance. If the current distance is smaller than the base distance, a higher level of detail is rendered (lower mipmap level); if it is greater, a lower level of detail is rendered (higher mipmap level), as in the sketch below.
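One way to turn that comparison into a concrete mipmap level is to take the log2 ratio of the current ray distance to the base distance, clamped to the available levels. This is only a hedged sketch of that idea, not the paper's exact selection rule; `maxLevel` and the function name are assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Sketch: choose a mipmap level from the current camera-to-intersection
// distance. Distances at or below `baseDist` map to level 0 (full detail);
// each doubling of the distance beyond `baseDist` moves one level coarser.
// `maxLevel` is the coarsest mip level available.
int mipLevelForDistance(double currentDist, double baseDist, int maxLevel)
{
    if (currentDist <= baseDist)
        return 0;                      // finer than one cell per pixel
    int level = static_cast<int>(std::floor(std::log2(currentDist / baseDist)));
    return std::min(level, maxLevel);  // clamp to the coarsest level
}
```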


