Sometimes we need to know, in a fragment program, a pixel's position in eye space. This is usually needed in postprocessing operations, where we typically render an eye-facing quad filling the whole screen, executing a specific fragment program along with information about the scene (usually textures containing surface normals, depth, materials, etc.) rendered in a previous pass.
During this postprocessing step, each fragment of the quad corresponds to a small area on the surface of the previously rendered scene, but at this point we only know its position within the eye-facing quad being rendered.
The question is: what was its exact position in the scene, in eye space?
First rendering pass
In this pass, the original scene's geometry is rendered into an FBO (frame buffer object), and the depth of each fragment is stored into an output texture attached to it. In the following example, we can see how this depth is stored in the output texture's red channel:
// Storing Z in non linear range [0,1]
gl_FragColor.r = gl_FragCoord.z;
It is important that the output texture is configured properly, with a precision of at least 16 bits per color channel: 8 bits per channel are not enough to compute most depth-based postprocessing effects correctly.
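To see why 8 bits are not enough, we can quantize a depth value to 8 and to 16 bits on the CPU and compare the eye-space error after inverting it with the same formula the shader will use below. A small numerical sketch in Python (the near/far values are hypothetical):

```python
# Hypothetical frustum planes, just for illustration.
n, f = 0.1, 100.0

def eye_z_from_depth(zpixel, n, f):
    """Invert a [0,1] depth-buffer value back to eye-space Z
    (same formula as the shader below)."""
    zndc = zpixel * 2.0 - 1.0
    return 2.0 * f * n / (zndc * (f - n) - (f + n))

# Depth of a point 10 units in front of the camera, as the depth buffer stores it:
# third row of the OpenGL projection after the perspective divide, mapped to [0,1].
zeye = -10.0
zndc = (f + n) / (f - n) + 2.0 * f * n / ((f - n) * zeye)
zpixel = zndc * 0.5 + 0.5

# Quantize to 8 and to 16 bits per channel and compare reconstruction error.
q8 = round(zpixel * 255.0) / 255.0
q16 = round(zpixel * 65535.0) / 65535.0
err8 = abs(eye_z_from_depth(q8, n, f) - zeye)
err16 = abs(eye_z_from_depth(q16, n, f) - zeye)
print(err8, err16)  # err8 is dramatically larger than err16
```

With these values the 8-bit reconstruction is off by more than a unit of eye-space distance, while the 16-bit one stays within a few millimeters, which is why a 16-bit (or float) texture format is needed.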
Second rendering pass
Here, an eye-facing quad filling the whole screen is rendered, with texture coordinates properly defined at its vertices. The previous pass' output texture is now used as input (inputTexture). We are also going to need the values defining the view frustum: znear, zfar, left, right, bottom and top.
// Input texture
uniform sampler2D inputTexture;
// Frustum definition values
uniform float n; // znear
uniform float f; // zfar
uniform float l; // left
uniform float r; // right
uniform float b; // bottom
uniform float t; // top
// Quad/texture size in pixels
uniform float textureWidth;
uniform float textureHeight;
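If the projection was originally set up in gluPerspective style (vertical field of view plus aspect ratio) rather than with explicit frustum planes, the l, r, b, t values the shader needs can be derived on the CPU. A small sketch in Python, with hypothetical fovy/aspect values:

```python
import math

def frustum_from_perspective(fovy_deg, aspect, n, f):
    # Symmetric frustum equivalent to gluPerspective(fovy, aspect, n, f).
    t = n * math.tan(math.radians(fovy_deg) / 2.0)
    b = -t
    r = t * aspect
    l = -r
    return l, r, b, t, n, f

l, r, b, t, n, f = frustum_from_perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
```

These six floats are then uploaded as the uniforms listed above. For a symmetric frustum like this one, r+l and t+b are zero and the unprojection formulae below simplify accordingly.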
We can calculate the Z coordinate in eye space by taking the depth value rendered in the first rendering pass, converting it into NDC (normalized device coordinates) and applying the following formula (note that, with the usual OpenGL conventions, points in front of the camera have negative eye-space Z):
// z in non linear range [0,1], fetched from the first pass' texture
float zpixel = texture2D(inputTexture, gl_TexCoord[0].st).r;
// conversion into NDC [-1,1]
float zndc = zpixel * 2.0 - 1.0;
// conversion into eye space
float zeye = 2.0*f*n / (zndc*(f-n)-(f+n));
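As a sanity check, the forward mapping (eye space → NDC → depth in [0,1]) followed by the formula above should return the original Z. A quick CPU-side check of the same arithmetic in Python (the near/far values are arbitrary):

```python
n, f = 1.0, 50.0  # arbitrary near/far planes

def depth_to_eye_z(zpixel, n, f):
    zndc = zpixel * 2.0 - 1.0              # [0,1] -> NDC [-1,1]
    return 2.0 * f * n / (zndc * (f - n) - (f + n))

def eye_z_to_depth(zeye, n, f):
    # Third row of the OpenGL perspective projection, after the
    # perspective divide, mapped to the [0,1] depth range.
    zndc = (f + n) / (f - n) + 2.0 * f * n / ((f - n) * zeye)
    return zndc * 0.5 + 0.5

zeye = -10.0
recovered = depth_to_eye_z(eye_z_to_depth(zeye, n, f), n, f)
```

The round trip recovers the original eye-space Z (up to floating-point error) for any Z between -n and -f.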
X and Y reconstruction
First of all we need to know X and Y coordinates in NDC space:
// Converting from pixel coordinates to NDC
float xndc = gl_FragCoord.x/textureWidth * 2.0 - 1.0;
float yndc = gl_FragCoord.y/textureHeight * 2.0 - 1.0;
Once we have all these values, we can compute X and Y in eye space using the following formulae (we are simply unprojecting X and Y from NDC back to eye space):
float xeye = -zeye*(xndc*(r-l)+(r+l))/(2.0*n);
float yeye = -zeye*(yndc*(t-b)+(t+b))/(2.0*n);
vec3 eyecoords = vec3(xeye, yeye, zeye);
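The whole reconstruction can be verified numerically: projecting an eye-space point through a glFrustum-style projection and then applying the three formulae above should give the original point back. A CPU-side round-trip check in Python, using a deliberately asymmetric (hypothetical) frustum so the r+l and t+b terms are exercised:

```python
n, f = 1.0, 100.0
l, r, b, t = -0.8, 1.2, -0.5, 0.5  # deliberately asymmetric frustum

def project(xeye, yeye, zeye):
    # glFrustum-style projection followed by the perspective divide (w = -zeye).
    xndc = (2.0 * n * xeye + (r + l) * zeye) / ((r - l) * -zeye)
    yndc = (2.0 * n * yeye + (t + b) * zeye) / ((t - b) * -zeye)
    zndc = (f + n) / (f - n) + 2.0 * f * n / ((f - n) * zeye)
    return xndc, yndc, zndc

def unproject(xndc, yndc, zndc):
    # Same formulae as the fragment shader above.
    zeye = 2.0 * f * n / (zndc * (f - n) - (f + n))
    xeye = -zeye * (xndc * (r - l) + (r + l)) / (2.0 * n)
    yeye = -zeye * (yndc * (t - b) + (t + b)) / (2.0 * n)
    return xeye, yeye, zeye

p = (0.3, -0.2, -7.5)
q = unproject(*project(*p))  # recovers p up to floating-point error
```

If the round trip holds on the CPU, any remaining error in the shader comes from the depth texture's precision, not from the formulae.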