## Intro

Parallax occlusion mapping is a popular technique used in many real-time applications and modern video games, such as Grand Theft Auto V, The Elder Scrolls V: Skyrim and Crysis, where the goal is to make realistic-looking textures. Parallax occlusion mapping encodes surface information in textures to reduce the complexity of the geometric model while still giving a fairly realistic look. To manipulate the surface details, a height map is used, which represents the displacement of the texture or surface. A height map is often based on the original texture but in grayscale, where the difference between black and white represents the material's displacement. More about height maps later. The texture's details are reconstructed in the pixel shader, using the height map, when the model is rendered, creating an illusion of displacement (in most cases depth).

*A good example of a parallax occlusion shader applied in GTA V. As you can see, the stone bricks really seem to have depth, but it's really just a 2D texture.*

Keep in mind, though, that you need a bunch of other techniques to generate something like that, anti-aliasing for example. That said, this would look totally boring without the parallax occlusion applied.

## The algorithm

We know that the idea is, with the help of a height map, to generate volumetric shapes for each pixel of the rendered surface. This sounds pretty advanced, but with some basic linear algebra it's not that hard. Before we get into the algorithm itself, I'll show an illustration of what we want to achieve and how our height map works.

As I stated before, the height map is based on the original texture, but converted to grayscale. If we say that the scale ranges from 0.0 to 1.0, where 0.0 is black and 1.0 is white, we can describe the height at every point in the texture. The more precise the height map is, the more precise the illusion of depth in the surface will be.
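Since the height map is just a grid of grayscale values, sampling it can be sketched in plain Python (standing in for the GPU's texture sampler). The tiny map and the nearest-neighbour lookup below are made up for illustration; a real shader would normally sample with bilinear filtering:

```python
def sample_height(height_map, u, v):
    """Return the grayscale height at texture coordinates (u, v) in [0, 1]."""
    rows = len(height_map)
    cols = len(height_map[0])
    # Clamp to the valid range, then map to the nearest texel.
    x = min(cols - 1, max(0, int(u * cols)))
    y = min(rows - 1, max(0, int(v * rows)))
    return height_map[y][x]

# 0.0 = black (lowest point), 1.0 = white (highest point).
height_map = [
    [0.0, 0.2, 0.4],
    [0.2, 0.6, 0.8],
    [0.4, 0.8, 1.0],
]

print(sample_height(height_map, 0.5, 0.5))  # centre texel -> 0.6
```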

*Example of a well done height map for some brick wall texture.*

Back to the algorithm - I hope the (admittedly poorly made) illustration that follows is enough to convey the basic idea of how parallax occlusion mapping works.

What you see here is the actual ray trace in the pixel shader that we want to implement. The thin lines under the flat surface represent the height map values, and the red line represents the ray that's being traced. Point 1 is where the ray first intersects the surface; this point represents the texture coordinates we would normally render. Instead, we wait until the ray intersects the height map. When it does, we can calculate our illusory coordinates, which get rendered instead. Let's move on to some more technical (linear algebra) stuff!

The main loop of this shader is going to find the intersection of the camera vector with the height map. To skip some extra processing, we are going to quit the loop as soon as any intersection is found.

**The vertex shader**

Here is what's going to happen in the vertex shader:

__1. Calculate the vector from the camera to the vertex.__

1.1. Transform the vertex position into world space and subtract its position from the camera position.

1.2. Subtract the world space vertex position from the light position to find the light direction vector.

__2. Transform camera vector, light direction vector and vertex normal to tangent space.__

2.1. Create the transformation matrix from the tangent, binormal and vertex normal vectors together with the world matrix. Be sure to take the transpose of the matrix, as we want to transform from world space to tangent space and not from tangent space to world space.

2.2. Use the transformation matrix just created to transform the vectors to tangent space.

2.3. Multiply the incoming vertex position with the world view projection matrix to get the correct output position.
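The vertex shader steps above can be sketched in plain Python standing in for shader code. All positions and basis vectors here are made-up example values; note that multiplying by the transpose of the TBN matrix is the same as taking the dot product with each basis vector:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, binormal, normal):
    """Transform a world-space vector into tangent space (transposed TBN)."""
    return (dot(v, tangent), dot(v, binormal), dot(v, normal))

# Hypothetical inputs: a flat surface facing +Z, camera straight above it.
world_pos  = (0.0, 0.0, 0.0)
camera_pos = (0.0, 0.0, 5.0)
light_pos  = (2.0, 0.0, 5.0)

tangent  = (1.0, 0.0, 0.0)
binormal = (0.0, 1.0, 0.0)
normal   = (0.0, 0.0, 1.0)

# Step 1: vectors from the vertex to the camera and to the light.
camera_vec = sub(camera_pos, world_pos)
light_vec  = sub(light_pos, world_pos)

# Step 2: move both into tangent space for the pixel shader.
camera_ts = to_tangent_space(camera_vec, tangent, binormal, normal)
light_ts  = to_tangent_space(light_vec, tangent, binormal, normal)

print(camera_ts)  # (0.0, 0.0, 5.0)
print(light_ts)   # (2.0, 0.0, 5.0)
```

Step 2.3 (the world-view-projection multiply) is left out here, as it is the standard output-position transform of any vertex shader.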

**The pixel shader**

This is where the actual parallax occlusion mapping takes place.

__1. Calculate the parallax constants.__

1.1. Calculate the maximum parallax offset and its direction vector with the help of the texture coordinate, the value read from the height map and the tangent-space vector from the pixel to the camera. The camera vector must be normalized to give the offset direction vector.

1.2. Determine the number of samples that should be used.

1.3. Calculate the step size. This is simply done by dividing the maximum height (1.0, as stated earlier) by the number of samples.
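Steps 1.1 to 1.3 can be sketched as one helper, again in plain Python standing in for shader code. The sample-count range and the sign convention for the offset direction are my own assumptions (implementations often negate the offset so it moves opposite to the view):

```python
import math

def parallax_constants(view_ts, height_scale, min_samples=8, max_samples=32):
    """Return (max_offset, num_samples, step_size) for the ray march.

    view_ts: tangent-space vector from the pixel to the camera, (x, y, z).
    """
    vx, vy, vz = view_ts
    # Offset direction in texture space: the normalized xy part of the view vector.
    length_xy = math.hypot(vx, vy)
    direction = (vx / length_xy, vy / length_xy)
    # The shallower the view angle (large xy, small z), the larger the offset.
    parallax_limit = (length_xy / vz) * height_scale
    max_offset = (direction[0] * parallax_limit, direction[1] * parallax_limit)
    # Use more samples at grazing angles, fewer when looking straight down.
    view_len = math.sqrt(vx * vx + vy * vy + vz * vz)
    cos_angle = vz / view_len
    num_samples = int(min_samples + (max_samples - min_samples) * (1.0 - cos_angle))
    step_size = 1.0 / num_samples  # maximum height (1.0) divided by sample count
    return max_offset, num_samples, step_size

# Example: viewing the surface at 45 degrees with a height scale of 0.1
# gives an offset of (0.1, 0.0) and 15 samples.
offset, samples, step = parallax_constants((1.0, 0.0, 1.0), 0.1)
```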

__2. Set up the core.__

2.1. Initialize the variables used in the main loop: currentRayHeight, currentOffset, lastOffset, lastSampledHeight, currentSampledHeight and currentSample.

2.2. Create the main loop. The main loop should run while the current sample is less than the total number of samples.

2.2.1. Calculate currentSampledHeight.

2.2.2. Check if currentSampledHeight is greater than currentRayHeight.

If it is, set values:

2.2.2.1. Calculate the first delta as currentSampledHeight minus currentRayHeight, and the second delta as (currentRayHeight + stepSize) minus lastSampledHeight.

2.2.2.2. Calculate the ratio which should be applied to lastOffset and currentOffset by dividing the first delta by the sum of the two deltas.

2.2.2.3. Calculate currentOffset with the ratio applied to lastOffset and currentOffset: ratio * lastOffset + (1.0 - ratio) * currentOffset.

2.2.2.4. Set currentSample to the maximum number of samples to quit the loop.

If not, step:

2.2.3.1. Decrement currentRayHeight by step size, increment currentSample by one.

2.2.3.2. Set lastOffset = currentOffset and add (stepSize * maxOffset) to currentOffset.

2.2.3.3. Set lastSampledHeight = currentSampledHeight.
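The whole core (steps 2.1 to 2.2.3.3) can be sketched as a linear search with one final interpolation step, again in plain Python. `sample_height_fn` stands in for the height map lookup in the pixel shader, and a `break` replaces setting currentSample to the maximum, with the same effect:

```python
def trace_height_map(sample_height_fn, tex_coords, max_offset, num_samples):
    """March a ray down through the height field; return the offset at the hit.

    sample_height_fn(u, v) must return a height in [0.0, 1.0].
    """
    step_size = 1.0 / num_samples
    current_ray_height = 1.0        # the ray starts at the top of the volume
    current_offset = (0.0, 0.0)
    last_offset = (0.0, 0.0)
    last_sampled_height = 1.0
    current_sample = 0
    while current_sample < num_samples:
        u = tex_coords[0] + current_offset[0]
        v = tex_coords[1] + current_offset[1]
        current_sampled_height = sample_height_fn(u, v)
        if current_sampled_height > current_ray_height:
            # The ray went below the surface: interpolate between the last
            # two samples to approximate the exact intersection point.
            delta1 = current_sampled_height - current_ray_height
            delta2 = (current_ray_height + step_size) - last_sampled_height
            ratio = delta1 / (delta1 + delta2)
            current_offset = (
                ratio * last_offset[0] + (1.0 - ratio) * current_offset[0],
                ratio * last_offset[1] + (1.0 - ratio) * current_offset[1],
            )
            break  # same effect as setting current_sample = num_samples
        # No hit yet: step the ray down and forward.
        current_ray_height -= step_size
        current_sample += 1
        last_offset = current_offset
        current_offset = (
            current_offset[0] + step_size * max_offset[0],
            current_offset[1] + step_size * max_offset[1],
        )
        last_sampled_height = current_sampled_height
    return current_offset
```

As a sanity check: for a flat height field at 0.5 the ray (starting at height 1.0) should hit exactly halfway along the maximum offset, and the interpolation step lands precisely there.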

__3. Set the finals.__

3.1. Set the final coordinates to texture coordinates + currentOffset.

3.2. Calculate the final normal and the final color with the help of the different maps used.

3.3. Optional: Manipulate the outgoing color and other parameters.
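The finals (steps 3.1 and 3.2) amount to shifting the texture coordinates and then lighting as usual. A minimal sketch, assuming simple Lambertian (N dot L) diffuse lighting and with `sample_normal_fn` / `sample_color_fn` standing in for the normal- and diffuse-map lookups:

```python
def shade(tex_coords, current_offset, sample_normal_fn, sample_color_fn, light_ts):
    """Apply the parallax offset, then compute a simple diffuse color.

    light_ts is the normalized tangent-space light direction from the
    vertex shader; both sampler arguments take (u, v) and return tuples.
    """
    # 3.1: the parallax-shifted coordinates.
    final_uv = (tex_coords[0] + current_offset[0],
                tex_coords[1] + current_offset[1])
    # 3.2: sample the maps at the shifted coordinates and light the pixel.
    normal = sample_normal_fn(*final_uv)
    color = sample_color_fn(*final_uv)
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_ts)))
    return tuple(c * n_dot_l for c in color)

# Example: a flat normal lit head-on returns the diffuse color unchanged.
pixel = shade((0.5, 0.5), (0.01, 0.0),
              lambda u, v: (0.0, 0.0, 1.0),   # stand-in normal map
              lambda u, v: (1.0, 0.5, 0.25),  # stand-in diffuse map
              (0.0, 0.0, 1.0))
```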

---

That's pretty much it. The tricky part is in the pixel shader, but these steps together with a linear algebra book should make it doable.
