Lighting

In this lesson you will implement a direct illumination model in OpenGL. You will learn to model classical light types in GLSL. In the second part you will implement the same lighting model using a deferred rendering technique.

Consult this page to download the base code, or start from the code you wrote during Lesson 1.

Forward shading

[Figure: Forward Shading]

Forward shading is usually defined in contrast to deferred shading. It is the traditional real-time rasterization and shading pipeline: a draw call is issued, vertices are shaded by the vertex shader, primitives are rasterized, and shading samples are computed by the fragment shader.

  1. What is the difference between a pixel and a fragment?
  2. Define Early-Z Rejection.
  3. Define direct lighting.

Blinn-Phong

We will implement a normalized Blinn-Phong lighting model.

[Figure: Blinn-Phong vectors]

  1. List other illumination models.
  2. Define the notion of BRDF.

1) Build a scene looking like this.

[Screenshot: 1_1]

2) Define an arbitrary \vec{l} vector in a fragment shader.

3) We define L_o(v) as the energy arriving along \vec{l} that is reflected toward the eye, according to this equation.

L_o(v) = \left(\frac{c_{diff}}{\pi} + \frac{\alpha_p+2}{8\pi}\,(\vec{n}\cdot\vec{h})^{\alpha_p}\, c_{spec}\right)(\vec{n}\cdot\vec{l})

Where c_{diff} is the diffuse color, c_{spec} the specular color, 0.1 < \alpha_p < 100.0 the specular power, and \vec{h} = \frac{\vec{l} + \vec{v}}{\parallel \vec{l} + \vec{v} \parallel} the normalized half vector.
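For reference, a minimal GLSL sketch of this equation; the function and parameter names are placeholders, not part of the base code:

```glsl
const float PI = 3.14159265359;

// Normalized Blinn-Phong: energy reflected toward the eye for one light.
// n, l and v must be normalized; l points from the surface toward the light.
vec3 blinnPhong(vec3 n, vec3 l, vec3 v,
                vec3 diffuseColor, vec3 specularColor, float specularPower)
{
    vec3 h = normalize(l + v);
    float ndotl = max(dot(n, l), 0.0);
    float ndoth = max(dot(n, h), 0.0);
    vec3 diffuse  = diffuseColor / PI;
    vec3 specular = specularColor * pow(ndoth, specularPower)
                  * (specularPower + 2.0) / (8.0 * PI);
    return (diffuse + specular) * ndotl;
}
```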

Point Light

[Figure: point light]

1) Light the scene with a point light.

We arbitrarily define a point light with a position, a color c_{light}, and an intensity l_{i}. Use uniform variables to pass this data to the fragment shader.

To compute the illumination you need to compute the vector \vec{l} from the surface to the light. We modify the normalized Blinn-Phong equation to look like this:

L_o(v) = c_{light}\, l_{i}\, \pi \left(\frac{c_{diff}}{\pi} + \frac{\alpha_p+2}{8\pi}\,(\vec{n}\cdot\vec{h})^{\alpha_p}\, c_{spec}\right)(\vec{n}\cdot\vec{l})

2) In which geometric space did you define your data?

3) Animate the values and make them tweakable with the UI.

4) Implement quadratic light attenuation.

We compute the quadratic attenuation by multiplying L_o(v) by \frac{1}{d^2}, with d the distance between the light and the surface.

We usually implement linear, quadratic, and cubic attenuation. Which one is physically correct?

[Screenshot: 1_2]

5) Add another light of the same type with different parameters.

You should probably compute your illumination in a function.
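For example, a point light evaluation could look like the following sketch, which reuses the blinnPhong function and PI constant from the earlier sketch; the parameter names, and the choice of passing everything as arguments, are assumptions:

```glsl
// Point light evaluation; reuses blinnPhong() and PI from the sketch above.
// p is the shaded position, in the same space as lightPosition.
vec3 pointLight(vec3 p, vec3 n, vec3 v,
                vec3 lightPosition, vec3 lightColor, float lightIntensity,
                vec3 diffuseColor, vec3 specularColor, float specularPower)
{
    vec3 toLight = lightPosition - p;
    float d = length(toLight);          // distance to the light
    vec3 l = toLight / d;
    float attenuation = 1.0 / (d * d);  // quadratic attenuation
    return lightColor * lightIntensity * PI * attenuation
         * blinnPhong(n, l, v, diffuseColor, specularColor, specularPower);
}
```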

[Screenshot: 1_3]

Directional Light

[Figure: directional light]

1) Light the scene with a directional light.

We arbitrarily define a directional light with a direction, a color, and an intensity. Computing the vector \vec{l} is trivial.
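A possible sketch, assuming the lightDirection parameter points from the light toward the scene:

```glsl
// Directional light: l is constant, with no position and no attenuation.
// lightDirection points from the light toward the scene.
vec3 directionalLight(vec3 n, vec3 v,
                      vec3 lightDirection, vec3 lightColor, float lightIntensity,
                      vec3 diffuseColor, vec3 specularColor, float specularPower)
{
    vec3 l = normalize(-lightDirection); // from the surface toward the light
    return lightColor * lightIntensity * PI
         * blinnPhong(n, l, v, diffuseColor, specularColor, specularPower);
}
```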

[Screenshot: 1_4]

Spot Light

[Figure: spot light]

1) Light the scene with a spot light.

We arbitrarily define a spot light with a position, a direction, an angle, a color, and an intensity.

The main difference between a point light and a spot light is that the spot light only lights the surface if the angle between the spot direction vector and -\vec{l} is smaller than the half-angle of the spot light.

2) Implement a soft transition between shadow and light.

You can compute an angular attenuation factor, or falloff, f = \left(\frac{\cos\theta - \cos\phi}{\cos\Phi - \cos\phi}\right)^4, clamped between 0 and 1, with \theta the angle between -\vec{l} and the direction of the spot, \Phi the angle of the spot, and \phi the angle at which the falloff ends, with \phi > \Phi. Pass the value of \phi as a uniform.
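A possible sketch combining the cone test and the falloff (quadratic attenuation is left for the next exercise); the parameter names are placeholders, and the angles are assumed to be in radians:

```glsl
// Spot light: point-light style direction plus an angular falloff.
// spotDirection points from the light toward the scene.
vec3 spotLight(vec3 p, vec3 n, vec3 v,
               vec3 lightPosition, vec3 spotDirection,
               float spotAngle, float falloffAngle, // phi > Phi
               vec3 lightColor, float lightIntensity,
               vec3 diffuseColor, vec3 specularColor, float specularPower)
{
    vec3 l = normalize(lightPosition - p);
    float cosTheta = dot(normalize(spotDirection), -l);
    // 1 inside the spot angle, 0 beyond the falloff angle, smooth in between.
    float falloff = pow(clamp((cosTheta - cos(falloffAngle))
                            / (cos(spotAngle) - cos(falloffAngle)), 0.0, 1.0), 4.0);
    return lightColor * lightIntensity * PI * falloff
         * blinnPhong(n, l, v, diffuseColor, specularColor, specularPower);
}
```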

[Screenshot: 1_5]

3) Implement quadratic attenuation.

4) Light your scene with 3 types of lights.

[Screenshot: 1_6]

Deferred shading

[Figure: Deferred Shading]

Deferred shading is a two-pass rendering technique that decouples rasterization from lighting computation. It has been ubiquitous in video game engines since the PS3/Xbox 360 console generation.

The scene is first rendered into a G-Buffer (G is for Geometry). The G-Buffer is a set of textures where we store, in screen space, the geometric properties (normals, depth) and material properties (diffuse color, specular power) of the scene.

In the second pass, we render a screen-space rectangle for each light; in each fragment shader invocation we rebuild the material and geometric properties of the rendered object from the data stored in the G-Buffer. The fragment shader then computes lighting in screen space. The process is repeated for each light using additive blending.

Deferred shading reduces fragment shader invocations, especially in cases where overdraw is significant. It also facilitates rendering scenes with many lights, by reducing fragment shader costs and making it easier to cull lights which are not contributing to rendered objects.

The technique presents major drawbacks. First, it can make implementing multiple materials with different BRDFs inefficient, and it is difficult to combine with geometric antialiasing techniques like MSAA. It is also impossible to render transparent objects without modification, forcing engines to support transparent objects through a dedicated forward rendering pipeline. Finally, while the technique can reduce fragment shader invocations, it can also be very costly in memory bandwidth due to multiple G-Buffer reads; it is therefore necessary to compress the data stored in the G-Buffer, which can reduce quality.

We will implement a very basic version of the algorithm. The curious will soon discover hundreds of articles and papers on optimization, G-Buffer compression, light-culling and support for additional features.

First pass: G-Buffer

The G-Buffer is built using Framebuffer Objects in OpenGL. Until this point, we used the default framebuffer implicitly created by the window toolkit. In this exercise we will create our own framebuffer, built for rendering the scene into the G-Buffer textures.

1) Build a scene looking like this.

[Screenshot: 1_1]

2) Successively display normals, diffuse color, specular power, and depth.

You can use the built-in variable gl_FragCoord.z to read the fragment depth in the fragment shader.

3) Render the scene inside a framebuffer object.

You will render the G-Buffer into three textures: a color texture, a normal texture, and a depth texture. You will need to modify the initialization part of your program and the fragment shader.

Build the necessary OpenGL objects.

Initialize the color texture using the size of the window, without passing it any data.

Create the normal texture in RGBA32F.

Create the depth texture.

Create the framebuffer and specify that you will write into two DrawBuffers.

Then attach the three textures to the framebuffer.

OpenGL uses a specific function to check the framebuffer object status.
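Putting the previous steps together, here is a minimal sketch of the initialization; it assumes a valid OpenGL context with a loaded function pointer set and window dimensions width and height, and the formats other than RGBA32F are suggestions:

```cpp
// G-Buffer: color, normal (RGBA32F) and depth textures.
GLuint gbufferTextures[3];
glGenTextures(3, gbufferTextures);

// Color texture, window-sized, no initial data.
glBindTexture(GL_TEXTURE_2D, gbufferTextures[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Normal texture in RGBA32F.
glBindTexture(GL_TEXTURE_2D, gbufferTextures[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Depth texture.
glBindTexture(GL_TEXTURE_2D, gbufferTextures[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Framebuffer with two draw buffers plus the depth attachment.
GLuint gbufferFbo;
glGenFramebuffers(1, &gbufferFbo);
glBindFramebuffer(GL_FRAMEBUFFER, gbufferFbo);
GLenum drawBuffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, drawBuffers);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gbufferTextures[0], 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gbufferTextures[1], 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, gbufferTextures[2], 0);

// Check that the framebuffer is complete.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "Error: G-Buffer framebuffer is not complete\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```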

Warning: while the FBO is bound, all rendering happens inside it and nothing will appear in the default framebuffer. Check that your program still displays something.

You now need to modify the rendering loop to render the scene inside the framebuffer you just created. Do not forget that glClear must be performed for each framebuffer, including the default framebuffer.

Check that your program does not display anything anymore.

Modify the output variables inside your fragment shader.

Modify your shader to write the diffuse color in Color.rgb and the specular color in Color.a, and the normal in Normal.rgb and the specular power in Normal.a.
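A sketch of the corresponding outputs; the input and uniform names are placeholders for whatever your shader already computes:

```glsl
#version 410 core

in vec3 vNormal;              // interpolated normal (placeholder name)
uniform vec3 DiffuseColor;    // placeholder material uniforms
uniform float SpecularColor;
uniform float SpecularPower;

layout(location = 0) out vec4 Color;   // draw buffer 0
layout(location = 1) out vec4 Normal;  // draw buffer 1

void main()
{
    // Diffuse color in rgb, specular color in alpha.
    Color = vec4(DiffuseColor, SpecularColor);
    // Normal in rgb, specular power in alpha.
    Normal = vec4(normalize(vNormal), SpecularPower);
}
```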

To verify your results you will need to complete the following exercise.

Blit

Use the blit shader to display a texture on the current viewport.

1) Display the G-buffer color texture as an overlay.

[Screenshot: 2_1]

Create a new shader using this code for the vertex shader and fragment shader.

Do not forget to fill the Texture uniform.
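The original snippets are not reproduced here, but a minimal blit shader could look like this, with the quad positions expected directly in clip space:

```glsl
// Vertex shader
#version 410 core
layout(location = 0) in vec2 Position; // already in clip space
out vec2 uv;
void main()
{
    uv = Position * 0.5 + 0.5; // [-1,1] -> [0,1]
    gl_Position = vec4(Position, 0.0, 1.0);
}
```

```glsl
// Fragment shader
#version 410 core
uniform sampler2D Texture;
in vec2 uv;
out vec4 FragColor;
void main()
{
    FragColor = texture(Texture, uv);
}
```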

You now need to apply this new shader to a surface. Create a quad; since this object is not meant to be transformed or animated, we can specify it directly in clip space.

Observe the difference with how other objects from the scene are defined.

Why do we not need to define normals and texture coordinates? Define clip space.

Create a VAO for the quad using this code.
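The original snippet is likewise not reproduced; a possible clip-space quad, drawn later with glDrawArrays(GL_TRIANGLE_STRIP, 0, 4), could be:

```cpp
// Clip-space quad covering the whole viewport, drawn as a triangle strip.
static const float quadVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};

GLuint quadVao, quadVbo;
glGenVertexArrays(1, &quadVao);
glGenBuffers(1, &quadVbo);
glBindVertexArray(quadVao);
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0); // attribute 0: clip-space position
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glBindVertexArray(0);
```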

We now need to display the color texture of the G-Buffer in order to verify its content and debug it. It will be more practical to display it as an overlay, meaning on top of the rendering but under the UI. We therefore disable the depth test. Do not forget to reactivate it in the right place.

The blit shader displays the texture over the whole viewport; to cover only a smaller part of the window, you need to define a smaller viewport while trying to preserve the original aspect ratio. Do not forget to reset the viewport to the window size before the main rendering draw calls.

You then need to use the blit shader, bind the right texture and draw the quad.
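A sketch of the overlay pass, reusing names from the previous sketches; blitProgram and textureLocation are assumptions standing in for your shader program and its Texture uniform location:

```cpp
// Draw the G-Buffer color texture as an overlay in the bottom-left corner.
glDisable(GL_DEPTH_TEST);
glViewport(0, 0, width / 4, height / 4); // quarter size keeps the window ratio

glUseProgram(blitProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gbufferTextures[0]);
glUniform1i(textureLocation, 0); // the Texture uniform samples unit 0

glBindVertexArray(quadVao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Restore state for the main rendering.
glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);
```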

2) Display the other buffers.

Modify the previous code in order to display the three G-Buffer textures simultaneously. Check your results.

Second pass: Lighting

Rendering the second pass is done after G-Buffer rendering but before drawing the overlay and UI.

1) Light your scene with a point light.

Create a new shader for computing the point light illumination in screen space. Reuse the blit vertex shader and use the following code as a base for the fragment shader.

In the default framebuffer, render a full screen quad using the point light shader.

Verify that you see the fragment shader color on the screen.

In the shader you need to build the three vectors \vec{v}, \vec{n}, \vec{l} and material properties. Start by reading the values stored in the three textures of the G-Buffer.

You can then easily build the diffuse color, specular color, specular power and normal.
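For instance, with uv the fragment's texture coordinates and hypothetical uniform names:

```glsl
uniform sampler2D ColorTexture;   // G-Buffer color: diffuse.rgb, specular in .a
uniform sampler2D NormalTexture;  // G-Buffer normal: normal.rgb, power in .a

// Inside main():
vec4 colorSample  = texture(ColorTexture, uv);
vec4 normalSample = texture(NormalTexture, uv);
vec3  diffuseColor  = colorSample.rgb;
float specularColor = colorSample.a;
vec3  n             = normalize(normalSample.rgb);
float specularPower = normalSample.a;
```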

Display those values to check your results.

To compute \vec{l} and \vec{v} you need to compute the position p of the geometry corresponding to the current fragment. This operation consists in converting the fragment's position (x, y, z) into the space chosen to compute the illumination, here world space. Start by computing the matrix that transforms from clip space to world space and pass it as a uniform to the fragment shader.

In the shader, we use this matrix to compute p.
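A sketch of the reconstruction, assuming a ScreenToWorld uniform holding the inverse of the view-projection matrix:

```glsl
uniform sampler2D DepthTexture;  // G-Buffer depth
uniform mat4 ScreenToWorld;      // inverse(projection * view)

vec3 reconstructWorldPosition(vec2 uv)
{
    float depth = texture(DepthTexture, uv).r;
    // Back to normalized device coordinates in [-1,1].
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = ScreenToWorld * ndc;
    return world.xyz / world.w;  // perspective divide
}
```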

Then you only need to pass the light position as a uniform to compute \vec{l} and the camera position to compute \vec{v}.

You now have all the data needed to compute the point light illumination in screen space.

2) Render multiple point lights.

[Screenshot: 2_2]

To display multiple lights of the same type, you need to modify your rendering loop. For each light, draw a quad using additive blending. Do not forget to deactivate blending at the end of the light loop.
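A sketch of the corresponding loop; the lights array, the uniform locations, and pointLightProgram are assumptions standing in for your own code:

```cpp
// One fullscreen quad per light, accumulated with additive blending.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // destination += source

glUseProgram(pointLightProgram);
glBindVertexArray(quadVao);
for (int i = 0; i < lightCount; ++i)
{
    glUniform3fv(lightPositionLocation, 1, lights[i].position);
    glUniform3fv(lightColorLocation, 1, lights[i].color);
    glUniform1f(lightIntensityLocation, lights[i].intensity);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

glDisable(GL_BLEND);
```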

3) Use the same approach to light your scene with spot lights and directional lights.

[Screenshot: all_lights_deferred]

Create two more shaders to compute spot light and directional light illumination.

Use a simple light loop modification to render multiple light types.

Optimization

1) Reduce G-Buffer size.

Compress your normals.

http://aras-p.info/texts/CompactNormalStorage.html

http://c0de517e.blogspot.fr/2015/01/notes-on-g-buffer-normal-encodings.html
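For example, the spheremap transform described in the first article stores a unit normal in two channels; a GLSL transcription of that method (adapted from the article) could be:

```glsl
// Spheremap transform (see aras-p's article above): unit normal <-> two channels.
vec2 encodeNormal(vec3 n)
{
    float p = sqrt(n.z * 8.0 + 8.0);
    return n.xy / p + 0.5;
}

vec3 decodeNormal(vec2 enc)
{
    vec2 fenc = enc * 4.0 - 2.0;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f / 2.0);
}
```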

2) Display a quad only if the light contributes to lighting.

3) Compute the bounding box of the light in screen space and draw the quad accordingly.
