Beginner's Guide to Graphics Lighting and Shading

Glossary

Wikipedia - Computer Graphics Glossary

Shader: a program that runs on the GPU and describes a stage of computation (vertex transformation in the vertex shader; shading calculation in the fragment shader)

G-buffer: a screen-space representation of geometry and material information (e.g. color, normal, position/depth)

Fragment: the per-pixel data generated when a geometric primitive is rasterized; a pixel on screen can be the product of more than one fragment due to Z-buffering, blending, etc.

Vertex Lighting vs. Per-Pixel Lighting

[Figure: modern graphics pipeline]

Vertex Lighting

  • Lighting is computed per vertex

  • The calculation happens in the vertex shader

  • The resulting lighting/color information is then linearly interpolated across each face and rasterized

  • It is cheaper and faster (there are far fewer vertices than pixels), but shows noticeable artifacts on low-poly objects; a minimal sketch follows this list
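
For contrast with the per-pixel example later in this section, here is a minimal sketch of what a per-vertex (Gouraud) vertex shader might look like. It evaluates only a diffuse term, using the same uniform names as the per-pixel example below (the names are assumptions, not a fixed API); the computed color is what gets interpolated across the face, which is exactly what causes the low-poly artifacts.

Vertex Shader (per-vertex diffuse, sketch)

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 VertColor; // lighting result; interpolated across the face

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPos;   // assumed uniforms, mirroring the per-pixel example
uniform vec3 lightColor;

void main()
{
    vec3 worldPos = vec3(model * vec4(aPos, 1.0));
    vec3 norm = normalize(mat3(transpose(inverse(model))) * aNormal);

    // Diffuse lighting evaluated once per vertex; the rasterizer then
    // linearly interpolates VertColor across the triangle.
    vec3 lightDir = normalize(lightPos - worldPos);
    VertColor = max(dot(norm, lightDir), 0.0) * lightColor;

    gl_Position = projection * view * vec4(worldPos, 1.0);
}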

Per-Pixel Lighting

  • Lighting is computed per pixel/fragment (what exactly is a fragment? see the glossary above and the example below)

  • The calculation happens in the fragment shader

  • Normal information (passed from the vertex shader) is interpolated across each face, and lighting/color is calculated per fragment and rasterized

  • It is more expensive but produces fewer artifacts

Example of a per-pixel lighting shader

Vertex Shader

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;

out vec3 FragPos;
out vec3 Normal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    // world-space position of the vertex, passed on to the fragment shader
    FragPos = vec3(model * vec4(aPos, 1.0));
    // normal matrix (inverse transpose of the model matrix) keeps normals
    // correct under non-uniform scaling
    Normal = mat3(transpose(inverse(model))) * aNormal;

    gl_Position = projection * view * vec4(FragPos, 1.0);
}

Fragment Shader

#version 330 core
out vec4 FragColor;

in vec3 Normal;
in vec3 FragPos;

uniform vec3 lightPos;
uniform vec3 viewPos;
uniform vec3 lightColor;
uniform vec3 objectColor;

void main()
{
    // ambient
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    float specularStrength = 0.5;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
    vec3 specular = specularStrength * spec * lightColor;

    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}

As we can see, the fragment position and normal are computed in the vertex shader (where the precomputation happens) and passed to the fragment shader, where the lighting calculation (Phong lighting) is performed per fragment.

On a related note, GPU cost is related to:

  1. how many vertices are passed to the GPU from the buffer, and
  2. how complex the lighting calculation in the fragment shader is (both the algorithm/model complexity and the number of passes).

Forward vs. Deferred Rendering

Forward Rendering

The standard, out-of-the-box rendering technique.

Geometries are passed to the GPU and go through the vertex shader and fragment shader, with each geometry shaded against each light, one at a time, to form the final render.

Render complexity: O(number of geometry fragments × number of lights)
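
To make that cost concrete, here is a minimal sketch of a forward fragment shader that loops over every light for every fragment (NR_LIGHTS, lightPositions, and lightColors are illustrative names, and only a diffuse term is shown). Every shaded fragment pays for every light, which is where the fragments × lights complexity comes from.

Fragment Shader (forward, multiple lights, sketch)

#version 330 core
out vec4 FragColor;

in vec3 Normal;
in vec3 FragPos;

#define NR_LIGHTS 32 // illustrative light count
uniform vec3 lightPositions[NR_LIGHTS];
uniform vec3 lightColors[NR_LIGHTS];
uniform vec3 objectColor;

void main()
{
    vec3 norm = normalize(Normal);
    vec3 result = vec3(0.0);

    // Every fragment evaluates every light: O(fragments * lights).
    for (int i = 0; i < NR_LIGHTS; ++i)
    {
        vec3 lightDir = normalize(lightPositions[i] - FragPos);
        result += max(dot(norm, lightDir), 0.0) * lightColors[i];
    }

    FragColor = vec4(result * objectColor, 1.0);
}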

[Figure: forward rendering]

Deferred Rendering

Rendering (lighting) is deferred until all geometry has been processed.

Geometries are passed to the GPU and go through the vertex shader and fragment shader without a lighting pass; the final render is then computed over multiple render passes (a first geometry pass writes all geometry information to the G-buffer, and a second lighting pass computes lighting from the G-buffer).

Render complexity: O(screen resolution × number of lights)
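
As a rough sketch of the two passes (loosely following the structure of the LearnOpenGL deferred shading tutorial referenced below; all names here are illustrative): the geometry pass writes position, normal, and albedo to multiple render targets, and the lighting pass shades each screen pixel once by sampling the G-buffer.

Geometry Pass Fragment Shader (sketch)

#version 330 core
// Multiple render targets: one attachment per G-buffer component.
layout (location = 0) out vec3 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec4 gAlbedo;

in vec3 FragPos;
in vec3 Normal;

uniform vec3 objectColor;

void main()
{
    // No lighting here; just store the attributes for later.
    gPosition = FragPos;
    gNormal = normalize(Normal);
    gAlbedo = vec4(objectColor, 1.0);
}

Lighting Pass Fragment Shader (sketch)

#version 330 core
out vec4 FragColor;

in vec2 TexCoords; // from a full-screen quad

uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedo;

#define NR_LIGHTS 32 // illustrative light count
uniform vec3 lightPositions[NR_LIGHTS];
uniform vec3 lightColors[NR_LIGHTS];

void main()
{
    // Fetch the surviving (front-most) surface attributes for this pixel.
    vec3 FragPos = texture(gPosition, TexCoords).rgb;
    vec3 Normal = texture(gNormal, TexCoords).rgb;
    vec3 Albedo = texture(gAlbedo, TexCoords).rgb;

    // The loop runs once per screen pixel: O(screen resolution * lights).
    vec3 result = vec3(0.0);
    for (int i = 0; i < NR_LIGHTS; ++i)
    {
        vec3 lightDir = normalize(lightPositions[i] - FragPos);
        result += max(dot(Normal, lightDir), 0.0) * lightColors[i];
    }

    FragColor = vec4(result * Albedo, 1.0);
}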

[Figure: deferred rendering]

Discussion

Everything comes down to lighting: the GPU handles vertex processing easily, but lighting calculations are the most expensive part and can quickly slow down rendering. Forward rendering shades every fragment of every geometry, whether or not it overlaps or is hidden behind other fragments, so a single pixel on screen may have run the fragment shader multiple times.

This is where deferred rendering comes in handy. The G-buffer stores information such as color, normal, and depth; the lighting pass then combines that information to produce the final render (for example, the depth test culls all fragments that are obscured). Essentially, each pixel runs the expensive lighting shader only once.


Rasterization vs. Ray tracing

The forward and deferred rendering techniques are both in the realm of rasterization, the most popular and traditional real-time rendering technique. With advances in hardware, ray tracing, which is computationally demanding (and has traditionally been reserved for film/animation), can now be used in real-time video games.

Rasterization

We gather object information and project each object onto the screen one by one; the fragment shader then computes the final color of every fragment to form pixels on screen.

The fundamental idea of rasterization is that for each object/geometry, we look at (rasterize) its vertices/triangles to determine which pixels they cover.

So, as discussed under deferred rendering, every object is drawn but not all of them end up visible on screen. This overdraw can be mitigated by using deferred shading's depth test.
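
To illustrate "which pixels does a triangle cover", here is a toy coverage test written as a full-screen fragment shader. Real rasterization happens in fixed-function hardware, so this is only a conceptual sketch, and the triangle's pixel coordinates are made up.

Fragment Shader (edge-function coverage test, sketch)

#version 330 core
out vec4 FragColor;

// Hard-coded 2D triangle in pixel coordinates (illustrative values),
// wound counter-clockwise.
const vec2 v0 = vec2(100.0, 100.0);
const vec2 v1 = vec2(500.0, 150.0);
const vec2 v2 = vec2(300.0, 450.0);

// Edge function: signed area test telling which side of edge (a, b) p is on.
float edge(vec2 a, vec2 b, vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void main()
{
    vec2 p = gl_FragCoord.xy;
    float w0 = edge(v1, v2, p);
    float w1 = edge(v2, v0, p);
    float w2 = edge(v0, v1, p);

    // The pixel is covered when it lies inside all three edges.
    bool covered = (w0 >= 0.0 && w1 >= 0.0 && w2 >= 0.0);
    FragColor = covered ? vec4(1.0, 0.5, 0.2, 1.0) : vec4(vec3(0.0), 1.0);
}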

[Figure: rasterization]

Ray tracing

We cast rays from the eye (camera) through each pixel and gather information as the rays travel through the scene, intersecting objects and interacting with lights, to form the final render on screen.

The fundamental idea of ray tracing is that for each pixel, we look at (shoot a ray toward) each object/geometry to see how it contributes to the color of that pixel.

Ray tracing needs many rays per pixel, and even more when there is reflection and refraction. One way to accelerate it is to use bounding volumes (e.g. a bounding volume hierarchy); a minimal single-ray sketch follows.
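
Here is a minimal sketch of that per-pixel ray cast, written as a fragment shader against a single hard-coded sphere (one primary ray per pixel, no bounces; the resolution uniform and scene values are assumptions).

Fragment Shader (single-ray ray tracing, sketch)

#version 330 core
out vec4 FragColor;

uniform vec2 resolution; // assumed uniform: viewport size in pixels

// One hard-coded sphere and directional light for illustration.
const vec3 sphereCenter = vec3(0.0, 0.0, -3.0);
const float sphereRadius = 1.0;
const vec3 lightDir = normalize(vec3(1.0, 1.0, 1.0));

// Ray-sphere intersection: returns the nearest hit distance, or -1.0 on a miss.
float intersectSphere(vec3 ro, vec3 rd)
{
    vec3 oc = ro - sphereCenter;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - sphereRadius * sphereRadius;
    float h = b * b - c;
    if (h < 0.0) return -1.0;
    return -b - sqrt(h);
}

void main()
{
    // Cast one primary ray per pixel from a camera at the origin.
    vec2 uv = (2.0 * gl_FragCoord.xy - resolution) / resolution.y;
    vec3 ro = vec3(0.0);
    vec3 rd = normalize(vec3(uv, -1.5));

    float t = intersectSphere(ro, rd);
    if (t > 0.0)
    {
        // Shade the hit point with simple diffuse lighting.
        vec3 hit = ro + t * rd;
        vec3 n = normalize(hit - sphereCenter);
        float diff = max(dot(n, lightDir), 0.0);
        FragColor = vec4(vec3(diff), 1.0);
    }
    else
    {
        FragColor = vec4(vec3(0.0), 1.0); // ray missed: background
    }
}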

[Figure: ray tracing]

Discussion

The most significant visual difference is that ray tracing is physically more accurate than rasterization, and thus more realistic; this is most apparent in dynamic environments with objects that reflect and refract. Rasterization needs many approximation techniques to handle lighting and shadowing, such as extra render passes, baked light maps, and cubemap reflections, whereas ray tracing gets these results out of the box.

[Figure: rasterization vs. ray tracing]


Reference

Envato tuts+ - Forward Rendering vs. Deferred Rendering

Learn OpenGL - Deferred Shading

Wikipedia - Deferred Lighting

Learn OpenGL - Basic Lighting

Knowww - Per-vertex vs. per-fragment lighting

Unity Forum - What’s the difference between per-pixel and per-vertex lit in Forward Rendering?

YouTube - OpenGL Fragment Shaders | How Do Fragment Shaders Work?

NVIDIA - Ray Tracing Essentials

Stack Exchange - Mirror Reflections: Ray Tracing or Rasterization?

Quora - What is the difference between ray tracing and very high shader details?