Forward rendering

  1. Each triangle is rasterized into fragments
  2. Each fragment is colored by a fragment shader
  3. Fragments are discarded based on the depth test
  4. Fragments are written to the framebuffer

For a given pixel, one fragment is generated for every triangle that covers that pixel.

  • Step 2 wastes a lot of compute time: a fragment we have already shaded may be thrown away in step 3 because another fragment in front of it hides it.
  • The waste is even worse when the fragment shader is very complicated, like our PBR shader, or when the scene has hundreds of light sources (the sketch after this list shows where that cost comes from).
  • We could cull objects beforehand on the CPU side, but that also has a cost.
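
To make that cost concrete, here is a minimal sketch of a forward-style fragment shader, written as GLSL inside a C++ string constant the way an OpenGL renderer might embed it. The uniform names, the MAX_LIGHTS bound, and the simple Lambert term are assumptions for illustration, not something from these notes; the point is that the whole light loop runs for every rasterized fragment, even ones the depth test later discards.

    // Hypothetical forward-pass fragment shader (GLSL 3.30), stored as a C++ string.
    // Every rasterized fragment pays for the full loop over lights, even if a
    // closer fragment later wins the depth test and this result is thrown away.
    static const char* kForwardFragmentShader = R"(
    #version 330 core

    in vec3 vWorldPos;                  // interpolated from the vertex shader
    in vec3 vNormal;
    out vec4 FragColor;

    const int MAX_LIGHTS = 64;          // assumed upper bound for this sketch
    uniform int  uLightCount;
    uniform vec3 uLightPos[MAX_LIGHTS];
    uniform vec3 uLightColor[MAX_LIGHTS];
    uniform vec3 uAlbedo;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 color = vec3(0.0);
        // Cost scales with the number of lights, and this loop runs per fragment,
        // not per visible pixel.
        for (int i = 0; i < uLightCount; ++i)
        {
            vec3 l = normalize(uLightPos[i] - vWorldPos);
            color += uAlbedo * uLightColor[i] * max(dot(n, l), 0.0);
        }
        FragColor = vec4(color, 1.0);
    }
    )";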

Deferred rendering

Generally the default technique in modern real-time rendering.

Unlike forward rendering, we first render each fragment attribute to its own render target (a texture), and then combine all the attributes in a second render pass to compute the final color.
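
As a sketch of what writing one attribute per render target can look like, here is a hypothetical geometry-pass fragment shader (again GLSL embedded in a C++ string; the output names and the albedo/roughness packing are illustrative assumptions). Each layout(location = N) output is routed to a different color attachment of the G-buffer.

    // Hypothetical geometry-pass fragment shader: each output variable lands in a
    // different color attachment (render target) of the G-buffer.
    static const char* kGeometryFragmentShader = R"(
    #version 330 core

    in vec3 vWorldPos;
    in vec3 vNormal;
    in vec2 vUV;

    layout(location = 0) out vec4 gPosition;     // world-space position
    layout(location = 1) out vec4 gNormal;       // world-space normal
    layout(location = 2) out vec4 gAlbedoRough;  // rgb = albedo, a = roughness

    uniform sampler2D uAlbedoMap;
    uniform float uRoughness;

    void main()
    {
        // No lighting here at all; we only record the attributes.
        gPosition    = vec4(vWorldPos, 1.0);
        gNormal      = vec4(normalize(vNormal), 0.0);
        gAlbedoRough = vec4(texture(uAlbedoMap, vUV).rgb, uRoughness);
    }
    )";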

The geometry buffer (G-buffer) is the standard term for the collection of textures that stores these fragment attributes (a sketch of allocating one in OpenGL follows the list). This includes:

  • Position
  • Normal
  • Albedo
  • Metallic
  • Roughness
  • Anything else you want to keep track of (e.g. velocity)
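
Below is a minimal sketch of how such a G-buffer might be allocated with OpenGL, assuming a GL 3.3+ context and a loader such as GLAD. The choice of three attachments and their formats is an assumption to keep the sketch short; metallic, velocity, and anything else from the list above would simply get more channels or attachments.

    // Hypothetical G-buffer allocation: one texture per attribute group, all
    // attached as color attachments of a single framebuffer object.
    #include <glad/glad.h>   // any GL loader works; GLAD is assumed here

    struct GBuffer {
        GLuint fbo = 0;
        GLuint position = 0, normal = 0, albedoRough = 0;  // color attachments
        GLuint depth = 0;                                   // depth renderbuffer
    };

    GBuffer createGBuffer(int width, int height)
    {
        GBuffer g;
        glGenFramebuffers(1, &g.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, g.fbo);

        auto makeTarget = [&](GLuint& tex, GLenum internalFormat, int slot) {
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height,
                         0, GL_RGBA, GL_FLOAT, nullptr);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + slot,
                                   GL_TEXTURE_2D, tex, 0);
        };

        makeTarget(g.position,    GL_RGBA16F, 0);  // position needs float precision
        makeTarget(g.normal,      GL_RGBA16F, 1);  // normals too
        makeTarget(g.albedoRough, GL_RGBA8,   2);  // albedo + roughness fit in 8 bits

        // Tell OpenGL the fragment shader writes to all three attachments.
        const GLenum drawBuffers[3] = { GL_COLOR_ATTACHMENT0,
                                        GL_COLOR_ATTACHMENT1,
                                        GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, drawBuffers);

        // A depth buffer so the geometry pass still depth-tests correctly.
        glGenRenderbuffers(1, &g.depth);
        glBindRenderbuffer(GL_RENDERBUFFER, g.depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, g.depth);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // Incomplete framebuffer: a real renderer would report or assert here.
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return g;
    }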

Deferred rendering is performed in two render passes: the geometry pass, and then the lighting pass. In the geometry pass, we draw all our primitives and write their geometry information to the G-buffer. In the lighting pass, we combine all the G-buffer data into a single, final pixel color.
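
Sketched as per-frame CPU-side code, the two passes might look like this. The GBuffer struct and texture names come from the allocation sketch above, and drawScene / drawFullscreenQuad are assumed helpers for this illustration, not real API calls.

    // Hypothetical per-frame driver for deferred rendering. Uses the GBuffer
    // struct and GL loader from the allocation sketch above.
    void drawScene();           // assumed helper: issues all scene draw calls
    void drawFullscreenQuad();  // assumed helper: draws a two-triangle quad

    void renderFrame(const GBuffer& g, GLuint geometryProgram, GLuint lightingProgram)
    {
        // --- Geometry pass: rasterize every primitive, write attributes only ---
        glBindFramebuffer(GL_FRAMEBUFFER, g.fbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        glUseProgram(geometryProgram);
        drawScene();

        // --- Lighting pass: one full-screen quad, lighting runs once per pixel ---
        glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
        glClear(GL_COLOR_BUFFER_BIT);
        glDisable(GL_DEPTH_TEST);
        glUseProgram(lightingProgram);
        glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, g.position);
        glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, g.normal);
        glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, g.albedoRough);
        drawFullscreenQuad();
    }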

As the name says, we defer calculating a pixel's lighting until the whole scene has been rasterized and only the fragments visible to the camera remain.

  • This is much faster for expensive lighting calculations than running them once for every fragment covering a pixel (see the lighting-pass sketch after this list).
  • We can store more information about the scene for other purposes. For example, rendering spheres to the G-buffer to act as a mask for the rest of the image.
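
For completeness, a hedged sketch of the lighting-pass fragment shader such a pass might use (GLSL in a C++ string; the sampler and uniform names mirror the hypothetical sketches above). The light loop now runs once per screen pixel, on the single fragment that survived the depth test, instead of once per rasterized fragment.

    // Hypothetical lighting-pass fragment shader: reads the G-buffer, then runs
    // the (potentially expensive) light loop exactly once per screen pixel.
    static const char* kLightingFragmentShader = R"(
    #version 330 core

    in vec2 vUV;                        // from the full-screen quad
    out vec4 FragColor;

    uniform sampler2D gPosition;
    uniform sampler2D gNormal;
    uniform sampler2D gAlbedoRough;

    const int MAX_LIGHTS = 64;          // assumed upper bound for this sketch
    uniform int  uLightCount;
    uniform vec3 uLightPos[MAX_LIGHTS];
    uniform vec3 uLightColor[MAX_LIGHTS];

    void main()
    {
        vec3 pos    = texture(gPosition, vUV).rgb;
        vec3 n      = normalize(texture(gNormal, vUV).rgb);
        vec3 albedo = texture(gAlbedoRough, vUV).rgb;

        vec3 color = vec3(0.0);
        for (int i = 0; i < uLightCount; ++i)
        {
            vec3 l = normalize(uLightPos[i] - pos);
            color += albedo * uLightColor[i] * max(dot(n, l), 0.0);
        }
        FragColor = vec4(color, 1.0);
    }
    )";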