OpenGL performs rendering via rasterization. (Rasterization is one form of rendering; ray tracing is another, but until very recently it has been too slow for real-time rendering.)
Vertex shader
The final vertex value should be stored in gl_Position, a vec4 location defined by OpenGL which it will use later in the pipeline. gl_Position expects your vertex to be in Clip space.
- In practice, it would be really annoying to define all your geometry in clip-space coordinates. Therefore, we usually work in world space and then use a projection matrix to bring us to clip space. We multiply our vertex by the projection matrix in the vertex shader.
- We also usually define a camera so that we can “move around” our scene. So we first multiply by the view matrix to get to camera space, then multiply by the projection matrix.
Given that our vertex positions are in world space, the most basic vertex shader we could have looks like this:
#version 460 core
uniform mat4 u_ProjMat; // camera space -> clip space
uniform mat4 u_ViewMat; // world space -> camera space
in vec3 vs_Pos; // vertex position in world space
in vec3 vs_Nor; // vertex normal (unused here)
in vec3 vs_Col; // vertex color (unused here)
void main() {
    gl_Position = u_ProjMat * u_ViewMat * vec4(vs_Pos, 1.0f);
}
Remember that when multiplying matrices, we do it right to left.
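In other words, for a world-space position $p$:

$$p_{clip} = P \, (V \, p)$$

where $V$ is the view matrix and $P$ is the projection matrix; the matrix closest to the vector is applied first.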
In the context of the vertex shader, all the in variables are also called vertex attributes. We need to specifically enable them and specify their size, type, stride, etc. via glVertexAttribPointer(). Also see VBOs, VAOs, vertex attributes.
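As a rough sketch of that CPU-side setup (assuming a GL context and function loader, e.g. glad, are already initialized, that the vertex data is interleaved as position/normal/color, and that attribute locations 0, 1, 2 correspond to vs_Pos, vs_Nor, vs_Col, e.g. via layout qualifiers or glBindAttribLocation; vertexData and vertexCount are hypothetical):

// Interleaved vertex data: 3 floats position, 3 floats normal, 3 floats color.
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 9 * sizeof(float), vertexData, GL_STATIC_DRAW);

GLsizei stride = 9 * sizeof(float);
// Attribute 0: position (vs_Pos)
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
// Attribute 1: normal (vs_Nor)
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));
// Attribute 2: color (vs_Col)
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));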
Geometry shader
Can potentially generate new vertices, and therefore create or update the primitives that we receive from the vertex shader output. This step is optional.
Primitive assembly
OpenGL gives us some options as to how to interpret these vertices. We usually think of them as GL_TRIANGLES, but GL_POINTS and GL_LINE_STRIP also exist.
Therefore, after the geometry shader, OpenGL finally assembles the vertices into primitives based on whichever option we passed in. In this case, our choices of primitive are triangles, points, and lines.
I’ll just assume we’re doing triangles from this point on though.
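For concreteness, the primitive type is the mode argument of the draw call. A sketch, continuing the hypothetical setup above (vertexCount is the number of vertices in the buffer):

glBindVertexArray(vao);
// Interpret every 3 consecutive vertices as one triangle.
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
// The same vertices could instead be drawn as individual points or a connected line strip:
// glDrawArrays(GL_POINTS, 0, vertexCount);
// glDrawArrays(GL_LINE_STRIP, 0, vertexCount);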
Rasterization
The resulting primitives are mapped to actual pixels on our screen, whose dimensions we specified via glViewport(). This results in fragments.
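A typical call, assuming we just want to cover the whole window (windowWidth and windowHeight are hypothetical):

// Map NDC onto the window's pixels; usually called at startup and again
// whenever the window is resized.
glViewport(0, 0, windowWidth, windowHeight);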
A fragment is basically a data bundle that stores information about a triangle at the given pixel. This can include its color, texture coordinate (UV), and the $z$-coordinate, aka depth.
Clipping
Clipping is done around this point in the pipeline (strictly speaking, it happens as part of primitive assembly, before rasterization). OpenGL checks, for each vertex in clip space, whether the following holds:

$$-w \le x \le w, \quad -w \le y \le w, \quad -w \le z \le w$$

and discards geometry that falls outside this range. This means that sometimes a triangle may have to be broken into smaller triangles in order to capture the part of the triangle that is visible within the perspective frustum.
Perspective divide
Perspective divide is done for us automatically. OpenGL takes the $w$-component and divides the whole vector by it, bringing us to NDC.
- $x$, $y$, and $z$ will now all span the range $[-1, 1]$.
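Written out, for a clip-space position:

$$\left(x_{ndc},\; y_{ndc},\; z_{ndc}\right) = \left(\frac{x_{clip}}{w_{clip}},\; \frac{y_{clip}}{w_{clip}},\; \frac{z_{clip}}{w_{clip}}\right)$$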
Fragment shader
At this point, many triangles may overlap a single pixel. Before we decide which triangle to ultimately use, we store what is called a fragment for each triangle.
A single pixel on your screen may have multiple fragments associated with it.
The fragment shader is run at this point to determine the final color associated with each fragment.
Specifying output color
Since the fragment shader is run at the end of the pipeline, defining an out variable is critical so you can set it and tell OpenGL what color this fragment contains.
gl_FragColor was deprecated with the release of OpenGL 3.0, and doesn’t exist if we’re only targeting the core profile. Don’t use it.
This confused me for the longest time: by default, if we just want to render directly to the screen (aka the default framebuffer), then unlike the vertex shader, we don’t need to “specify” the out variable on the CPU side via e.g. glVertexAttribPointer(). We just include one. Even the name of the variable doesn’t matter.
- This should make sense: since we’re at the end of the pipeline, the “out” variable will represent the final fragment color.
- OpenGL looks for at least one out vec4 variable. Otherwise weird things will happen.
- If we’re rendering to a custom framebuffer with multiple color buffers, then multiple out vec4 variables (with layout qualifiers!) are also normal and expected.
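A minimal sketch of both cases (shader sources shown as C++ raw string literals, the way they are typically fed to glShaderSource; all variable names are illustrative):

// Default framebuffer: a single out vec4 is enough, and nothing about it
// needs to be declared on the CPU side. The name is up to you.
const char* fs_default = R"(
    #version 460 core
    out vec4 out_Col;
    void main() {
        out_Col = vec4(1.0, 0.0, 0.0, 1.0); // solid red
    }
)";

// Custom framebuffer with multiple color attachments: layout qualifiers pick
// which draw buffer each output writes to (assuming glDrawBuffers maps draw
// buffer i to color attachment i, the usual setup).
const char* fs_gbuffer = R"(
    #version 460 core
    layout(location = 0) out vec4 out_Albedo;
    layout(location = 1) out vec4 out_Normal;
    void main() {
        out_Albedo = vec4(1.0, 0.0, 0.0, 1.0);
        out_Normal = vec4(0.0, 0.0, 1.0, 1.0);
    }
)";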
Depth test and alpha blending
Ultimately, information from one or more fragments is chosen to color the pixel. Which fragments do we pick for this pixel?
A depth test is performed for each fragment to determine whether it’s behind or in front of objects. This determines whether it will get discarded or shown.
An additional factor to consider is that multiple fragments will be used if the overlapping triangles are transparent in some way ($\alpha < 1$), meaning blending between colors is required.
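A sketch of the state that controls this (standard depth testing plus the conventional alpha-blending setup):

// Depth test: keep a fragment only if it is closer than what's already
// stored in the depth buffer.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

// Alpha blending: result = src.rgb * src.a + dst.rgb * (1 - src.a).
// Transparent geometry generally also needs to be drawn back-to-front.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);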
Depth test is performed after the fragment shader
This fact is important if, for example, our fragment shader is very expensive to compute (e.g. we’re doing PBR or realistic lighting or whatever). What’s the point of running this time-consuming shader for a fragment if we know beforehand that it will be discarded?
One solution to this problem is Deferred rendering.