OpenGL performs rendering via rasterization. (Rasterization is one form of rendering; ray tracing is another, but until very recently it has been too slow for real-time rendering.)
Vertex shader
The final vertex value should be stored in `gl_Position`, which is a `vec4` location defined by OpenGL that it will use later in the pipeline. `gl_Position` expects your vertex to be in clip space.

In practice, it would be really annoying to define all your geometry in clip-space coordinates. Therefore, we usually work in world space and then use a projection matrix to bring us to clip space. We multiply our vertex by the projection matrix in the vertex shader.
We also usually define a camera so that we can “move around” our scene. So we first multiply by the view matrix to get to camera space, before multiplying by the projection matrix.
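In matrix form, writing $M_{view}$ and $M_{proj}$ for the view and projection matrices: $v_{clip} = M_{proj}\,M_{view}\,v_{world}$.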
Given a vertex in world space, the most basic vertex shader we could have might look like the following sketch (the uniform and attribute names are illustrative):
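```glsl
#version 330 core

layout (location = 0) in vec3 aPos; // vertex position, given in world space

uniform mat4 view;       // world space -> camera (view) space
uniform mat4 projection; // camera space -> clip space

void main()
{
    // Applied right to left: view first, then projection.
    gl_Position = projection * view * vec4(aPos, 1.0);
}
```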
Remember that when multiplying matrices, we write them “backwards”: the transform closest to the vector is applied first (here, the view matrix before the projection matrix).
In the context of the vertex shader, all the `in` variables are also called vertex attributes. We need to specify their size, type, stride, etc. via `glVertexAttribPointer()`, and explicitly enable them with `glEnableVertexAttribArray()`. Also see VBOs, VAOs, vertex attributes.
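For instance, a typical setup for a buffer of interleaved positions and UVs might look like this (the vertex data and attribute locations here are illustrative):

```cpp
// One triangle, interleaved as: position (3 floats) + UV (2 floats).
float vertices[] = {
    // x      y      z     u     v
    -0.5f, -0.5f,  0.0f, 0.0f, 0.0f,
     0.5f, -0.5f,  0.0f, 1.0f, 0.0f,
     0.0f,  0.5f,  0.0f, 0.5f, 1.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Attribute 0: position. 3 floats, stride of 5 floats, offset 0.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

// Attribute 1: UV. 2 floats, stride of 5 floats, offset of 3 floats.
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                      (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
```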
Somewhere after the vertex shader
At this point, clipping is done. OpenGL checks, for each vertex, whether the following holds:

$$-w \le x \le w, \qquad -w \le y \le w, \qquad -w \le z \le w$$
Triangles entirely outside this range are discarded; triangles partially outside are clipped, which means a triangle may sometimes have to be broken into smaller triangles in order to keep the part that is visible within the view frustum.
Perspective divide is done for us automatically. OpenGL takes the $w$-component and divides the whole vector by it, bringing us to NDC.
- $x$, $y$, and $z$ will now all span the range $[-1, 1]$.
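Concretely, for a clip-space vertex $(x, y, z, w)$, the divide produces the NDC coordinates:

$$\left(\frac{x}{w},\ \frac{y}{w},\ \frac{z}{w}\right)$$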
Fragments
At this point, many triangles may overlap a single pixel. Before we decide which triangle ultimately determines the pixel’s color, we store what is called a fragment for each triangle.
A fragment is basically a data bundle that stores information about a triangle at the given pixel. This can include its color, texture coordinate (UV), and the $z$-coordinate, aka depth.
- A single pixel on your screen may have multiple fragments associated with it.
- Ultimately, information from one or more fragments is chosen to color the pixel. Multiple fragments are used if the triangles are transparent in some way ($\alpha < 1$) and blending is required; otherwise, a depth test determines which fragment is the “closest” or “on top”. A sketch of the corresponding render state follows below.
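In OpenGL, this behavior is controlled through render state; a minimal sketch:

```cpp
// Depth test: keep the fragment with the smallest depth (closest to camera).
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

// Blending: combine a transparent fragment with what is already in the
// framebuffer, weighted by the fragment's alpha.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```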