During the geometry pass, we render to the G-buffer. The minimum information required is positions and normals. The data should be in either camera or world space, i.e. captured before the perspective divide:
- Both camera and world space work, because the only difference between them is what is considered the origin. The important part is that both are 3D spaces
- The perspective divide makes the w-values uniform and effectively flattens everything into a 2D space. Bad.
- If we do use world space, we need to convert our march point and G-buffer point to camera space before calculating depth.
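A minimal sketch of that last bullet, assuming a GLM-based C++ math setup (the function names here are illustrative, not from any particular engine):

```cpp
#include <glm/glm.hpp>

// Hypothetical helper: bring a world-space march/G-buffer point into
// camera (view) space, so depth comparisons happen in linear 3D units.
glm::vec3 toCameraSpace(const glm::mat4& view, const glm::vec3& worldPos) {
    // w = 1 for positions; the view matrix is affine, so no divide is needed.
    return glm::vec3(view * glm::vec4(worldPos, 1.0f));
}

// In camera space, depth is the distance along the view axis
// (-z in a right-handed, OpenGL-style convention).
float cameraDepth(const glm::vec3& camSpacePos) {
    return -camSpacePos.z;
}
```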
In the second render pass, we first calculate the reflected ray, R. This can be done by finding the world/camera position of the geometry at a given fragment position, finding the vector from the camera to that position (this is the view vector V), and reflecting it across the normal to get R.
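A hedged sketch of this step, again assuming GLM; `worldPos` and `normal` would come from the G-buffer, `cameraPos` from the camera:

```cpp
#include <glm/glm.hpp>

// Compute R by reflecting the view vector V across the surface normal.
glm::vec3 reflectedRay(const glm::vec3& cameraPos,
                       const glm::vec3& worldPos,
                       const glm::vec3& normal) {
    glm::vec3 V = glm::normalize(worldPos - cameraPos); // camera -> geometry
    return glm::reflect(V, glm::normalize(normal));     // reflect V across N
}
```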
Raymarching
Next, we perform raymarching along R in order to find an intersection in the scene.
- Can be done in both 3D and 2D space, with tradeoffs to either. We will set up the ray in 3D here (see the sketch after this list), then march it in 2D below
- Define an end point for our march, because we may never hit an intersection (but we can also miss really distant objects as a result)
- Add a small epsilon offset to our starting position, to avoid accidentally intersecting with the geometry we started from
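Putting those two bullets together, a sketch of the 3D setup (the constants are illustrative and scene-dependent):

```cpp
#include <glm/glm.hpp>

const float kEpsilon     = 0.01f;  // nudge off the starting surface
const float kMaxDistance = 50.0f;  // bounded march: distant objects may be missed

struct MarchSegment { glm::vec3 start, end; };

// Build the 3D segment we will march along R.
MarchSegment buildMarchSegment(const glm::vec3& origin, const glm::vec3& R) {
    return { origin + R * kEpsilon,        // avoid self-intersection
             origin + R * kMaxDistance };  // end point of the march
}
```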
2D
Convert the start and end points to pixel coords by multiplying by the view and projection matrices, applying the perspective divide, and mapping into the viewport. We can then determine the slope between these two pixel points, and from it, the axis along which we should march.
- For example, if the slope is more horizontal, we march 1 pixel at a time along the x-axis. Analogous for a more vertical slope, the y-axis.
- We now know along which axis to march. “Marching” here means iterating 1 pixel at a time from our starting pixel to our end pixel.
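A sketch of this conversion and axis choice, assuming GLM; `viewProj` and `screenSize` are stand-ins for the renderer's actual state:

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Project a 3D point to pixel coordinates.
glm::vec2 toPixels(const glm::mat4& viewProj, const glm::vec3& p,
                   const glm::vec2& screenSize) {
    glm::vec4 clip = viewProj * glm::vec4(p, 1.0f);
    glm::vec2 ndc  = glm::vec2(clip) / clip.w;    // perspective divide
    return (ndc * 0.5f + 0.5f) * screenSize;      // NDC [-1,1] -> pixels
}

// Dominant-axis choice: step 1 pixel per iteration along the longer axis.
bool marchAlongX(const glm::vec2& startPx, const glm::vec2& endPx) {
    glm::vec2 d = endPx - startPx;
    return std::abs(d.x) >= std::abs(d.y);  // more horizontal -> x-axis
}
```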
For each pixel, compare the ray’s current z-value to the scene’s z-value at that pixel. Stop if the ray’s z-value is greater than or equal to the z-value at the pixel. We also check whether the difference between z-values is within our intersection tolerance.
We can scale this threshold relative to the size of the scene.
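The comparison loop might look like the following sketch; `rayDepthAt` and `sceneDepthAt` are hypothetical stand-ins for the interpolated ray depth and the G-buffer depth sample at step `i`:

```cpp
#include <cmath>
#include <functional>

// Walk pixel by pixel, comparing ray depth to scene depth.
// Returns true and sets hitStep when we land within the tolerance.
bool findIntersection(int steps, float tolerance,
                      const std::function<float(int)>& rayDepthAt,
                      const std::function<float(int)>& sceneDepthAt,
                      int& hitStep) {
    for (int i = 0; i < steps; ++i) {
        float rayZ   = rayDepthAt(i);
        float sceneZ = sceneDepthAt(i);
        if (rayZ >= sceneZ) {                        // ray passed behind geometry
            if (std::abs(rayZ - sceneZ) <= tolerance) {
                hitStep = i;                         // close enough: call it a hit
                return true;
            }
            return false;  // overstepped too far; a refinement pass can help
        }
    }
    return false;  // reached the end point without intersecting anything
}
```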
To increase accuracy, we can perform binary search along the ray between the last missed point and our current point (assuming that we overstepped our march and are now “behind” the geometry).
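A sketch of that refinement, with `depthDiffAt(t)` a hypothetical helper returning (ray depth − scene depth) at fraction `t` along the segment:

```cpp
#include <functional>

// Binary search between the last miss (tLo) and the first overstep (tHi).
float refineHit(float tLo, float tHi,
                const std::function<float(float)>& depthDiffAt,
                int iterations = 8) {
    for (int i = 0; i < iterations; ++i) {
        float tMid = 0.5f * (tLo + tHi);
        if (depthDiffAt(tMid) < 0.0f)
            tLo = tMid;  // still in front of the geometry: move forward
        else
            tHi = tMid;  // behind the geometry: pull back
    }
    return 0.5f * (tLo + tHi);
}
```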
We need to compare the z-value of our march point to the G-buffer depth. To do that, we reproject our z-value using perspective-correct interpolation. We also convert the march point’s coordinates back to world space, so we can sample the correct albedo from our G-buffer. This will be the reflected color that we see from our starting point.
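The key detail is that depth is not linear in screen space, but 1/z is, so we interpolate reciprocals. A sketch:

```cpp
#include <glm/glm.hpp>

// Perspective-correct depth: linear screen-space interpolation corresponds
// to linear interpolation of 1/z, so we mix reciprocals and invert.
float perspectiveCorrectZ(float zStart, float zEnd, float t) {
    float invZ = glm::mix(1.0f / zStart, 1.0f / zEnd, t);
    return 1.0f / invZ;
}
```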
Ray march goes offscreen/out of bounds
To handle this case, terminate the ray march when the current march point’s UV coordinates are outside the [0, 1] bounds.
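As a small sketch, with `uv` being the march point’s screen-space coordinate:

```cpp
#include <glm/glm.hpp>

// Terminate the march once the sample point has left the screen.
bool outOfBounds(const glm::vec2& uv) {
    return uv.x < 0.0f || uv.x > 1.0f ||
           uv.y < 0.0f || uv.y > 1.0f;
}
```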
Dealing with screen space limitations
Since we can only sample from objects visible within the viewport frustum, sometimes objects will be cut off near the edges of the screen.
- Can use smoothstep to blend the cutoff into transparency for a cleaner transition (sketched below)
- Do the same if we reach our maximum raymarching distance while we’re in the middle of geometry
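One possible shape for that fade, with `kFadeStart` an illustrative constant (fading begins at 90% of the way to the edge):

```cpp
#include <glm/glm.hpp>
#include <algorithm>

// Fade reflections toward transparency near the screen edges; the same idea
// applies when the march ends at max distance in the middle of geometry.
float edgeFade(const glm::vec2& uv, float kFadeStart = 0.9f) {
    glm::vec2 centered = glm::abs(uv * 2.0f - 1.0f); // 0 at center, 1 at edge
    float edge = std::max(centered.x, centered.y);
    return 1.0f - glm::smoothstep(kFadeStart, 1.0f, edge);
}
```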
We can also reduce the opacity of intersections that may be less accurate (checked against our intersection tolerance), as well as intersections on really thin surfaces, where the march may have accidentally gone too far and is actually sampling the back of the object.
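A sketch of that attenuation; `depthError` is the hit’s |rayZ − sceneZ|, and `thickness` a per-hit estimate of the surface thickness (both names are illustrative):

```cpp
#include <glm/glm.hpp>

// Scale reflection opacity by how trustworthy the hit is: large depth error
// (relative to the tolerance) or very thin geometry fades the reflection out.
float hitOpacity(float depthError, float tolerance,
                 float thickness, float minThickness) {
    float accuracy = 1.0f - glm::clamp(depthError / tolerance, 0.0f, 1.0f);
    float thick    = glm::clamp(thickness / minThickness, 0.0f, 1.0f);
    return accuracy * thick;
}
```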