Expresses the amount of light scattered from and emitted by a point $p$. There are two main parts:
- $L_e(p, \omega_o)$, which expresses that point’s inherent emissive light (e.g. if it’s a light source). It depends on the intersection point $p$ and the outgoing direction $\omega_o$.
- The integral, which sums over all incoming rays $\omega_i$ within the hemisphere centered at $p$. It calculates each ray’s contribution to the light that leaves $p$.
We add these together to get $L_o(p, \omega_o)$, the output of the function. While correct, this equation does not capture how we implement it practically:
- Instead of an integral, we sample a finite number of rays to sum up.
- Because we perform sampling over the sample space (the hemisphere at intersection $p$), we must also involve each sample’s PDF.
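For reference, the equation being described can be written out as follows (a reconstruction using the symbols defined throughout these notes; the visibility term $V(p', p)$ is discussed in its own section below):

$$L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{\Omega} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, V(p', p)\, |\cos\theta_i|\, d\omega_i$$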
Incoming, outgoing rays
We know about $\omega_o$, which is the output direction from point $p$: the ray direction from $p$ back to the camera. $\omega_i$ is the incoming light direction, which is what we need to pick/sample, and what we’re integrating over.
Essentially, we’re working backwards from the camera, randomly sampling an $\omega_i$ at each intersection point to travel back along the path the light took.
For intersection point $p$, $\omega_i$ is the ray coming in while $\omega_o$ is the ray going out. The reason that we sometimes draw both $\omega_i$ and $\omega_o$ pointing out of $p$ is because it makes finding the dot product with $p$’s surface normal much easier, even though it’s not physically accurate.
Integral
Defined by $\int_{\Omega} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, V(p', p)\, |\cos\theta_i|\, d\omega_i$.
There are four main parts to the integral, multiplied together. We are integrating over the domain $\Omega$, which can either be a sphere ($S^2$) or a hemisphere ($H^2$).
- A sphere would be for materials like glass.
- A hemisphere is for most materials, which do not receive light from “both sides” like glass does.
How is it possible we integrate over $\omega_i$ when $\omega_i$ is technically a ray, which mathematically has no “width?” We pretend it does. We should treat $\omega_i$ as a cylinder with a very, very, very small radius, so when it hits some geometry, we are considering the differential area that the ray specifies.
This makes sense when we consider that the random variable $\omega_i$ is continuous, and the PDF is defined for a range of values, not a single point.
Surface material properties
The bidirectional scattering distribution function (BSDF), $f(p, \omega_o, \omega_i)$, represents the fraction of light that would leave along $\omega_o$ after hitting the surface at $p$ from the direction $\omega_i$.
It calculates how much light is propagated, how it is transferred, and how the material interacts with light.
Depending on $\omega_o$ and $\omega_i$, this function returns a different value. For example, mirrors require $\omega_i$ to be a direct reflection of $\omega_o$ across the surface normal.
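As a small illustration of the mirror case, here is a sketch assuming glm (the function name is made up for this example):

```cpp
#include <glm/glm.hpp>
using namespace glm;

// For a perfect mirror, the only wi with a nonzero BSDF is the reflection of wo
// about the surface normal n. glm::reflect expects the incident vector to point
// toward the surface, so we negate wo (which points away from it).
vec3 mirrorWi(const vec3& wo, const vec3& n) {
    return reflect(-wo, n);
}
```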
Some material properties that the BSDF may use:
- Reflectivity
- Metalness
- Albedo, the intrinsic color of the material
- Roughness
- Index of refraction
These properties are derived from the material that the ray intersected with.
In reality, researchers measure these values through many repeated experiments, and the resulting values are different for every material.
Sub-BxDFs
The BSDF describes scattering in general, but it can be broken down into sub-BxDFs.
The bidirectional reflection distribution function (BRDF) evaluates the light reflected along ray $\omega_o$ given a point of intersection $p$ and the direction of the incoming light $\omega_i$. It is entirely dependent on the properties of the material sampled at $p$. Its counterpart, the bidirectional transmission distribution function (BTDF), handles light transmitted through the surface.
For example, glass materials have both a BRDF and a BTDF, and we calculate the Fresnel reflectance to find the percentage of light that is reflected off of the surface (the rest is transmitted).
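One common way to approximate that Fresnel reflectance is Schlick’s approximation, shown here as an illustrative stand-in for the exact Fresnel equations:

$$F(\theta) \approx F_0 + (1 - F_0)\,(1 - \cos\theta)^5, \qquad F_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2$$

where $\theta$ is the angle between the incoming direction and the surface normal, and $n_1$, $n_2$ are the indices of refraction on either side of the surface.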
Implementations
In practice, there exist many ways to implement $f(p, \omega_o, \omega_i)$, with some being less realistic than others, but which may be easier to calculate/run faster.
- Blinn-Phong specular reflection
- Lambertian diffuse model, which is simply $\frac{\text{albedo}}{\pi}$ (see the sketch after this list)
- Microfacet BRDF models like the Torrance-Sparrow microfacet model
- Cook-Torrance BRDF model
- Oren-Nayar BRDF model
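For instance, the Lambertian model from the list above could look like the following sketch (assuming glm; the function name and signature are illustrative, not a specific renderer’s API):

```cpp
#include <glm/glm.hpp>
using namespace glm;

// Lambertian (perfectly diffuse) BRDF: light is scattered equally in all directions
// over the hemisphere, so f is a constant albedo / pi regardless of wo and wi.
vec3 lambertBRDF(const vec3& albedo, const vec3& wo, const vec3& wi) {
    const float PI = 3.14159265358979f;
    return albedo / PI;  // wo and wi are unused here, but most BRDFs depend on them
}
```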
Incoming light energy
Defined by $L_i(p, \omega_i)$.
This is the light energy arriving along the input ray $\omega_i$, from the perspective of $p$. For example, if $p$ is in shadow from $\omega_i$’s direction, then $L_i(p, \omega_i)$ is 0.
There are many ways to collect this energy. Throughout the past few homeworks, we’ve been implementing various ways of calculating this value.
`Li_Naive()`: Naive implementation
Use cosine-weighted sampling to get our $\omega_i$, and bounce the ray for a set number of iterations to get indirect lighting. The path we trace only returns meaningful results if we eventually intersect with a light source.
`Li_DirectSimple()`: Direct light sampling
Randomly but directly sample light energy from one of the light sources in our scene, giving us $L_i$.
`Li_DirectMIS()`: Direct light sampling, but with multiple importance sampling
Sample direct lighting two ways, once using `Sample_f()`, and then once using `Sample_Li()`. We then take the weighted average of the two $L_i$’s.
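A common choice for those weights is the power heuristic with exponent 2, shown here as one example (not necessarily the exact scheme used in the assignment):

$$w_f(\omega) = \frac{\big(n_f\, p_f(\omega)\big)^2}{\big(n_f\, p_f(\omega)\big)^2 + \big(n_g\, p_g(\omega)\big)^2}$$

where $p_f$ and $p_g$ are the PDFs of the two sampling strategies (BSDF sampling via `Sample_f()` and light sampling via `Sample_Li()`), and $n_f$, $n_g$ are the number of samples taken with each.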
`Li_Full()`: Full integrator
Considers both direct and indirect lighting. The final, most physically realistic form of the four, because it also considers global, indirect illumination.
- The naive implementation wasted a lot of traced paths because there was no guarantee that we would hit a light source, meaning no color would be produced.
- By having our light rays have a greater chance of contributing light back to the camera (since we’re sampling for direct light at each ray bounce), a full integrator will produce a less noisy image in the same amount of time as `Li_Naive()`.
Every time our ray bounces off our geometry, we will now:
- Pick a $\omega_i$ to factor in indirect light
- Pick a random light source and spawn another ray to measure direct light
- Also utilize multiple importance sampling while performing direct light sampling
Essentially, we sample direct illumination on each ray bounce, in addition to randomly sampling the next ray for indirect illumination.
Algorithm
Initialize `accum_color = vec3(0, 0, 0)`. We will do a for loop with `MAX_BOUNCES` again. Inside the loop, we check for the three cases in which we stop:
- We’ve reached the max number of bounces. Then the loop just ends; outside the for loop, we return `accum_color`.
- The ray intersects with nothing. In this case, break the loop early and return `accum_color`.
- The ray directly hits a light source/emissive source. In that case, we update `accum_color` and then return it.
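A rough sketch of that loop structure, under stated assumptions: glm for vectors; the `Intersection` fields and the helpers `sceneIntersect`, `directLightMIS`, and `sampleBounce` are placeholders rather than the assignment’s real API; and the `throughput` variable, which carries the per-bounce BSDF, cosine, and PDF factors forward, is an assumption about bookkeeping not spelled out in the outline above.

```cpp
#include <glm/glm.hpp>
using namespace glm;

// Placeholder types/functions standing in for whatever the actual path tracer provides.
struct Ray { vec3 origin, direction; };
struct Intersection {
    bool hit;          // did the ray hit anything?
    bool hitLight;     // did it hit an emissive surface?
    vec3 emittedLight; // Le at the hit point, if emissive
    // ... material, normal, hit point, etc.
};
Intersection sceneIntersect(const Ray& ray);                                // placeholder
vec3 directLightMIS(const Intersection& isect, const vec3& wo);             // placeholder: MIS direct lighting
Ray sampleBounce(const Intersection& isect, const vec3& wo, vec3& factor);  // placeholder: Sample_f-style bounce

const int MAX_BOUNCES = 10;

vec3 Li_Full(Ray ray) {
    vec3 accum_color = vec3(0, 0, 0);
    vec3 throughput  = vec3(1, 1, 1);  // f * |cos(theta)| / pdf accumulated along the path

    for (int bounce = 0; bounce < MAX_BOUNCES; ++bounce) {
        Intersection isect = sceneIntersect(ray);

        // Case 2: the ray intersects with nothing -> stop early.
        if (!isect.hit) break;

        // Case 3: the ray directly hits a light/emissive source -> update accum_color and return.
        if (isect.hitLight) {
            accum_color += throughput * isect.emittedLight;
            return accum_color;
        }

        vec3 wo = -ray.direction;

        // Direct light sampling (with MIS) at this bounce.
        accum_color += throughput * directLightMIS(isect, wo);

        // Sample a new wi for indirect light and continue the path.
        vec3 bounceFactor;
        ray = sampleBounce(isect, wo, bounceFactor);
        throughput *= bounceFactor;
    }

    // Case 1: we reached the max number of bounces (or broke out when nothing was hit).
    return accum_color;
}
```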
If the intersection point’s material is specular, then there is a 0% chance that the sampled ray will perfectly mirror $\omega_o$, and so we shouldn’t use `Li_DirectMIS()`, because we can’t sample two different rays.
Probability density function
We sample continuous random variables defined on $[0, 1)$ to get our sample $\omega_i$. They follow a standard uniform distribution.
In order to make sure that our render has as little bias as possible, we want our PDF to be a close match to our method of sampling $\omega_i$.
We increase the weight of rays hitting at a more tangent angle and decrease the weight of rays hitting at a more perpendicular trajectory.
- This is because it’s more likely for rays to hit at a more perpendicular angle, so dividing by the larger PDF compensates for the higher sampling probability.
- We treat $\omega_i$ as a small, infinitesimal, non-zero area, because the PDF measures a range of probabilities, not discrete values.
Example
The PDF of cosine-weighted sampling is $\frac{\cos\theta_i}{\pi}$. For uniform hemisphere sampling, the PDF is $\frac{1}{2\pi}$.
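For concreteness, a sketch of cosine-weighted hemisphere sampling in local space, where the +z axis plays the role of the surface normal (`u1` and `u2` are the uniform random numbers from $[0, 1)$ described above, and the function name is illustrative):

```cpp
#include <glm/glm.hpp>
#include <cmath>
#include <algorithm>
using namespace glm;

// Cosine-weighted sample of the hemisphere around the local +z axis (Malley's method:
// sample the unit disk uniformly, then project up onto the hemisphere).
// u1 and u2 are uniform random numbers in [0, 1); pdf comes out to cos(theta) / pi.
vec3 sampleHemisphereCosine(float u1, float u2, float& pdf) {
    const float PI = 3.14159265358979f;
    float r   = std::sqrt(u1);                       // radius on the unit disk
    float phi = 2.0f * PI * u2;                      // angle on the unit disk
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));  // z = cos(theta)
    pdf = z / PI;                                    // cos(theta) / pi
    return vec3(x, y, z);
}
```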
Warning
Due to floating point errors, the PDF might be calculated as 0. If so, we simply discard this sample and return black for this iteration.
Visibility test
Between our point $p$ and the point $p'$ that our output ray will hit. Returns true (multiply by 1) if unobstructed, false (multiply by 0, get black) otherwise.
Note that in practice, this step is usually performed when we calculate one of the other terms (like the BSDF), and is usually not explicitly apparent like in the LTE.
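As an illustration, a visibility check is usually implemented as a shadow ray, along these lines (a sketch; `shadowRayHitsAnything` and the epsilon offsets are assumptions standing in for the scene’s actual occlusion query):

```cpp
#include <glm/glm.hpp>
using namespace glm;

// Stand-in for whatever occlusion query the scene provides.
bool shadowRayHitsAnything(const vec3& origin, const vec3& direction, float maxDistance);

// Returns 1.0 if nothing blocks the segment between p and pPrime, 0.0 otherwise.
// The small epsilon offsets avoid self-intersection caused by floating point error.
float visibility(const vec3& p, const vec3& pPrime) {
    const float EPSILON = 1e-4f;
    vec3 toTarget = pPrime - p;
    float dist    = length(toTarget);
    vec3 dir      = toTarget / dist;
    return shadowRayHitsAnything(p + dir * EPSILON, dir, dist - 2.0f * EPSILON) ? 0.0f : 1.0f;
}
```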
Lambert’s cosine law
Also called the Lambertian term or the $|\cos\theta_i|$ term due to the way it’s calculated.
Regardless of the material, this term will always exist. Unlike the BSDF, the Lambertian term is a measure of the light itself, not the material. It describes how light increasingly diffuses at the intersection point the more tangent the incoming light is with respect to the geometry.
To calculate the term, take the absolute value of the dot product between $\omega_i$ and the surface normal $n$.
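In code, this term is a one-liner (a sketch assuming glm and unit-length vectors):

```cpp
#include <glm/glm.hpp>
#include <cmath>
using namespace glm;

// Lambert's cosine term: |cos(theta_i)| = |dot(wi, n)| for unit-length wi and n.
float lambertTerm(const vec3& wi, const vec3& n) {
    return std::abs(dot(wi, n));
}
```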
Implementation
Monte Carlo sampling
Integrating the LTE requires an infinite number of samples for every ray hitting $p$, and is too complex. In reality, we take a finite summation of random samples $\omega_i$.
- Each iteration is divided by the PDF associated with our sampling method to get $\omega_i$ in the first place.
- We “weigh” each iteration depending on how significant its contribution is to our scene.
We then average the results to estimate the integral’s value. The more samples we take, the more accurate the estimation becomes.
How to distribute the samples “evenly” and intelligently is another topic of discussion.
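Putting the terms above together, the Monte Carlo estimate of the integral takes the following form (written with the same symbols used throughout these notes; $N$ is the number of samples):

$$L_o(p, \omega_o) \approx L_e(p, \omega_o) + \frac{1}{N} \sum_{k=1}^{N} \frac{f(p, \omega_o, \omega_k)\, L_i(p, \omega_k)\, V(p', p)\, |\cos\theta_k|}{\mathrm{pdf}(\omega_k)}$$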