Hrm... I'll try to briefly describe photon mapping, and (hopefully) clear up some misconceptions as well. Fortunately, I just picked up the authoritative text, although most of the information is already available in the PDF A Practical Guide to Global Illumination using Photon Mapping.
Photon mapping also tries to answer the question of how much illumination a particular point in a scene should get. It's a two-step process. In the first step, each of the light sources shoots out lots and lots of photons, and this information gets stored in something called a "photon map". In the second step, for every point in your scene that needs to be displayed, the renderer checks the photon map to see how much illumination that point should get.
That's the simple explanation. The more technical answer is... well, a bit more technical.
First of all, I should note that the term "photon map" is a bit of a misnomer. A photon map is just a list of all the locations in our scene where a photon bumped into something. For each point where a photon "bumps" into something, we keep track of:
- where the photon hit
- what color the photon was
- what direction it was coming from
It's interesting to note what we don't keep track of. We don't remember which photon it was, or what it bumped into.
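As a rough sketch, a photon-hit record could look something like this. The field names here are purely illustrative - no real renderer is being quoted:

```python
from dataclasses import dataclass

# A hypothetical photon-hit record; the field names are illustrative,
# not taken from any particular implementation.
@dataclass
class PhotonHit:
    position: tuple[float, float, float]   # where the photon hit
    power: tuple[float, float, float]      # what color (energy) it carried, as RGB
    direction: tuple[float, float, float]  # what direction it was coming from

# The "photon map" is, at heart, just a big list of these records.
photon_map: list[PhotonHit] = []
```

Note what's absent: no photon ID, and no reference to the object that was hit - exactly as described above.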
So how do we fill up this photon map? Basically, you do a simulation. Pick a light source, shoot out a photon, and have it bounce around for a while. One of several things can happen:
- It hits a mirrored surface and reflects off it.
- It hits a transparent surface and refracts (bends) through it.
- It bumps into an opaque (non-transparent) surface and bounces off.
- It bumps into an opaque surface and is absorbed.
- It "escapes" into the sky and we're done.
- It has bumped into lots of stuff already, so the next photon gets a turn.
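The bounce loop above could be sketched like this. Everything scene-related here (`scene.trace`, the surface types, the `reflect`/`refract`/`random_bounce` helpers) is a hypothetical placeholder for a real ray tracer, not an actual API:

```python
import random

MAX_BOUNCES = 5  # after this many bumps, the next photon gets a turn

def trace_photon(photon_map, scene, origin, direction, color):
    """Bounce one photon around a (hypothetical) scene, recording opaque hits."""
    for _ in range(MAX_BOUNCES):
        hit = scene.trace(origin, direction)       # hypothetical intersection test
        if hit is None:                            # escaped into the sky - done
            return
        if hit.surface == "mirror":
            direction = hit.reflect(direction)     # specular reflection
        elif hit.surface == "glass":
            direction = hit.refract(direction)     # bends through the surface
        else:
            # Opaque surface: note the hit in the photon map. The photon also
            # takes on the surface color (white photon + green wall = green photon).
            color = tuple(c * t for c, t in zip(color, hit.tint))
            photon_map.append((hit.position, color, direction))
            if random.random() < hit.absorb_prob:  # absorbed - done
                return
            direction = hit.random_bounce()        # diffuse bounce
        origin = hit.position
```

You'd call this once per photon, for hundreds of thousands of photons, spread across the light sources.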
When a photon "bumps" into an opaque surface, we make a note of it in the photon map. Also, the photon takes on the color of the object it bumped into, so a white photon bumping into a green wall will turn into a green photon. You get really cool effects this way.
If each photon gets a chance to bump into four or five things before we send out another photon, we'd eventually get a pretty good idea of what the room looked like after sending out thousands and thousands of photons.
Keep it up, and you'd eventually get a complete image - but you'd have to send out millions of photons - sort of like what this guy did. But we don't have days to wait, so we'll only send out a few hundred thousand photons instead, and estimate the results based on that.
So to figure out the lighting for the scene, you might think that you just count how many photons are nearby. The more photons, the brighter the point, right? No, because then you'd end up with the same problem you have with ambient occlusion: "noisy" points in the scene that are oversampled (too many photons = too bright) and points that are undersampled (too few photons = too dark). Remember, the photons are shot out at random, so even if we send out millions of photons, we'll still have a noisy image.
Instead, you find the nearest n points (where n is some number). For example, the program searches through the photon map looking for the five closest photon hits.
Once you have the location of these photon hits, you need to find out how much illumination your point gets. And this is where photon mapping gets clever: you figure out how large a bounding volume (let's say it's a sphere) it would take to contain those five photon hits. If they are all nearby, it'll be a very small sphere and the point gets lots of light; if they are far apart, it'll be a large sphere and the point gets less illumination. So the illumination of the point is inversely proportional to the size of the volume that contains them.
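A brute-force sketch of that estimate might look like this - it treats photon power as a single scalar for simplicity, and just sorts the whole map instead of using the balanced kd-tree a real implementation would rely on:

```python
import math

# A sketch of the radiance estimate: find the n nearest photon hits to a
# point, take the radius of the sphere that just encloses them, and divide
# their combined energy by the area it covers. Brute force for clarity;
# real implementations query a balanced kd-tree instead of sorting the map.
def estimate_illumination(photon_map, point, n=5):
    # photon_map entries are (position, power) pairs; power is a scalar here.
    nearest = sorted(photon_map, key=lambda hit: math.dist(hit[0], point))[:n]
    radius = math.dist(nearest[-1][0], point)  # sphere just big enough to hold them
    energy = sum(power for _, power in nearest)
    # Tighter cluster -> smaller radius -> brighter point.
    return energy / (math.pi * radius ** 2)
```

The key property: the same n photons always contribute, so a point in a dense cluster of hits comes out bright and a point whose nearest hits are far away comes out dim.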
This turns out to be really clever, because it interpolates lighting nicely. This prevents photon-mapped images from having the "salt and pepper" grain that sampling methods like path tracing and ambient occlusion get.
I've simplified a number of points - for example, you need to take into account the direction of the photons, I didn't describe how you allocate photons between light sources, and I didn't mention Russian roulette, irradiance caching, or balanced kd-trees... but that's the gist of it:
- Shoot out lots of photons
- Build a "photon map" tracking where (not what) the photons hit
- For each rendered point, determine how large a volume would need to be to enclose the n nearest photon hits.