
A Question about Ambient Occlusion

Posted: Sun Apr 18, 2004 6:04 pm
by poutsa
My Question:
Is this new feature of Blender, "Ambient Occlusion", the same as the Metropolis Light Transport renderer?

I saw this page about the Virtual Light global illumination renderer, and I think the rendering results are the same!? Or not? What do you think?

Follow this link and look under the Gallery option:

If someone can answer me, that would be great! Thanks.


Posted: Mon Apr 19, 2004 12:42 am
by ideasman
I'm not sure, so correct me if I'm wrong, but Ambient Occlusion is a kind of fake GI. That's why it's so much quicker than normal GI.

Ambient Occlusion is the technical name for the rendering method.
"Metropolis Light Transport renderer" sounds like some kind of made-up whizz-bang name, so maybe ask on the Virtual Light forum.

Posted: Mon Apr 19, 2004 10:17 pm
by dcuny
No, it's not the same thing at all.

Ambient Occlusion simulates "ambient" lighting, the sort of lighting you get on a cloudy day, where light is coming from all directions. In contrast, normal lighting is directed from a particular light source.

The "occlusion" part refers to how the method works: it refers to an object occluding (blocking) a light source.

What makes AO unique is that it doesn't use light sources to determine how much lighting a point gets. Instead, at every point in the image it sends out a bunch of rays. If a ray runs into something, it's assumed to be a "shadow" ray. If the ray doesn't run into anything, it's a "light" ray (in theory, the ray has "escaped" out to the sky, where illumination comes from).

So if you send out 100 rays, and 10 bump into things, the point is 10% shadowed, so it gets 90% illumination.

These rays are sent out at random (biased toward the surface normal, but that's more detail than you probably care about). Now, since this is a random sample, if you only send out a few rays, you're likely to miss a lot of things. More rays = better sample = less noise.

Unfortunately, more rays also mean that it takes longer to get an image. So you basically play with the values until you get an image that balances speed against noise.

It's all a bit of a cheat, but it's pretty effective. And it's relatively cheap, when compared to "real" global illumination algorithms. It's sort of skydome lighting, but in reverse. Instead of sending out rays from the skydome and seeing if they illuminate a point, you send out rays from the point, and see if they reach the skydome.

It turns out that only objects that are close are likely to shadow something, so AO only counts a ray as a "shadow" ray if the object it hits is nearby. So that's the "closeness" parameter you can set.
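The sampling loop described above can be sketched in a few lines of Python. This is only a minimal illustration of the idea, not Blender's actual implementation; the `scene.intersect(origin, direction)` helper, which returns the distance to the nearest hit or `None` on a miss, is a hypothetical stand-in for a real ray-scene intersection routine.

```python
import math
import random

def random_hemisphere_direction(normal):
    """Uniform random direction on the hemisphere around `normal`."""
    while True:
        # Rejection-sample a point inside the unit ball, then normalize.
        v = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        sq = v[0] ** 2 + v[1] ** 2 + v[2] ** 2
        if 1e-6 < sq <= 1.0:
            break
    length = math.sqrt(sq)
    v = (v[0] / length, v[1] / length, v[2] / length)
    # Flip the direction into the hemisphere of the surface normal.
    if v[0] * normal[0] + v[1] * normal[1] + v[2] * normal[2] < 0:
        v = (-v[0], -v[1], -v[2])
    return v

def ambient_occlusion(point, normal, scene, n_rays=100, max_dist=10.0):
    """Estimate ambient occlusion at a surface point.

    Casts n_rays random rays over the hemisphere; a ray counts as a
    "shadow" ray only if it hits something within max_dist (the
    "closeness" parameter from the post). Returns the fraction of
    unoccluded rays, i.e. the illumination factor.
    """
    hits = 0
    for _ in range(n_rays):
        d = random_hemisphere_direction(normal)
        t = scene.intersect(point, d)  # hypothetical helper
        if t is not None and t < max_dist:
            hits += 1
    # e.g. 10 hits out of 100 rays -> 10% shadowed -> 0.9 illumination
    return 1.0 - hits / n_rays
```

The `max_dist` argument plays the role of the "closeness" parameter: hits beyond it are ignored, so distant geometry never darkens the point.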

The Metropolis Light Transport algorithm is this wicked-cool method of raytracing that does this neat adaptive sampling stuff... It's really complicated to explain, and even more complicated to implement, too. It turns out that most of your scenes don't need anything as complicated as MLT, and Photon Mapping is generally a better approach.

Posted: Tue Apr 20, 2004 7:15 am
by cessen
Well said, Dcuny. :-)

Posted: Tue Apr 20, 2004 4:11 pm
by leinad13
So dcuny, can you explain photon mapping for me?

Posted: Tue Apr 20, 2004 7:00 pm
Thanks dcuny, it's cool to see that some people take the time to explain Blender's features. That's great. :D

Your explanation is good, and it shows that ambient occlusion is close to a real radiosity solution! When rays are cast to see whether the point is occluded or not, it would be possible to look at the lighting of the object that was hit, and thereby simulate real radiosity.

Maybe one day in Blender? :wink:

Posted: Tue Apr 20, 2004 7:39 pm
by Carnivore
GFA-MAD wrote: Maybe one day in Blender? :wink:
Someone please explain to me what this "real" radiosity is? Blender has two types of radiosity (as I categorize them): the really simple and the really complicated :D

Posted: Tue Apr 20, 2004 7:58 pm
OK Carnivore, I will try my best. My English is horrible, so please excuse it!

Radiosity is a way to simulate what ray tracing can't: indirect illumination. Let me explain: in ray tracing, you cast a ray from the "eye" (camera) through each pixel of the screen. When you hit an object, you calculate the illumination of that point by casting a ray from it to each light. If the ray is occluded, you are in shadow; if not, you are in light. OK?
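A minimal sketch of that shadow-ray test in Python (the `Light` class and the `scene.occluded(a, b)` helper are hypothetical stand-ins for illustration, not Blender code):

```python
class Light:
    """Minimal point light: a position and a scalar intensity."""
    def __init__(self, position, intensity):
        self.position = position
        self.intensity = intensity

def direct_lighting(hit_point, lights, scene):
    """Classic ray-tracing direct illumination.

    For each light, cast a shadow ray from the hit point toward the
    light; the light only contributes if nothing blocks that ray.
    `scene.occluded(a, b)` is a hypothetical helper returning True if
    any object lies between points a and b.
    """
    total = 0.0
    for light in lights:
        if not scene.occluded(hit_point, light.position):
            total += light.intensity  # ignoring falloff/angle for brevity
    return total
```

As the post says, this only ever sees the lights themselves, which is exactly why indirect illumination needs something extra.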

The problem with this is that you can't take into account illumination coming from other points of the scene. For example, on cloudy days there is no direct illumination, only indirect: photons from the sun bounce around in the atmosphere before arriving on Earth.

Blender already has one complete simulation of indirect illumination: radiosity, as a post-process effect. It works like this:
-Each face with an 'Emit' value shoots energy at its environment. Other faces 'catch' this energy and then throw it onward in turn... At the end, you obtain a lot of faces with different vertex colors corresponding to the illumination each one received.

The other radiosity method that exists (not in Blender... but AO is quite near!) is calculated during the ray-tracing process:
Each time a ray touches a surface, it casts many rays in random directions (around the normal, of course) and sees what they touch. If a ray touches a red material (for example), then it gives a little 'red' to the point.

This method produces a grainy picture, but it doesn't change the geometry of the objects, and it is slllooowww to render. LightWave uses this method.

Well, there is much more to say, but it's hard for a French speaker to write about these techniques in English... sorry!


Posted: Tue Apr 20, 2004 8:38 pm
by poutsa
Thanks for your help with Ambient Occlusion!! Great job explaining it!


Ciao 8)

Posted: Wed Apr 21, 2004 5:11 pm
by cessen
Ambient occlusion is not radiosity, nor is it very similar to it. What ambient occlusion does is it treats the entire sky as one huge area light source. In the case of Blender, it will take the colors and textures of the sky into account, but it does not simulate indirect illumination.

Ambient occlusion can be useful for *faking* indirect illumination, because it can be used as a very effective fill light. But it's no more radiosity than a point light source used to fake indirect illumination is.

AO is effectively another type of area light source, *not* radiosity or global illumination.

Posted: Wed Apr 21, 2004 5:18 pm
by cessen
Radiosity is a method of simulating indirect, diffuse lighting. For instance, light that bounces off of one object onto another. It only works on the matte aspect of a surface, though, not the shiny, reflective aspects.

Radiosity, as a word, is also often used to refer to indirect diffuse (matte) illumination in general, though technically it only refers to a specific algorithm.

Posted: Sat Apr 24, 2004 12:21 am
by dcuny
Hrm... I'll try to briefly describe photon mapping, and (hopefully) clear up some misconceptions as well. Fortunately, I just picked up the authoritative text, although most of the information is already available in the PDF A Practical Guide to Global Illumination using Photon Mapping.

Photon mapping also tries to answer the question of how much illumination a particular point in a scene should get. It's a two-step process. The first step involves having each of the light sources shoot out lots and lots of photons, and storing this information in something called a "photon map". In the second step, for every point in your scene that needs to be displayed, the photon map is checked to see how much illumination that point should get.

That's the simple explanation. The more technical answer is... well, a bit more technical.

First of all, I should note that the term "photon map" is a bit of a misnomer. A photon map is just a list of all the locations in our scene where a photon bumped into something. For each point where a photon "bumps" into something, we keep track of:
  • where the photon hit
  • what color the photon was
  • what direction it was coming from
It's interesting to note what we don't keep track of. We don't remember which photon it was, or what it bumped into.

So how do we fill up this photon map? Basically, you do a simulation. Pick a light source, shoot out a photon, and have it bounce around for a while. One of several things can happen:
  • It hits a mirrored surface and reflects off it.
  • It hits a transparent surface and refracts (bends) through it.
  • It bumps into an opaque (non-transparent) surface and bounces off.
  • It bumps into an opaque surface and is absorbed.
  • It "escapes" into the sky and we're done.
  • It bumped into lots of stuff already, so the next photon gets a turn.
When a photon "bumps" into an opaque surface, we make a note of it in the photon map. Also, the photon takes on the color of the object it bumped into, so a white photon bumping into a green wall will turn into a green photon. You get really cool effects this way.

If each photon gets a chance to bump into four or five things before we send out another photon, we'd eventually get a pretty good idea of what the room looked like after sending out thousands and thousands of photons.
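The photon-shooting pass described above might look like this in Python. It is only an illustration of the idea, not any renderer's real code; `light.emit_photon()` and `scene.trace(origin, direction)` are assumed helpers, with `scene.trace` returning either `None` (the photon escaped to the sky) or a tuple of (hit position, surface color, bounce direction):

```python
def trace_photons(light, scene, n_photons=1000, max_bounces=5):
    """Build a simple photon map: shoot photons from a light, record
    (position, color, incoming direction) at each hit, and tint the
    photon by the surface it bounced off."""
    photon_map = []
    for _ in range(n_photons):
        pos, color, direction = light.emit_photon()  # hypothetical helper
        for _ in range(max_bounces):
            hit = scene.trace(pos, direction)        # hypothetical helper
            if hit is None:
                break  # escaped into the sky; next photon gets a turn
            hit_pos, surface_color, bounce_dir = hit
            # Record where it hit, its color, and the direction it came
            # from -- but not which photon it was or what it bumped into.
            photon_map.append((hit_pos, color, direction))
            # The photon takes on the surface color, so a white photon
            # hitting a green wall continues as a green photon.
            color = tuple(c * s for c, s in zip(color, surface_color))
            pos, direction = hit_pos, bounce_dir
    return photon_map
```

Note that the record is made before the tint is applied: the stored color is the light the photon carried *into* the surface.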

Keep it up, and you'd eventually get a complete image - but you'd have to send out millions of photons - sort of like what this guy did. But we don't have days to wait, so we'll only send out a few hundred thousand photons instead, and estimate the results based on that.

So to figure out the lighting for the scene, you might think that you just find out how many photons are nearby. The more photons, the brighter the point, right? No, because then you'd end up with the same problem you have with ambient occlusion: "noisy" points in the scene that are oversampled (too many photons = too bright) and points that are undersampled (too few photons = too dark). Remember, the photons are shot out at random, so even if we send out millions of photons, we'll still have a noisy image.

Instead, you find the nearest n photon hits (where n is some number). For example, the program might search through the photon map for the five hits closest to your point.

Once you have the locations of these photon hits, you need to find out how much illumination your point gets. And this is where photon mapping gets clever: you figure out how large a bounding volume (let's say it's a sphere) would be needed to contain those five photon hits. If they are all nearby, it'll be a very small sphere and the point gets lots of light; if they are far apart, it'll be a large sphere and the point gets less illumination. So the illumination of the point is inversely proportional to the size of the volume that contains them.

This turns out to be really clever, because it interpolates lighting nicely. This prevents photon-mapped images from having the "salt and pepper" grain that sampling methods like path tracing and ambient occlusion get.
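The nearest-n estimate can be sketched like this, assuming the photon map is a list of (position, color, direction) records as described earlier in the thread. For simplicity each photon carries unit power, and the estimate divides by the projected disc area of the bounding sphere, as in Jensen's practical guide:

```python
import math

def estimate_irradiance(point, photon_map, n=5):
    """Photon-map density estimate at `point`.

    Finds the n nearest photon hits and divides their count by the
    area of the disc that contains them: tightly packed photons mean
    a small radius and a bright point; spread-out photons, a dim one.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Sort the whole map by distance; a real renderer would use a
    # kd-tree here instead of a linear scan.
    nearest = sorted(photon_map, key=lambda p: dist(p[0], point))[:n]
    radius = dist(nearest[-1][0], point)      # distance to the n-th hit
    area = math.pi * radius ** 2              # projected disc area
    return len(nearest) / area if area > 0 else float("inf")
```

A real implementation would also weight each photon by its stored color and the surface's reflectance, but the radius trick is the heart of it.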

I've simplified a number of points. For example, you need to take into account the direction of the photons; I didn't describe how you allocate photons between light sources; and I didn't mention Russian roulette, irradiance caching, or balanced kd-trees... but that's the gist of it:
  • Shoot out lots of photons
  • Build a "photon map" tracking where (not what) the photons hit
  • For each rendered point, determine how large a volume would need to be to enclose the n nearest points.
Whew! :D

Posted: Sat Apr 24, 2004 1:57 am
Thanks dcuny for those explanations. They are very clear. :D

The way you describe it, photon mapping is a "two-pass" method.

But is there another way to simulate radiosity? When AO arrived in Blender, it maybe brought with it another way to produce radiosity, and I will explain in detail what I am thinking about. I may be wrong, and if so, please tell me what is wrong.

Ray tracing process with GI:

for ray in each ray cast (one per pixel):
    for hit in each ray-object intersection:
        calculate the normal illumination (standard ray-tracing method)
        nRay = number of samples to take
        GIIllum = 0
        for GI = 1 to nRay:
            GIRay = a ray near the normal, starting from 'hit'
            if GIRay hits the sky:
                GIIllum += SkyColor
            if GIRay hits an object:
                GIIllum += color of the hit object, with direct illumination but without specularity!


In this example, GIIllum represents the quantity of light brought by the environment for the ray.
Of course, the amount of light taken into account by GI has to be corrected by a factor.
That's why I think that AO is close to a GI method (with no recursion, I agree!)

What do you think about it? :?:

Posted: Sat Apr 24, 2004 8:20 am
by dcuny
You're basically describing Monte Carlo Path Tracing, but without having any "bounces" in the rays.

It'll work, but it's got the same defect as AO - without a lot of samples, you get a lot of grain in your image. Check out these images for some examples.

YafRay probably supports this, but Art of Illusion is a free Java-based raytracer with a nice GUI. It's pretty easy to create a basic scene and then mess with the various rendering parameters.

To simulate the method you described, choose Scene from the menu, and the Render Scene submenu. In the Rendering Options dialog, set Antialiasing to Medium. Choose Illumination... and set Global Illumination to Monte Carlo and close the dialog. In Advanced, set the Max Tree Depth to 1. Render away! :)

Then try messing with various parameters and see what kind of results you get - and how long it takes to render! AoI also supports Photon Mapping - set Global Illumination to Photon Mapping in the Rendering Options.

Posted: Sat Apr 24, 2004 11:23 am
Thanks for the reply, dcuny! And thanks for those links; they are very interesting.

Well, I was using LightWave years ago, and I saw what this kind of rendering method gives. And yes, it's grainy! Like AO :wink:! And yes, it is very slowww :?

With Blender, it is quite easy to avoid graininess by adding motion blur. But of course, that again increases rendering time.

So, even if it's slow, even if it's grainy, could it be a good idea to add the MC method to Blender? I think it could...

And again, thanks for those links