Faking G.I. through calculated bounce lights

Blender's renderer and external renderer export

Moderators: jesterKing, stiv

Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Faking G.I. through calculated bounce lights

Post by Toon_Scheur » Fri Sep 09, 2005 2:30 am

When modeling a scene, one could use bounce lights (standard point lamps) to fake G.I. a little. I was thinking: why not let Blender calculate (in Python, or in the source?) where to place the bounce lights?

I've seen pictures where only 4 bounce lights make a big difference. I gather it shouldn't be that hard to calculate on the fly where to place 10 to 20 bounce lights, using a simplified radiosity-like method. It doesn't have to be 100% physically accurate, just good enough to trick the eye.

What do you think of this?

Posts: 53
Joined: Fri Oct 18, 2002 1:35 am
Location: Oceanside, California

Post by Sutabi » Sat Sep 10, 2005 1:19 am

.... just use Blender's Radio Rendering with a Dome...

Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Sat Sep 10, 2005 2:59 am

Too expensive, and you still have to do some post work afterwards.

I'll see if something can be done in Python. I'm thinking of parameters like: number of bounces, number of lamps, type of lamp.

In animation the bounce lights would float around, changing intensity, color and range to keep the G.I. going.

Posts: 53
Joined: Fri Oct 18, 2002 1:35 am
Location: Oceanside, California

Post by Sutabi » Sat Sep 10, 2005 5:21 am

Hah... you're on crack then... you won't get much detail, since radio is baked into vertex colors: the more vertices you have, the more detail you get. If you haven't noticed, those vids use a 4k-polygon Cornell box, something highly unlikely in a user scene, as we tend to optimize our scenes. Simply use Blender's radio [and subsurf]. Also please note that the PDF makes use of OpenGL with shadows and so on, something that is highly unlikely to be used in Blender, unless you are software rendering.

A Python implementation would only be helpful if you used OpenGL through the PyOpenGL and glx modules for the interactive scene and rendered it in its own window, as the BGL module does not have the extensions used for ray shadows.

Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Sat Sep 10, 2005 9:43 pm

JEEZ! Thanks for torpedoing my idea.
I was just giving a vague example of the direction: with bounce lights you can achieve a lot of realism. Today this is a manual process, but if it can be automated, one can achieve G.I.-like realism.

I know I said "radiosity", but that was just to illustrate the idea. I don't care if it is radiosity, light density fields, path tracing, photon mapping or whatever G.I. technique is out there.

My point is:
1) Decide where to place the bounce lights.
2) Figure out where the light rays from the light sources bounce.
3) Figure out what the absorption is.
4) Evaluate whether a bounce affects a bounce light already placed somewhere. If so, adjust the color and intensity of that bounce light accordingly.
5) Store parameters (maybe field density parameters) to act as a lookup table for changing the position, intensity and color of the bounce lights when animating (which should be fast; in G.I./radiosity the long calculations start from scratch again).

Now, this process can be programmed as a very simplified/naive routine that figures out a rough G.I. Why does it have to be a simplified routine? Because once the bounce lights are placed, they keep illuminating the surrounding areas, whereas with real G.I. you have to keep recalculating over those areas. I assume that the coloring a bounce light gives the pixels in its surrounding area will visually match a G.I. solution to a high degree.
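The numbered steps above already read like pseudocode. A minimal sketch of steps 1-4 in plain Python, with no Blender API involved: the infinite-plane scene, `ray_plane`, and all parameter names are made up for illustration, and the "merge nearby hits" rule stands in for step 4.

```python
import math
import random

def ray_plane(origin, direction, p0, n):
    """Distance along the ray to an infinite plane (None if parallel or behind)."""
    denom = sum(d * c for d, c in zip(direction, n))
    if abs(denom) < 1e-9:
        return None
    t = sum((a - b) * c for a, b, c in zip(p0, origin, n)) / denom
    return t if t > 1e-6 else None

def place_bounce_lights(lamp_pos, lamp_rgb, planes, n_rays=200,
                        merge_radius=1.0, absorption=0.5, seed=1):
    """Naive single-bounce pass: shoot random rays from the lamp, tint each
    first hit by the surface colour and absorption, and merge nearby hits
    into one bounce light (position/colour averaged)."""
    rng = random.Random(seed)
    lights = []  # each entry: [position, rgb, hit_count]
    for _ in range(n_rays):
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        d = [c / norm for c in d]
        # step 2: find the nearest surface this ray bounces off
        best = None
        for p0, nrm, rgb in planes:
            t = ray_plane(lamp_pos, d, p0, nrm)
            if t is not None and (best is None or t < best[0]):
                best = (t, rgb)
        if best is None:
            continue
        t, rgb = best
        pos = [o + t * c for o, c in zip(lamp_pos, d)]
        # step 3: absorption tints and dims the bounced light
        tint = [(1.0 - absorption) * a * b for a, b in zip(lamp_rgb, rgb)]
        # step 4: fold the hit into a nearby existing bounce light, if any
        for light in lights:
            if math.dist(pos, light[0]) < merge_radius:
                w = light[2]
                light[0] = [(w * x + y) / (w + 1) for x, y in zip(light[0], pos)]
                light[1] = [(w * x + y) / (w + 1) for x, y in zip(light[1], tint)]
                light[2] = w + 1
                break
        else:
            lights.append([pos, tint, 1])
    return lights
```

In a real scene the planes would be the actual mesh faces and the resulting entries would become point lamps; the sketch only shows that steps 1-4 can be expressed in a few dozen lines.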

And now comes the catch: I can only ASSUME that this method will not diverge too much from a physically correct G.I. calculation. Even so, I very much hope that even with a large deviation, the human eye will accept the result as "a photorealistic image".

I think this argument has merit, because we have all seen many still images by great artists that certainly look photorealistic without any use of G.I., just plain old scanline or raytracing. That proves that by cleverly placing lights around, one can fake G.I.

And thus my argument comes full circle.

P.S. I don't want to hear about light domes or arrays of spotlights. You still only get one bounce, capisce?

Posts: 289
Joined: Wed Oct 16, 2002 2:38 am

Post by z3r0_d » Sat Sep 10, 2005 10:52 pm

Toon_Scheur wrote:JEEZ! Thanks for torpedoing my idea.
I was just giving a vague example of the direction: with bounce lights you can achieve a lot of realism. Today this is a manual process, but if it can be automated, one can achieve G.I.-like realism.
one would assume that if it were so simple to automate, there'd be a research paper on it

[not that that is exactly true, but with what we know it isn't trivial. If your polygon count in blender is sufficiently low you can easily use radiosity for ambient lighting, and you'd also be able to change the lighting over time [which 'bounce lights' don't make as easy]]

Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Sun Sep 11, 2005 2:46 am

I'm just claiming that G.I. is precision number crunching, and that placing bounce lights through some calculation should yield visually acceptable results.

Jesus H. Christ! You guys are very critical without reason. Now I understand why Eeshlo didn't announce his Ambient Occlusion (which produces nice results, and is also fake G.I.). He would have gotten reactions like:
"Radiosity is better" (yeah... DUH!) or "if it were a good thing, why haven't I seen research papers on it?"

I'll leave it at this: when I have time, I'll try to make something of it. Or maybe not.

Posts: 289
Joined: Wed Oct 16, 2002 2:38 am

Post by z3r0_d » Sun Sep 11, 2005 8:39 am

Toon_Scheur wrote:I'm just claiming that G.I. is precision number crunching, and that placing bounce lights through some calculation should yield visually acceptable results.
I like to be a pessimist, please don't take me as being too critical... rambling is about all you'll get out of me

skip down to the second hyphen line to ignore my more pointless rambling

----------- begin excessive ramble mode -----------

okay, placement of 'bounce lights' isn't really something that's trivial. Try to picture the ideal situation:

the artist places several [static] lamps from which you wish to calculate good positions for 'bounce lights'. You'd probably not want to limit the type of lamp so bounce light calculation would have to allow point, spot, sun, and area lamps. These lamps would remain in the final render, and would cast shadows.

essentially your 'bounce lamps' are fill lights...

your fill lamps mimic the reflection of direct light, and since the lamps can cast shadows you'd have to subdivide the mesh somewhat to know where the light is bouncing from

this sounds exactly like a radiosity calc

so you take the faces receiving light directly, cast light from them, and see where you get. Again you are casting shadows, but this time you are casting light from many more places [the scene has more faces than artist-placed lamps]. Then you do this again for another iteration; as an optimization you can cast reflected light from the faces that received more light before the ones that received less [which allows you to stop earlier with fewer artifacts]. This is still sounding exactly like radiosity
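That "shoot from the faces that received the most light first" optimization is exactly progressive-refinement radiosity. A toy Python sketch of the idea, with all patch geometry abstracted into a form-factor matrix `F` that is assumed given (in a real implementation it would come from a hemicube or ray-cast visibility pass):

```python
def progressive_radiosity(emission, reflectance, F, n_shots=100, eps=1e-6):
    """Progressive refinement: repeatedly pick the patch with the most
    unshot energy and distribute it to all other patches via the
    form factors F[i][j] (fraction of patch i's light reaching patch j)."""
    n = len(emission)
    radiosity = list(emission)   # total light leaving each patch
    unshot = list(emission)      # light not yet bounced onward
    for _ in range(n_shots):
        i = max(range(n), key=lambda k: unshot[k])
        if unshot[i] < eps:
            break  # converged: almost no energy left to shoot
        for j in range(n):
            if j == i:
                continue
            dB = reflectance[j] * F[i][j] * unshot[i]
            radiosity[j] += dB
            unshot[j] += dB
        unshot[i] = 0.0
    return radiosity
```

For two parallel patches with F = [[0, 0.5], [0.5, 0]], reflectance 0.5 each and only patch 0 emitting, the solution converges geometrically in a handful of shots, which is why shooting from the brightest patch first lets you stop early.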

okay, so how would exactly we get to the ideal positions/settings of fill lamps? how exactly would a fill lamp work?

your fill lamp would be a sphere lamp, probably set to diffuse-only and casting no shadows. It would be placed near regions which receive light indirectly, like the undersides of bridges or tables. It isn't placed on a surface, but rather near one, so there isn't some clear way of deciding where to place it.

How about a simple example: a room with a table. On the table, facing the ceiling, is the lamp. Light bounces off the ceiling and lights the table and walls. A single sphere lamp would be insufficient to light the table and walls properly [the table would be too bright, or the walls too dark]; using a smaller radius, you'd need several for each wall and perhaps two for the table. The next bounce of light would get under the table and onto the remainder of the ceiling, which would be lit in a similar fashion. A more complex situation ...

blah, I'm bored

I've been playing with radiosity in the renderer, in a simple room with suzanne, an emissive plane, and a table. The results I'm getting are horrible. At subsurf 1 the radio calc decides that regions of faces have a large enough angle to be a hard edge. At subsurf 2 the hemires doesn't go high enough to avoid artifacts. Wait, cool! it uses the per-object autosmooth angle for that... so I can use subsurf 1. Render times are coming in at 10 seconds for this incredibly simple scene, and it would be easy for me to move the light source [an invisible plane] around over time.

a research paper would have shown both that the idea is feasible [with results] and how to properly implement it, some of its drawbacks, and possible ideas of where to go from this point.

------------------- begin less-ramble mode ------------------------

after playing with the idea, the drawbacks seem to be:
* static: the lights providing the ambient lighting can't move without [expensive] recalculation
* the calculation is far from physically correct [it's more of a compression of the radiosity solution]
* dynamic objects don't work well with this [if a character moves near one of these 'bounce lights' it would look very odd; same if the character were not lit by these lights when at the proper distance from them]

actually, that last one is probably the most annoying issue with this.

so, what does this get you:
* very fast renders, once the calculation is done

hrm, seems like you might as well do a radiosity calculation and just keep the vertex-color-lit mesh around. Use separate lights for your characters and stuff [you'd probably do that with bounce lights anyway]

I personally don't see a way you could make this work better than radiosity

Posts: 0
Joined: Mon Jan 27, 2003 11:22 pm

Post by dcuny » Mon Sep 12, 2005 7:36 pm

It seems to me that the basic idea here is quite sound. You should be able to run a quick photon lighting simulation, and pare the lights down to some fraction based on how much they contribute to the scene.

It doesn't matter if the result is correct - just as long as it's plausible, and remains consistently lit for the duration of the scene.

The catch is when you animate: objects interact with the lighting, or the actual light source may move. This can cause some unintended artifacts.

You might want to have a look at Instant Radiosity, which is a similar idea. The catch with Instant Radiosity is that the photons need to follow similar paths in each frame. That can be done by using quasi-random numbers, which generate sequences with properties similar to random numbers, but can reproduce the same set of values again and again.
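As a concrete example of such a quasi-random sequence, here is the standard Halton low-discrepancy construction in Python (the function name is mine). Because it is purely deterministic, sample `index` always yields the same value, so photon paths can be reproduced frame after frame:

```python
def halton(index, base):
    """The index-th value (index >= 1) of the Halton sequence in the given
    base: reverse the base-b digits of index and mirror them across the
    radix point. Deterministic, so the same index always gives the same
    sample, unlike a pseudo-random generator with hidden state."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result
```

In base 2 the sequence runs 0.5, 0.25, 0.75, 0.125, ... progressively filling [0, 1) evenly; pairing base 2 with base 3 gives well-spread 2D samples for photon directions.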

Similar to your suggestion, Instant Radiosity can have unintended artifacts when objects interact with lighting.

Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA

Post by harkyman » Mon Sep 12, 2005 10:21 pm

Why doesn't someone do a test as proof of concept?

Do a scene (NOT the Cornell box - go find someone's nice Yafray living room render and ask to borrow the .blend file) with hand-placed lamps to simulate bounce lighting. See how good it looks compared to its GI cousin. How acceptable are the results?

If you can't get good enough results tweaking by hand, I'd be surprised if you can come up with an algorithm to do it for you, as it seems to me that it's more of an artistic, subjective evaluation of success/failure.

That said, if you CAN get good enough results, then you're onto something. Get a couple more scenes. In fact, get twenty more. Hand light them all until you're happy with them.

If there's a pattern, and you're any kind of engineer worth your poop, algorithms for automating that process you've been doing by hand should begin to suggest themselves. After that, you just write it up, hit alt-P and show us the magic!

Personally, I'd want to see a good proof of concept image or two before I spent one single minute writing code.

Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen » Sat Oct 15, 2005 1:31 am

z3r0_d wrote:one would assume that if it is so simple to automate that there'd be a research paper on it
There are research papers about similar things, I think. I'd have to check. They're not the exact same thing, of course, but similar.

As was mentioned by dcuny, photon mapping could be used quite effectively here to get a rough idea of the GI in the scene, and it would be quite fast. In most cases probably much faster than radiosity. However, the difficulty isn't in getting the rough GI. It's taking the rough GI and fitting light sources to it. That's not a trivial problem by any means--especially choosing how many light sources to create and where to place them. It's a very complicated multi-dimensional fitting problem.

Another possibility would be to have the user place the light sources and then have Blender try to estimate their color, intensity, etc. This would simplify the fitting calculations since you would only have to worry about the intensity, color, etc. of the light sources. Still not a trivial problem, but it would reduce the dimensionality.
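Since illumination at any sample point is linear in each lamp's intensity, this reduced problem is a linear least-squares fit. A sketch in plain Python under that assumption: the matrix `A` of per-lamp contributions at sample points would come from the renderer, but here it is simply assumed given, and the solver is a small normal-equations Gaussian elimination.

```python
def fit_lamp_intensities(A, target):
    """Least-squares fit of per-lamp intensities x so the lamps' combined
    contribution A x best matches a target (e.g. photon-mapped) illumination.
    A[i][j] = how strongly lamp j lights sample point i at unit intensity.
    Solves the normal equations (A^T A) x = A^T b by Gaussian elimination."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * target[k] for k in range(m)) for i in range(n)]
    # augmented matrix, elimination with partial pivoting
    M = [row[:] + [b] for row, b in zip(AtA, Atb)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    # lamps can't have negative intensity; clamping is the crude fix here
    return [max(0.0, v) for v in x]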

Toon_Scheur's idea isn't entirely without merit, it's just a lot more involved than he realizes. I think he's run into the problem that a lot of people do (even programmers) where they assume that something that's simple and intuitive for people to do must also be simple for a computer to do. Then they try to figure out how to actually do it and they realize that it's not easy at all.

Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Thu Oct 20, 2005 2:37 am

And how right you are. I've been fiddling with some formulas and it is difficult indeed; it is not trivial at all. But you said that rough photon mapping can give you an idea of the rough G.I. Maybe combine that with some simplifications, like letting the user place an x amount of lamps at suggested locations, and then firing off the second step of the calculation.

With this method I think the results are too unpredictable to be used for total artistic control of a lighting rig, but it could yield nice results when it doesn't matter exactly how the lighting looks, as long as it looks nice. Good for noobs, among others, when you just want to model and see a quick result with a nice light setup.

Posts: 0
Joined: Wed Jun 02, 2004 6:34 pm

Post by osxrules » Fri Nov 04, 2005 5:03 am

I had actually considered this idea too. My first thought was that you could simulate the bounce light from a wall using a large area light. But the problem, of course, when you extend that to more complex scenes is that the bounce light isn't uniform. After reading some material on radiosity, the idea sounded so similar that it looked like that was the only way to do it.

However, radiosity has problems in that it is slow and seems to cause a lot of artifacts. Also, the old-school artists at the big companies were able to fake such lighting long before GI was available, so I think there has to be a way. The problem, I think, is that humans are able to fake scenes because they know what the scenes should look like. A computer doesn't. So trying to write a program to fake GI properly could be near impossible; you might have to program some AI first. I also assumed that far smarter people than me (anything from monkey upwards) have thought the idea up too, and if they came up with radiosity then that must be the best automated way to do it.

Radiosity isn't all that bad anyway. For animated scenes, the majority of the radiosity should be baked into the scene because it's static. Ideally you would render it to a separate pass and composite at the end so you could adjust shadow properties.

I think someone said that any solution along the lines outlined would just be a compression of the radiosity solution, and I partly agree. However, I think that by using mathematical objects such as lights, there would be far fewer artifacts than with radiosity, and fewer calculations, so I still think it would be a good thing to look into.

I think the problem with the current radiosity setup is that it is discrete, so artifacts are always there unless a lot of samples are used. What if, instead, surfaces were treated as continuous and the intensity variations were functions? The geometry of every object in the scene could be simplified to its most basic counterpart - a head simplified to a sphere, a leg to a cylinder. It's important that they are simple surfaces, for fast calculation. Then, instead of sending out discrete rays, you could have a non-uniform procedural texture generated in real time from the simplified scene geometry and then subsequently bounced again. The procedural texture would encode the intensity of the light.

Think of projecting a procedural texture onto a wall: let's say you have an area light shining onto a wall; if a person stands in front of it, all you usually get is a simplified outline of the body, so why not use simplified continuous maths objects to project shadows (radiosity makes really soft shadows anyway)? Then produce a mathematically defined texture and reproject it, always adding the contributions together. This should avoid discrete artifacts entirely. Also, there could be an option to blur the shadow outline based on object proximity - like the penumbra on spotlights.
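For what it's worth, such simplified "maths object" occluders do have closed-form coverage terms. For example, the solid angle a sphere subtends from a point has a textbook formula, which could serve as the analytic soft-shadow term for a "head simplified to a sphere" (function names here are mine, for illustration):

```python
import math

def sphere_solid_angle(center, radius, point):
    """Solid angle (steradians) a sphere subtends from an outside point:
    2*pi*(1 - sqrt(1 - (r/d)^2)), where d is the centre-to-point distance."""
    d = math.dist(center, point)
    if d <= radius:
        return 4.0 * math.pi  # viewer inside the sphere: everything covered
    return 2.0 * math.pi * (1.0 - math.sqrt(1.0 - (radius / d) ** 2))

def coverage(center, radius, point):
    """Fraction (0..1) of all directions the sphere blocks, a cheap
    analytic stand-in for a soft-shadow/occlusion term."""
    return sphere_solid_angle(center, radius, point) / (4.0 * math.pi)
```

Being a smooth function of distance, this coverage term falls off continuously as the occluder recedes, which is exactly the artifact-free behaviour the continuous approach is after.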

At the end, you would get a mathematically well-defined procedural texture describing the light intensities over the whole scene, and then all you have to do is render the brightness of those points, distributing the global light accordingly.
