Better shadow buffer implementation?

Blender's renderer and external renderer export

Moderators: jesterKing, stiv

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Better shadow buffer implementation?

Post by cessen »

Blender's shadow-buffer shadows need some work.

It would be nice to at the very least see the Woo trick implemented (also known as the mid-point trick), where the two closest depths of a pixel are averaged to get the final depth stored in the shadow buffer (this largely eliminates bias issues).
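A minimal sketch of the midpoint idea described above (a hypothetical helper, not code from Blender's source; the function name and input format are assumptions):

```python
def woo_midpoint_depth(depths):
    """Woo (midpoint) trick for one shadow-buffer pixel: average the
    two nearest surface depths along the light ray, so the stored
    value lies inside the occluder, well away from either surface.

    `depths` is a list of hit depths for this pixel, in any order.
    """
    if not depths:
        return float('inf')      # nothing occludes this pixel
    if len(depths) == 1:
        return depths[0]         # single surface: no midpoint exists
    nearest, second = sorted(depths)[:2]
    return (nearest + second) / 2.0
```

Because the stored depth sits halfway between the occluder's front and back faces, comparisons against either face land clearly on the lit or shadowed side without needing a large bias.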

Secondly, it would be nice to interpolate the depth between pixels of the shadow buffer (probably bi-linearly). This would lessen the need for the bias value even further, and would also make pixelization in the shadows a little less noticeable.
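The interpolation being suggested could look something like this (a sketch under assumed conventions: `buf` is a 2D list of depths with pixel centres at integer coordinates; not Blender's actual sampler):

```python
def sample_depth_bilinear(buf, u, v):
    """Bilinearly interpolate depth between the centres of the four
    shadow-buffer pixels surrounding the continuous coordinate (u, v),
    instead of treating each pixel as one constant depth."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(buf[0]) - 1)   # clamp at the buffer edge
    y1 = min(y0 + 1, len(buf) - 1)
    fx, fy = u - x0, v - y0             # fractional position in the cell
    top = buf[y0][x0] * (1 - fx) + buf[y0][x1] * fx
    bot = buf[y1][x0] * (1 - fx) + buf[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling between pixel centres returns a smoothly varying depth, so a receiving surface that grazes the buffer at a shallow angle no longer steps across hard per-pixel depth values.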

Anyway, I'm just throwing around ideas. I have no idea how one would go about implementing these (especially the Woo trick) in Blender's source, but I figured I'd bring the topic up anyway, just in case someone else might get inspired to actually do something. ;-)

z3r0_d
Posts: 289
Joined: Wed Oct 16, 2002 2:38 am
Contact:

Re: Better shadow buffer implementation?

Post by z3r0_d »

cessen wrote:Blender's shadow-buffer shadows need some work.

It would be nice to at the very least see the Woo trick implemented (also known as the mid-point trick), where the two closest depths of a pixel are averaged to get the final depth stored in the shadow buffer (this largely eliminates bias issues).
that only works when everything receiving shadows also casts them

blender doesn't work that way

if blender did that, objects that receive shadows but don't cast them could very easily pass through the stored depth value where a shadow starts

essentially, because you can have objects which receive shadows but don't cast them, the value in the shadow buffer MUST be the value of the closest face, else there will always be that problem
cessen wrote:Secondly, it would be nice to interpolate the depth between pixels of the shadow buffer (probably bi-linearly). This would lessen the need for the bias value even further, and would also make pixelization in the shadows a little less noticeable.
I'm confused as to what you suggest

uhh, oversampling:
http://mywebpage.netscape.com/YinYangEv ... shadow.png
or simply interpolating, sort of:
http://mywebpage.netscape.com/YinYangEv ... ow_ati.png

well anyway, I guess it's just a faster way of adding more samples, except it wouldn't allow the shadow to be blurred as much as is currently possible

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen »

essentially, because you can have objects which receive shadows but don't cast them, the value in the shadow buffer MUST be the value of the closest face, else there will always be that problem
Hmm. Well, like I said, I don't have a very good sense of how Blender's shadow buffer implementation works. I assume, then, that at the moment it only bothers to render shadow-casting objects to the shadow buffer, rather than trying to keep track of what to ignore. That makes sense.

I suppose, then, in order to implement the Woo trick, that would have to be changed. If it were changed such that it still rendered the non-shadow-casting depths, but didn't store them unless they were the second-closest depth for a pixel, then the Woo trick would still work.
The implementation for that would be a bit annoying, and would involve keeping track of no less than three depth values per pixel, but it would still be doable. Ideally one would render the shadow-casting objects in a first pass, and the non-shadow-casting objects in a second.
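The per-pixel rule being proposed could be sketched like this (a hypothetical helper, not Blender's code: the nearest stored depth must come from a shadow caster, but the second depth may come from any surface, so the midpoint never reaches past a receive-only face):

```python
def woo_depth_with_receivers(caster_depths, receiver_depths):
    """Midpoint trick adjusted for receive-only surfaces: take the
    nearest caster, then the nearest surface of ANY kind behind it,
    and store their midpoint."""
    if not caster_depths:
        return float('inf')              # no occluder at all
    nearest = min(caster_depths)
    behind = [d for d in caster_depths + receiver_depths if d > nearest]
    if not behind:
        return nearest                   # nothing behind the caster
    return (nearest + min(behind)) / 2.0
```

For example, with a caster's front and back faces at depths 2.0 and 6.0 and a receive-only floor at 3.0 between them, the stored depth becomes 2.5 rather than 4.0, so the floor correctly falls on the shadowed side.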

As for interpolation, what I'm suggesting is that instead of simply assuming the entire area of a shadow-buffer pixel has the same depth, you interpolate the depths between the mid-points of the shadow buffer pixels. It's simple interpolation. And it wouldn't interfere with blurred shadows from the shadow buffer, because that's based on taking multiple samples of the shadow buffer itself only *after* the shadow buffer has been rendered already.

z3r0_d
Posts: 289
Joined: Wed Oct 16, 2002 2:38 am
Contact:

Post by z3r0_d »

cessen wrote:
essentially, because you can have objects which receive shadows but don't cast them, the value in the shadow buffer MUST be the value of the closest face, else there will always be that problem
Hmm. Well, like I said, I don't have a very good sense of how Blender's shadow buffer implementation works. I assume, then, that at the moment it only bothers to render shadow-casting objects to the shadow buffer, rather than trying to keep track of what to ignore. That makes sense.

I suppose, then, in order to implement the Woo trick, that would have to be changed. If it were changed such that it still rendered the non-shadow-casting depths, but didn't store them unless they were the second-closest depth for a pixel, then the Woo trick would still work.
The implementation for that would be a bit annoying, and would involve keeping track of no less than three depth values per pixel, but it would still be doable. Ideally one would render the shadow-casting objects in a first pass, and the non-shadow-casting objects in a second.
actually, you'd only need two shadow buffers

probably one to have the closest values, and the other to have the next closest.

once you've rendered everything to both of them, you'd average the values in one of the buffers.

it probably would be kind of awkward, though: a shadow-receiving face can't be the front face in a shadow buffer [because then it could cast shadows], but can only be the face used as the second closest for averaging.

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen »

z3r0_d wrote:actually, you'd only need two shadow buffers

probably one to have the closest values, and the other to have the next closest.
I think that only applies if you can be guaranteed that you're rendering to the shadow buffer in a front-to-back order. If it's random, or a back-to-front order, then I think you need to have three shadow buffers (I'm not sure how Blender renders to the shadow buffer, mind you).

For instance, let's say you have three depth values, one of which only receives shadows. And let's say that the non-shadow-casting one is closest.

Say we render the two shadow-casting depths first (and one is stored in each shadow buffer). When we receive the closer non-shadow-casting value, what do we do?
We could discard it, leaving the two other values there. But then what if we received a shadow-casting depth that was closer than all of them? We would have then lost the second-closest value (the non-shadow caster).
Or we could keep it, storing it in the shadow buffer for the closest value. But then if we *didn't* receive a shadow-casting depth that's closer, we would also be doing the incorrect thing.

So I think we need three shadow buffers, unless Blender renders the shadow buffer in a precise front to back order.
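The three-buffer argument can be made concrete with a sketch (hypothetical helpers, not Blender's code): keep the three nearest (depth, casts) pairs per pixel, in any submission order, and resolve the stored value at the end.

```python
def insert_depth(slots, depth, casts):
    """Keep the three nearest (depth, casts) pairs for one pixel.
    With only two slots, a receive-only surface stored as 'nearest'
    would be lost when a closer caster arrived later."""
    slots.append((depth, casts))
    slots.sort()
    del slots[3:]          # retain only the three nearest

def resolve(slots):
    """Final stored depth: midpoint of the nearest caster and the
    next surface (caster or not) behind it."""
    casters = [d for d, c in slots if c]
    if not casters:
        return float('inf')
    front = casters[0]                       # slots is depth-sorted
    behind = [d for d, c in slots if d > front]
    return (front + behind[0]) / 2.0 if behind else front
```

Running cessen's scenario through this, with the surfaces arriving in an arbitrary order, all three values survive and the resolve step still has both the closest caster and the receiver behind it available.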

z3r0_d
Posts: 289
Joined: Wed Oct 16, 2002 2:38 am
Contact:

Post by z3r0_d »

why not back to front?

and only render to one buffer in the same manner, but copy the old value to the secondary buffer when doing so

then, go through the shadow-receiving faces, and if further than the value in the first buffer, but closer than the value in the second buffer, write the value into the second buffer

then average the two buffers

front to back ordering minimizes overdraw though, and it is not very quick to ensure a perfect front to back or back to front order.

I'll try to think about this one more later

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen »

Oh, oops. Silly me. I was assuming that shadow-casting and non-shadow-casting objects would be rendered to the shadow buffer in a single pass, even though I had suggested otherwise.

Yes, if you rendered all of the shadow-casting objects in a first pass, and the non-shadow-casting objects in a second pass, then you would only need two shadow buffers, regardless of the depth-order of rendering (even random depth-ordered rendering would work in that case). You are absolutely right.
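The two-pass, two-buffer scheme the thread converges on could be sketched like this (a toy version with one depth list per pixel; function name and data layout are assumptions, not Blender's renderer):

```python
def build_shadow_buffers(caster_depths, receiver_depths):
    """Pass 1 fills `near` and `second` with the two nearest caster
    depths (any submission order works). Pass 2 lets receive-only
    depths tighten `second` without ever entering `near`. Returns
    the averaged shadow map, one value per pixel."""
    inf = float('inf')
    n = len(caster_depths)
    near = [inf] * n
    second = [inf] * n

    # Pass 1: shadow casters only.
    for i, depths in enumerate(caster_depths):
        for d in depths:
            if d < near[i]:
                near[i], second[i] = d, near[i]
            elif d < second[i]:
                second[i] = d

    # Pass 2: receive-only surfaces may only become the second depth.
    for i, depths in enumerate(receiver_depths):
        for d in depths:
            if near[i] < d < second[i]:
                second[i] = d

    # Average the pairs; a pixel with one surface keeps its depth.
    return [a if b == inf else (a + b) / 2.0
            for a, b in zip(near, second)]
```

Because the casters are finished before any receiver is considered, no receiver can displace a caster from `near`, which is exactly why two buffers suffice once the rendering is split into two passes.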
