Image based motion blur

Blender's renderer and external renderer export

Moderators: jesterKing, stiv

env
Posts: 12
Joined: Thu Oct 17, 2002 9:58 pm
Location: Italy
Contact:

Image based motion blur

Post by env » Tue Feb 08, 2005 12:03 pm

I wrote a short paper about the possibility of implementing image-based motion blur in Blender. I'm not a coder, so I collected different sources to give a general idea of the feature, along with some proposals for making it work as a post-process effect in the Blender Sequencer.
Here's the link to the paper:
http://www.enricovalenza.com/mb.html

Thanks for the attention

EnV

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Tue Feb 08, 2005 3:01 pm

Hear ye, hear ye, this man speaks the truth.

After careful observation of much good animation, one of the conclusions I have come to is that good motion blur is essential to a watchable animation. Of course, it's not the only thing you need, but it gives an amazing quality boost if used properly.

env, I read your proposal and the linked documentation. I really like the approach that combines a lower level of frame-based blur (like Blender does now) with image-based blur. I could see that working very well in Blender. Also, if vectorial blur were available as a sequence strip, you could use it in any combination you liked with the frame-based blur Blender already has, tweaking it to your scene's complexity and your time constraints.

The real hurdle to doing this would be getting the motion data out of Blender. As the image-based system performs its evaluations on a pixel-by-pixel basis, the logical place to generate the motion data would be within the renderer. In other words, you would have to turn on an "include vector info" button in the Render buttons, and write the code that evaluated and constructed the secondary image representing the motion vector of each pixel. Once that was working, I think the actual sequence and code for blurring would be relatively simple.
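
To make that concrete, here's a minimal sketch of the post-process half, assuming the renderer could hand us a per-pixel velocity buffer; the function name and the (H, W, 2) pixels-per-frame layout are my own inventions, nothing Blender has:

Code: Select all

import numpy as np

def blur_with_velocity(image, velocity, samples=8):
    """image: (H, W, 3) float array; velocity: (H, W, 2) in pixels per frame."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            vx, vy = velocity[y, x]
            acc = np.zeros(3)
            # Average samples taken along this pixel's motion vector,
            # approximating a shutter held open around the frame.
            for i in range(samples):
                t = i / (samples - 1) - 0.5        # -0.5 .. +0.5 of a frame
                sx = min(max(int(np.rint(x + vx * t)), 0), w - 1)
                sy = min(max(int(np.rint(y + vy * t)), 0), h - 1)
                acc += image[sy, sx]
            out[y, x] = acc / samples
    return out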

So, as you're not a coder, and my plate is so full right now that stuff is falling off the table and onto the floor, the next step would be to collect research and ideas on how exactly one stores motion vector information in something resembling an image format in a useful way.

env
Posts: 12
Joined: Thu Oct 17, 2002 9:58 pm
Location: Italy
Contact:

Post by env » Tue Feb 08, 2005 4:26 pm

Hi harkyman.
I never thought of the vectorial motion data as image data (if I understand correctly); it actually could work great in combination with the lower level of frame-based blur, as you said. Also, I've heard somewhere that Ton is seriously thinking about rewriting Blender's renderer from scratch, so this could also be the right moment...
Being merely a freelance illustrator I can't take this any further myself :) , but I posted a link on the Italian site Kino3d to draw attention to it, and I'm going to do the same on ElYsiun.
Thanks for your attention.

EnV

matt_e
Posts: 410
Joined: Mon Oct 14, 2002 4:32 am
Location: Sydney, Australia
Contact:

Post by matt_e » Tue Feb 08, 2005 4:35 pm

harkyman wrote:the next step would be to collect research and ideas on how exactly one stores motion vector information in something resembling an image format in a useful way.
Something like this would be a good start :)

http://www.alamaison.fr/3d/lm_2DMV/lm_2DMV.htm

bullx
Posts: 0
Joined: Mon Jan 05, 2004 9:25 pm

Post by bullx » Tue Feb 08, 2005 6:12 pm

Hi env, reading your paper about vectorial motion blur was really interesting.
A while ago I gave up rendering an animation test with mblur on; it took far too long compared with a non-motion-blurred one.

I hope some coder can find the time to work out a solution for this, because this is one of those features that other BIG applications JUST have, and I can't be really, really happy playing with my softbody animation or with my new-very-advanced-feature knowing that I'm missing things like motion blur or DOF.

Hey, don't misunderstand me... softbodies are great :D :D but having these features would make Blender more "complete".

Thanks for your time, EnV & developers & contributors.
bye

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Tue Feb 08, 2005 7:21 pm

Good motion blur really is needed for any sense of plausible realism, even in non-photo real rendering. Like it says in one of the papers env linked to, we're so used to seeing it everywhere, even with our own eyes, that we don't even notice it until it's missing.

env
Posts: 12
Joined: Thu Oct 17, 2002 9:58 pm
Location: Italy
Contact:

Post by env » Wed Feb 09, 2005 9:33 am

matt_e wrote:Something like this would be a good start :)

http://www.alamaison.fr/3d/lm_2DMV/lm_2DMV.htm

Hey, that's exactly what I saw some time ago and couldn't find anymore! :) Thanks for the link, broken.

Harkyman, I also tried to post this in the feature requests at Blender3d.org, but I didn't succeed... does anyone know how to do it?

EnV

joeri
Posts: 96
Joined: Fri Jan 10, 2003 6:41 pm
Contact:

Post by joeri » Wed Feb 09, 2005 10:18 am

Nice stuff.
I was wondering whether it would be important to use the z-buffer as well.
Should the 2D blur routine do the back pixels first?

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Wed Feb 09, 2005 1:11 pm

It shouldn't matter what order they're done in, as long as you're generating your blurred image into a separate buffer. The additive transforms on the pixels in the blurred image should be uniform, and probably reversible, so order wouldn't matter.

The best link in the material env posted was the paper on generating motion blur strictly by evaluating the vectorial change between two static images, such as those that would be produced through stop-motion animation. Their results are really fantastic, but when you go to their website, you see a big fat "Patent Pending" notice. Damn.

bullx
Posts: 0
Joined: Mon Jan 05, 2004 9:25 pm

Post by bullx » Wed Feb 09, 2005 2:01 pm

Let me make sure I understand.
We need some sort of information from the moving object in Blender to drive the post-pro motion blur parameters, am I right?

---I'm not a coder---

So we need something like direction and speed (distance/frames).

Maybe a hypothetical script could be based on the movement of one (or more) empties parented to the object.

Let's say we have a cube that moves from A to B in 20 frames.

If the info we have to collect is about direction, we have to evaluate the variation between the spatial XYZ of A & B.

Then we need to collect some info about speed, so we can evaluate the distance between the two points in relation to the number of frames the cube needs to finish its path.

---in real Blender life this would be much more complicated, e.g. if my object is a rotating cube or the path is not a straight line---
But if the first part of my post makes sense, you could solve this somehow, using more than one empty, or evaluating distance and speed every x frames.

Just some simple ideas.
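
For what it's worth, a toy version of the cube example, with hard-coded positions standing in for the empties' locations:

Code: Select all

import math

def motion_from_positions(a, b, frames):
    """a, b: (x, y, z) world positions; frames: frames taken to go a -> b."""
    dx, dy, dz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    speed = dist / frames                                  # units per frame
    direction = (dx / dist, dy / dist, dz / dist) if dist else (0.0, 0.0, 0.0)
    return direction, speed

# The cube example: A to B in 20 frames.
print(motion_from_positions((0, 0, 0), (10, 0, 0), 20))  # (1.0, 0.0, 0.0), 0.5
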
bye

env
Posts: 12
Joined: Thu Oct 17, 2002 9:58 pm
Location: Italy
Contact:

Post by env » Wed Feb 09, 2005 5:23 pm

Hi Bullx. :)
Mm... probably a nice idea, but this could work only for a whole "rigid" object moving in space, e.g. a spaceship or a car; what about a character, where the swinging of the arms must get more blurring than the shoulders, and so on?
As harkyman said, the best way could be to evaluate the differences between two static poses (two frames, of course), that is, between each frame and the following one; but "how" is a different question...

EnV

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Wed Feb 09, 2005 5:55 pm

Here's how I see it:

1. Scanline renderer determines what face in the scene is closest to the viewplane according to the z-buffer, then calculates the proper color for that pixel. (This occurs already.)

2. Renderer then determines the location of the center of the face.

3. Calculations are done to determine the timeslice to be evaluated (essentially, how long to leave the "shutter" open), which would be a combination of direct and indirect user settings.

4. Blender calculates the position of the face for the time-slices before and after the current one, then uses the difference in positions between the three face locations to calculate a motion vector for the pixel being evaluated in step 1.

5. This motion vector is stored in a secondary buffer that rides along with the standard render buffer, and a pretty simple blurring algorithm applies it after the render is finished. (A sketch of steps 2 through 4 follows below.)
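
A rough, hypothetical sketch of steps 2 through 4; project() is a stand-in for the real camera transform, and the timeslices are just one frame on either side:

Code: Select all

def motion_vector(pos_prev, pos_now, pos_next, project, shutter=0.5):
    """pos_*: face centre at the slice before, at, and after the frame."""
    xp, yp = project(pos_prev)
    xc, yc = project(pos_now)
    xn, yn = project(pos_next)
    # Average the backward and forward screen-space differences (a central
    # difference) and scale by how long the virtual shutter stays open.
    vx = ((xc - xp) + (xn - xc)) * 0.5 * shutter
    vy = ((yc - yp) + (yn - yc)) * 0.5 * shutter
    return vx, vy

# A trivial orthographic "projection" just to make the sketch runnable.
ortho = lambda p: (p[0], p[1])
print(motion_vector((0, 0, 5), (1, 0, 5), (2, 0, 5), ortho))  # (0.5, 0.0)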

The question then becomes: how to store this motion vector info? Each pixel of the final image will have a 3D vector associated with it. New file format? I suppose you could normalize the info into 3 8-bit channels, and include the overall normalizing transform in a header so the true scale of motion could be reconstructed later, but you'd lose a lot of resolution in your info. When you're doing blurring, though, it may not matter a lot.
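
For instance, a hypothetical encode/decode pair along those lines; the single scale value plays the role of the header's normalizing transform, and the 256-step quantization is exactly where the resolution gets lost:

Code: Select all

import numpy as np

def encode_vectors(vectors):
    """(H, W, 3) float motion vectors -> (header scale, uint8 image)."""
    scale = float(np.abs(vectors).max()) or 1.0  # the header's normalizing value
    quantized = np.rint(vectors / scale * 127.0 + 128.0)
    return scale, quantized.astype(np.uint8)

def decode_vectors(scale, image):
    # Reconstruct the true scale of motion; each component now has only
    # 256 possible values, which is the resolution loss mentioned above.
    return (image.astype(np.float32) - 128.0) / 127.0 * scale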

In fact, it almost seems like this would need to be an integrated render option, like in a REYES renderer, and not post production, unless you can pass an arbitrary amount of scene info to the sequence editor.

Argggghhhhh. Why am I thinking about this? It makes me salivate, and I have no time to pursue it.

bullx
Posts: 0
Joined: Mon Jan 05, 2004 9:25 pm

Post by bullx » Wed Feb 09, 2005 6:35 pm

----:twisted: -----



hey harkyman, my friend, i know you want it, ...yes i know...

can you resist the temptation to try if the "8 bit solution" can work? can you?

mmmmhh don't make me think how sweet a feature like this could be...

don't you want it?

/----:twisted: -----

-efbie-
Posts: 0
Joined: Wed Oct 27, 2004 9:47 pm

Post by -efbie- » Wed Feb 09, 2005 6:38 pm

Computing with the frame before would require rendering one more frame. I don't know if that is so useful...

I would suggest using an "RGBA" scheme for the vector information: the RGB indicates the vector direction, and the A is used as a scaling factor. It would be more precise this way :). And couldn't this image be included as a new channel in the output image? I heard that OpenEXR support will be included; it could hold a special layer like this.
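
In code terms, the idea might look something like this sketch (names made up): direction quantized into RGB around a 128 midpoint, magnitude in A against one global maximum stored alongside:

Code: Select all

import numpy as np

def encode_rgba(vectors):
    """(H, W, 3) float vectors -> (max length, (H, W, 4) uint8 image)."""
    length = np.linalg.norm(vectors, axis=-1, keepdims=True)
    unit = np.divide(vectors, length, out=np.zeros_like(vectors),
                     where=length > 0)
    max_len = float(length.max()) or 1.0
    rgb = np.rint(unit * 127.0 + 128.0).astype(np.uint8)             # direction
    a = np.rint(length[..., 0] / max_len * 255.0).astype(np.uint8)   # magnitude
    return max_len, np.dstack([rgb, a])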

Speed information is useful not only for motion blur but also for color adjustments, because the eye desaturates moving objects. Being able to adjust this would add more realism :)

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Wed Feb 09, 2005 7:15 pm

RGBA with A being the scaling factor for each vector would definitely work.

And you wouldn't have to render the frames on either end of the time slice. You just need to query the location in 3D space of the center of the appropriate face. So you'll only be doing transforms, and you can cache the ones you've already done for this frame, so other pixels rendered on the same face would just pull from the cache. You'd have to do just two queries for 3D location per visible face in the final render, but no more shading calculation than you were doing before. You know - you could do this with Python, just to start and just to generate the image as a proof of concept, as the whole thing could be face-based.
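
The cache could be as dumb as a dictionary keyed on (face, frame); face_position_at() below is purely hypothetical, standing in for whatever scene query we'd really use:

Code: Select all

# Positions for a face at the surrounding timeslices are computed once
# and reused by every later pixel that lands on the same face.
cache = {}

def face_endpoints(face_id, frame, face_position_at):
    key = (face_id, frame)
    if key not in cache:
        # Just two extra 3D location queries per visible face.
        cache[key] = (face_position_at(face_id, frame - 1),
                      face_position_at(face_id, frame + 1))
    return cache[key]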

Arggggghhh. This is tempting to pursue. And shut up bullx. :D My wife will absolutely kill me if I start working on something else in addition to the four things I'm working on right now.
