Image-based motion blur
I wrote a short paper about the possibility of implementing image-based motion blur in Blender. I'm not a coder, so I collected different sources to get a general idea of the feature, plus some proposals for making it work as a post-process effect in the Blender Sequencer.
Here's the link to the paper:
http://www.enricovalenza.com/mb.html
Thanks for your attention.
EnV
Hear ye, hear ye, this man speaks the truth.
After careful observation of much good animation, one of the conclusions I have come to is that good motion blur is essential to a watchable animation. Of course, it's not the only thing you need, but it gives an amazing quality boost if used properly.
env, I read your proposal and the linked documentation. I really think the approach that combines a lower level of frame-based blur (like Blender does now) with image-based blur is great. I could see that working very well in Blender. Also, if vectorial blur were available as a sequence strip, you could use it in any combo you liked with the frame-based blur Blender already has, tweaking it to your scene's complexity and your time constraints.
The real hurdle to doing this would be getting the motion data out of Blender. As the image-based system performs its evaluations on a pixel-by-pixel basis, the logical place to generate the motion data would be within the renderer. In other words, you would have to turn on an "include vector info" button in the Render buttons, and write the code that evaluated and constructed the secondary image representing the motion vector of each pixel. Once that was working, I think that the actual sequence and code for blurring would be relatively simple.
So, as you're not a coder, and my plate is so full right now that stuff is falling off the table and onto the floor, the next step would be to collect research and ideas on how exactly one stores motion vector information in something resembling an image format in a useful way.
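To show why the blurring step itself should be the easy part once a per-pixel vector buffer exists, here is a minimal sketch in plain Python/NumPy. Nothing here is Blender code: vector_blur and both input buffers are invented names, and a real pass would weight samples and handle occlusion edges; the principle is just a directional smear per pixel.

import numpy as np

def vector_blur(image, motion, samples=8):
    """image: HxWx3 float array; motion: HxWx2 per-pixel screen-space vector in pixels."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y, x]
            acc = np.zeros(3)
            for i in range(samples):
                # spread samples symmetrically across the virtual "shutter" interval
                t = i / (samples - 1) - 0.5
                sx = min(max(int(round(x + t * dx)), 0), w - 1)
                sy = min(max(int(round(y + t * dy)), 0), h - 1)
                acc += image[sy, sx]
            out[y, x] = acc / samples
    return out

Because it never re-renders the scene, a pass like this stays cheap no matter how many "shutter" samples you take, which is exactly the appeal over rendering many sub-frames.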
Hi harkyman.
I never thought of the vectorial motion data as image data (if I understand correctly); it could actually work great in combination with the lower level of frame-based blur (as you said). Also, I've heard somewhere that Ton is seriously thinking about rewriting Blender's renderer from scratch, so this could also be the right moment...
Being merely a freelance illustrator, I can't take this any further myself, but I posted a link on the Italian site Kino3d to draw attention to it, and I'm going to do the same on ElYsiun.
Thanks for your attention.
EnV
harkyman wrote: the next step would be to collect research and ideas on how exactly one stores motion vector information in something resembling an image format in a useful way.
Something like this would be a good start:
http://www.alamaison.fr/3d/lm_2DMV/lm_2DMV.htm
Hi env, reading your paper about vectorial motion blur has been really interesting.
Just a while ago I gave up rendering an animation test with mblur on; it takes far too long compared with a non-motion-blurred one.
I hope some coder can find the time to work out a solution for this, because this is one of those features that other BIG applications JUST have, and I can't be really happy playing with my softbody animation or with my new very-advanced-feature knowing that I'm missing things like motion blur or DOF.
Hey, don't misunderstand me... softbodies are great,
but having these features would make Blender more "complete".
Thanks for your time, EnV & developers & contributors.
bye
It shouldn't matter what order they were done in, as long as you're generating your blurred image into a separate buffer. The additive transforms on the pixels in the blurred image should be uniform, and probably reversible, so order wouldn't matter.
The best link in the stuff that env posted was the paper on generating motion blur strictly from evaluating the vectorial change between two static images, such as those that would be produced through stop-motion animation. Their results are really fantastic, but when you go to their website, you see a big fat "Patent Pending" notice. Damn.
Let me make sure I understand.
We need some sort of information from the moving object in Blender to drive the post-pro motion blur parameters, am I right?
---I'm not a coder---
So we need something like direction and speed (distance/frames).
Maybe a hypothetical script could be based on the movement of one (or more) empties parented to the object.
Let's say we have a cube that moves from A to B in 20 frames.
If the info we have to collect is about direction, we have to evaluate the variation between the spatial XYZ of A & B.
Then we need to collect some info about speed, so we can evaluate the distance between the two points in relation to the number of frames the cube needs to finish its path.
---in real Blender life this would be much more complicated, e.g. if my object is a rotating cube or the path is not a straight line---
But if the first part of my post makes sense, you could solve this somehow, using more than one empty, or evaluating distance and speed every x frames.
Just some simple ideas.
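A toy sketch of what that evaluation could look like (plain Python with made-up positions, only to show the arithmetic, not anything tied to Blender's API):

# Per-object direction and speed from two sampled positions.
pos_a = (0.0, 0.0, 0.0)   # cube (or its parented empty) at frame 1
pos_b = (4.0, 2.0, 0.0)   # cube at frame 20
frames = 20

delta = tuple(b - a for a, b in zip(pos_a, pos_b))   # direction of travel
distance = sum(d * d for d in delta) ** 0.5
speed = distance / frames                            # Blender units per frame
print(delta, speed)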
bye
Hi Bullx.
Mm... probably a nice idea, but this would only work for a whole "rigid" object moving in space, e.g. a spaceship or a car; what about a character, where the swinging of the arms must have more blurring than the shoulders, and so on?
As harkyman said, the best way could be evaluating the differences between two static poses (two frames, of course), that is, between each frame and the following one, but "how" is a different question...
EnV

Here's how I see it:
1. Scanline renderer determines what face in the scene is closest to the view plane according to the z-buffer, then calculates the proper color for that pixel. (This occurs already).
2. Renderer then determines the location of the center of the face.
3. Calculations are done to determine the timeslice to be evaluated (essentially, how long to leave the "shutter" open), which would be a combination of direct and indirect user settings.
4. Blender calculates the position of the face for the time-slices before and after the current one, then uses the difference in positions between the three face locations to calculate a motion vector for the pixel being evaluated in step 1.
5. This motion vector is stored in a secondary buffer that rides along with the standard render buffer, which is then applied with a pretty simple blurring algorithm after the render is finished.
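A rough sketch of step 4 above, with project_to_screen() standing in for whatever projection the renderer already uses to go from world space to pixel coordinates (the function and all the names are invented for illustration):

def face_motion_vector(center_prev, center_now, center_next, project_to_screen):
    # Project the face center at the previous, current and next time slices,
    # then average the backward and forward differences to approximate the
    # motion across the shutter interval.
    x0, y0 = project_to_screen(center_prev)
    x1, y1 = project_to_screen(center_now)
    x2, y2 = project_to_screen(center_next)
    dx = ((x1 - x0) + (x2 - x1)) * 0.5
    dy = ((y1 - y0) + (y2 - y1)) * 0.5
    return dx, dy   # stored for every pixel that step 1 shaded on this face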
The question then becomes: how to store this motion vector info? Each pixel of the final image will have a 3D vector associated with it. New file format? I suppose you could normalize the info into 3 8-bit channels, and include the overall normalizing transform in a header so the true scale of motion could be reconstructed later, but you'd lose a lot of resolution in your info. When you're doing blurring, though, it may not matter a lot.
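For the storage question, here is one way the "normalize into three 8-bit channels plus a normalizing factor in the header" idea could be sketched (names invented; it mostly just shows where the precision goes):

def encode(vectors):
    # vectors: list of (vx, vy, vz) per-pixel motion vectors
    scale = max(max(abs(c) for c in v) for v in vectors) or 1.0   # goes in the header
    packed = [tuple(int(round(c / scale * 127.5 + 127.5)) for c in v) for v in vectors]
    return scale, packed

def decode(scale, packed):
    return [tuple((c - 127.5) / 127.5 * scale for c in v) for v in packed]

Rounding every component to one of 256 steps is exactly the resolution loss mentioned above; for blurring purposes it is probably tolerable.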
In fact, it almost seems like this would need to be an integrated render option, like in a REYES renderer, and not post production, unless you can pass an arbitrary amount of scene info to the sequence editor.
Argggghhhhh. Why am I thinking about this? It makes me salivate, and I have no time to pursue it.
Computing with the frame before would require rendering one more frame. I don't know if that is so useful...
I would suggest using an 'RGBA' scheme for the vector information: the RGB indicates the vector direction, and the A is used as a scaling factor. It would be more precise this way. And couldn't this image be included as a new channel in the output image? I heard that OpenEXR support will be included; it could hold a special layer like this.
Speed information is useful not only for motion blur, but also for color adjustments, because the eye desaturates moving objects. It would add more realism to be able to adjust this.
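A hedged sketch of that RGBA scheme (max_speed is an invented parameter, only there to map the magnitude into the alpha channel):

def encode_rgba(vx, vy, vz, max_speed=32.0):
    # RGB holds the normalized direction, A holds the magnitude.
    length = (vx * vx + vy * vy + vz * vz) ** 0.5
    if length == 0.0:
        return (128, 128, 128, 0)
    r = int(round(vx / length * 127.5 + 127.5))
    g = int(round(vy / length * 127.5 + 127.5))
    b = int(round(vz / length * 127.5 + 127.5))
    a = int(round(min(length / max_speed, 1.0) * 255))
    return (r, g, b, a)

Unlike a single global scale in a header, keeping the magnitude per pixel in A means the direction precision doesn't depend on how fast the fastest pixel moves, which is presumably the extra precision meant here. With a float format like OpenEXR you could of course store the raw vector directly in an extra layer.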
RGBA with A being the scaling factor for each vector would definitely work.
And you wouldn't have to render the frames on either end of the time slice. You just need to query the location in 3D space of the center of the appropriate face. So you'll only be doing transforms, and you can cache the ones you've already done for this frame, so other pixels rendered on the same face would just pull from the cache. You'd have to do just two queries for 3D location per visible face in the final render, but no more shading calculation than you were doing before. You know - you could do this with Python, just to start and just to generate the image as a proof of concept, as the whole thing could be face-based.
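The cache could be as simple as a dictionary keyed by object, face and frame (get_face_center_at_frame() is an invented stand-in for however the scene gets sampled at a given frame):

_centers = {}

def cached_center(obj_name, face_index, frame, get_face_center_at_frame):
    key = (obj_name, face_index, frame)
    if key not in _centers:
        _centers[key] = get_face_center_at_frame(obj_name, face_index, frame)
    return _centers[key]

Every pixel that lands on the same face then reuses the same two lookups, one for the frame before and one for the frame after.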
Arggggghhh. This is tempting to pursue. And shut up bullx.
My wife will absolutely kill me if I start working on something else in addition to the four things I'm working on right now.