Image-based motion blur
Deriving (2D) motion vectors per pixel isn't difficult, although it will have the same issues as with Z values in postprocessing (meaning: no transparency or halo/particle support).
What I still don't know is the proper filter mask to calculate, especially for pixels moving in the Z direction (imagine a plane flying into or away from the camera; the motion vectors are insufficient then). This is mainly because you only use a single image, with no information stored about what's behind something that moves.
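To make that concrete, here is a minimal sketch of deriving a 2D motion vector for one surface point, assuming we know its camera-space position at the current and the previous frame; the pinhole project() and its focal/size parameters are illustrative assumptions, not Blender internals:
[code]
import numpy as np

def project(p, f=600.0, w=640, h=480):
    """Pinhole projection of a camera-space point to pixel coordinates.
    f is an assumed focal length in pixels; the camera looks down -Z."""
    x = f * p[0] / -p[2]
    y = f * p[1] / -p[2]
    return np.array([w * 0.5 + x, h * 0.5 + y])

def motion_vector(p_now, p_prev):
    """Screen-space motion vector: where the point was, minus where it is."""
    return project(p_prev) - project(p_now)

# A point moving mostly away from the camera yields a near-zero vector:
v = motion_vector(np.array([0.5, 0.0, -10.0]),
                  np.array([0.5, 0.0, -9.0]))
print(v)  # tiny: the Z motion is nearly invisible to a 2D filter
[/code]
The example also shows the problem described above: a point moving mostly along Z projects to a near-zero vector, so a vector-driven blur filter has almost nothing to work with.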
What you see a lot in animation productions is simply rendering in 2 layers (like what steve enginpost mentions). By applying very simple blur filters to the layers individually and then assembling them back together, you might get the best results. The same is true for DOF, by the way.
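And a rough sketch of that two-layer trick, assuming the moving object comes in as a premultiplied RGBA layer over a separately rendered background; the horizontal box blur is just a stand-in for a real motion-blur filter:
[code]
import numpy as np

def box_blur_x(img, radius):
    """Crude horizontal box blur on a float image (wraps at the edges)."""
    n = 2 * radius + 1
    out = np.zeros_like(img)
    for dx in range(-radius, radius + 1):
        out += np.roll(img, dx, axis=1)
    return out / n

def composite_layers(fg_rgba, bg_rgb, radius):
    """Blur the foreground layer (colour *and* alpha), then put it over
    the unblurred background: the 'render in 2 layers' approach."""
    fg = box_blur_x(fg_rgba, radius)
    a = fg[..., 3:4]
    return fg[..., :3] + (1.0 - a) * bg_rgb  # premultiplied 'over'
[/code]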
Hi Ton.
From a user's point of view, halos and particles in most cases don't need motion blur (apart from objects dupliverted on particles, but that's a different case); maybe it's not "realistically" accurate, but they look good even unblurred, IMO (see, for example, the first part of this animation of mine: http://www.enricovalenza.com/anim/aland.mpg. The flying saucer has a low averaged motion blur applied, while the halos, added in postprocessing, do not).
EnV
The biggest problem I see with vectorial motion blur is that it fails miserably when it comes to transparency. However, in many scenes that won't be an issue.
Also, there may be some ways to alleviate that problem (though not solve it) by giving the user control over how transparency is dealt with: perhaps on a per-object basis, maybe scaling the motion vectors based on transparency and a user-controlled variable.
I really like the suggestion of combining vectorial motion blur with the current motion blur implementation. Doing it that way means the vectorial motion blur is essentially used to get rid of strobing artifacts, rather than as a stand-alone motion blur solution.
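A tiny sketch of that transparency control, assuming the alpha of the frontmost surface is available per pixel; the weighting formula and the transp_influence knob are made-up illustrations, not an existing option:
[code]
import numpy as np

def scale_vectors_by_alpha(vec, alpha, transp_influence=1.0):
    """Shrink motion vectors where the surface is transparent.
    vec:   (H, W, 2) per-pixel motion vectors
    alpha: (H, W) opacity of the frontmost surface
    transp_influence: hypothetical user control in [0, 1];
    0 ignores transparency, 1 removes blur on fully transparent pixels."""
    weight = 1.0 - transp_influence * (1.0 - alpha)
    return vec * weight[..., None]
[/code]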
Ai, troubles on the path.
I think the solution should stay away from half solutions; we've already got the half solution.
If transparent objects don't get motion blur because they are rendered in a different pass, then that's a problem; this makes Mblur useless for most architectural animations, and most shorts use a window somewhere.
But I don't see how that's a problem for the vector solution; I thought it was a postprocess that blurs the rendered images.
Pixels that move (straight) away from the camera wouldn't reveal new pixels, that's true; maybe it should blend in neighbours, or just not blur them at all and see how noticeable it is.
Another good comment I do believe in is that it could be combined with the current MBLUR. So instead of OSA, use the new Mblur and still render 5 images per frame. Now I'm talking gibberish; no idea what this would look like.
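For what that combination might look like, a hedged sketch: render N sub-frame images with their own vector maps, vector-blur each one with vectors shortened to the sub-frame interval, and average. The naive smear() here is only an assumption about what the postprocess would do:
[code]
import numpy as np

def smear(img, vec, steps=8):
    """Naive vector blur: average samples along each pixel's motion vector."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(img, dtype=np.float64)
    for t in np.linspace(0.0, 1.0, steps):
        sx = np.clip((xs + t * vec[..., 0]).astype(int), 0, w - 1)
        sy = np.clip((ys + t * vec[..., 1]).astype(int), 0, h - 1)
        acc += img[sy, sx]
    return acc / steps

def hybrid_blur(subframes, subvecs):
    """Vector-blur each sub-frame render with its own shortened vectors,
    then average: the smear hides the strobing between the samples."""
    n = len(subframes)
    return sum(smear(f, v / n) for f, v in zip(subframes, subvecs)) / n
[/code]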
I hadn't thought about transparency. I guess what you'd have to do when generating the vector motion map would be to use the deepest visible renderface for each pixel, allowing for transparency in foreground objects.
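One possible reading of that rule, sketched under the assumption that the renderer can hand over a front-to-back list of (alpha, motion vector) pairs for each pixel:
[code]
def vector_for_pixel(layers, threshold=0.95):
    """layers: front-to-back list of (alpha, motion_vector) for one pixel.
    Walk through the transparent faces and keep the vector of the deepest
    face that still shows through; stop once coverage is effectively opaque."""
    covered = 0.0
    chosen = None
    for alpha, vec in layers:
        chosen = vec                       # deepest face reached so far
        covered += (1.0 - covered) * alpha
        if covered >= threshold:           # nothing behind this is visible
            break
    return chosen
[/code]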
Also, joeri's right in that we don't know how good or bad any of this would look. We're just guessing. This sort of thing is hard to mock up.
joeri wrote: So instead of OSA use the newMblur and still render 5 images per frame. Now I'm talking gibberish, no idea what this would look like.
This is how Lightwave seems to work. It looks like one can choose how many samples to take, whether to use the vector blur, whatever.
How hard is it to save the vector image? And let the seq plugin guys do some blur tests?
(where's the darn 16 bit / floating point file format?)
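On saving the vector image: pending a proper float file format, even a raw float32 dump would be enough for sequence-plugin blur tests. The layout below is entirely ad hoc, not any Blender format:
[code]
import numpy as np

def save_vector_pass(path, vec):
    """Dump an (H, W, 2) motion-vector buffer: two int32 for the size,
    then the raw float32 data."""
    with open(path, "wb") as f:
        np.asarray(vec.shape[:2], dtype=np.int32).tofile(f)
        vec.astype(np.float32).tofile(f)

def load_vector_pass(path):
    """Read the buffer back for offline blur experiments."""
    with open(path, "rb") as f:
        h, w = np.fromfile(f, dtype=np.int32, count=2)
        return np.fromfile(f, dtype=np.float32).reshape(h, w, 2)
[/code]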
This is Maya on the 2D blur (and 3D for reference). Balls moving away from the camera don't reveal the backdrop (as expected), but that does not seem to be a big problem.
harkyman wrote: The fact that it's not exact is okay, because this is used for motion graphics, wherein each picture is only seen for 1/30 of a second. The stills don't look so hot, but it looks great in motion. Your eye has enough to work with that it fills in the gaps.
I agree with that. As before, having a look at the examples provided by EnV demonstrates that even if a technique is not perfect, you can obtain nice results when doing animations.
If I understood correctly, distributed raytracing should produce nice motion blur, nice blurry reflections and nice DOF.
www.cs.arizona.edu/classes/cs534/Distributed.pdf
There is a SIGGRAPH 84 paper somewhere. I lost the link. I believe that distributed raytracing is Blackmage's baby?
There is also a paper somewhere by Kawagishi, Hatsuyama & Kondo on cartoon-style motion blur.
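The core idea from the Distributed.pdf link, sketched as a toy example: give every sample a random time inside the shutter interval and intersect the scene in the state it has at that instant, then average. The moving-disc "scene" is purely illustrative:
[code]
import random

def trace(x, y, disc):
    """Stand-in shader: 1.0 if the disc covers the pixel, else 0.0."""
    cx, cy, r = disc
    return 1.0 if (x - cx) ** 2 + (y - cy) ** 2 <= r * r else 0.0

def scene_at(t):
    """A disc of radius 5 sweeping from x=10 to x=20 during the shutter."""
    return (10.0 + 10.0 * t, 12.0, 5.0)

def render_pixel(x, y, samples=64):
    """Distributed ray tracing motion blur: each sample gets a jittered
    shutter time and sees the scene as it stands at that time."""
    total = 0.0
    for _ in range(samples):
        t = random.uniform(0.0, 1.0)
        total += trace(x, y, scene_at(t))
    return total / samples

# Pixels along the sweep come out partially covered, i.e. motion blurred:
print([round(render_pixel(x, 12.0), 2) for x in (8.0, 15.0, 22.0)])
[/code]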