One of the results of the Render Pipeline restructuring was the possibility to calculate, render, store and process pixel based motion vectors for a motion-blur post-process effect.

The most obvious approach for vector blur is to make, for each pixel, a directionally shaped filter kernel, and accumulate this into a new image. Initial tests with this method gave unsatisfactory results, so I implemented another idea I had in mind, which is, as far as I know, either completely new or not yet documented in a paper. :)

# A new approach

Simply said: all pixels in the entire image are converted into a 3D model of quad polygons, including the depth information per pixel. Then, by incrementally applying the speed vectors per vertex in these polygons, you can reconstruct the motion quite efficiently. By rendering and accumulating a number of steps, an image can be constructed that closely resembles motion blur.
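The idea can be sketched in a few lines. This is a toy illustration, not Blender's actual C implementation: it skips the quad polygons and Z-buffering entirely and just scatters pixels along their motion vectors over an increasing number of steps (all names here are invented for the example):

```python
import numpy as np

def vector_blur_sketch(image, velocity, samples=8):
    """Toy illustration: accumulate copies of the image, displaced along
    per-pixel motion vectors by an increasing fraction per sample."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    accum = np.zeros_like(image, dtype=np.float64)
    for s in range(1, samples + 1):
        t = s / samples                      # increasing step factor
        # displace each pixel along its (dx, dy) speed vector
        x2 = np.clip((xs + velocity[..., 0] * t).round().astype(int), 0, w - 1)
        y2 = np.clip((ys + velocity[..., 1] * t).round().astype(int), 0, h - 1)
        step = np.zeros_like(accum)
        step[y2, x2] = image                 # crude forward scatter, no quads/Z
        accum += step
    return accum / samples
```

With all velocities zero, every sample lands on the original pixel positions and the input image is reproduced unchanged; the real method replaces this scatter with rendered, Z-tested quads.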

(Image sequence; captions only:)

1. An animated 3D scene
2. The rendered image
3. Z depth information
4. A 3D scene made of one quad polygon per pixel is constructed, each quad positioned according to its Z depth
5. Each quad is moved according to its speed vector, and accumulated
6. The final vector-blurred result

Below, the steps involved with Blender Vector Blur are explained in detail.

Pre-processing

In a pre-process, before the rendering geometry is created, the speed vectors of all vertices to the previous and next frame are calculated. Speed vectors are screen-aligned and expressed in pixel units. The result is four floating point values per vertex: (X, Y) to the previous position, and (X, Y) to the next position.

Because these values are based on the projection onto the screen, it is important to use relatively small faces, to prevent large (visible) faces from being half in front of and half behind the camera.
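As a rough sketch of this pre-process, assuming per-vertex projected positions for three consecutive frames are already available (the function and argument names are hypothetical, not Blender's):

```python
import numpy as np

def screen_speed_vectors(prev_ndc, curr_ndc, next_ndc, width, height):
    """Per-vertex speed vectors in pixel units, as described above:
    (x, y) to the previous position and (x, y) to the next position.
    Inputs are arrays of projected vertex positions in normalized
    device coordinates (-1..1) for three consecutive frames."""
    to_pixels = np.array([width * 0.5, height * 0.5])
    prev_px = prev_ndc * to_pixels
    curr_px = curr_ndc * to_pixels
    next_px = next_ndc * to_pixels
    # four floats per vertex: (dx, dy) to previous, (dx, dy) to next
    return np.concatenate([prev_px - curr_px, next_px - curr_px], axis=-1)
```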

(Note: you have to enable the "Vec" pass in a RenderLayer to invoke this pre-process).

Rendering

In a similar manner to how vertex normals are interpolated inside a face, the speed vectors are calculated per pixel sample, and accumulated in a vector pass buffer. This buffer is initialized at maximum speed, and only lower speed values are allowed to fill in, preventing anti-aliased pixels from getting too much speed.

Transparent (Ztransp) faces are rendered separately in Blender, and are handled differently when generating speed vectors. Here, the alpha value is taken into account as well: when the alpha value of a transparent face is larger than a specific threshold (currently 0.95), it copies the speed into the buffer; otherwise it fills in the minimum speed.
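The two fill rules can be sketched together like this (a simplified model, not the actual renderer code; the buffer layout and names are invented for the example):

```python
import numpy as np

BIG = 1e10  # stand-in for the "maximum speed" initialization value

def write_speed(buffer, x, y, speed, alpha=1.0, alpha_threshold=0.95):
    """Fill-rule sketch: the buffer starts at maximum speed, and only
    lower magnitudes may overwrite, so anti-aliased edge pixels do not
    pick up exaggerated speeds. Transparent (Ztransp) samples get their
    speed only when alpha exceeds the threshold; otherwise they write
    the minimum speed (zero here)."""
    if alpha < alpha_threshold:
        speed = (0.0, 0.0)                      # transparent: minimum speed
    if np.hypot(*speed) < np.hypot(*buffer[y, x]):
        buffer[y, x] = speed                    # only slower values fill in

buf = np.full((8, 8, 2), BIG)
write_speed(buf, 3, 3, (4.0, 2.0))
```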

Compositor

After a render, you have to add a "Vector Blur" node in the Compositor, and link the Z and Vec (speed) pass outputs of the RenderResult to it. The Compositor then calls the vector blur function:

Convert to 3D model

All pixels in the image are then converted to quad polygons. The vertices of the quads store the averaged Z of the surrounding pixels, and combine the speed vectors (of the 4 corner faces) using a 'minimum but non-zero' speed rule.

This minimum-speed rule works very well to prevent 'tearing' when multiple faces move in different directions in adjoining pixels, and to clearly separate moving pixels from non-moving ones.

This stage ends with correcting speed vectors for the edges of the image, by setting the X or Y speed component of all edge vertices to zero.
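A minimal sketch of this conversion stage, assuming a per-pixel Z buffer and speed buffer as inputs (the names and layout are invented; the real code works on render buffers in C):

```python
import numpy as np

def min_nonzero(values):
    """'Minimum but non-zero' rule: among the 4 corner-face speeds,
    pick the smallest non-zero vector; all-zero stays zero."""
    nonzero = [v for v in values if np.hypot(*v) > 0.0]
    if not nonzero:
        return np.zeros(2)
    return min(nonzero, key=lambda v: np.hypot(*v))

def build_vertex_grid(z, speed):
    """Build an (h+1, w+1) vertex grid for the per-pixel quads: each
    vertex averages the Z of adjacent pixels, combines adjacent speeds
    with the min-but-non-zero rule, and finally the perpendicular
    speed component is zeroed on the image edges."""
    h, w = z.shape
    zp = np.pad(z, 1, mode='edge')
    sp = np.pad(speed, ((1, 1), (1, 1), (0, 0)), mode='edge')
    vz = np.zeros((h + 1, w + 1))
    vs = np.zeros((h + 1, w + 1, 2))
    for j in range(h + 1):
        for i in range(w + 1):
            corners = [zp[j, i], zp[j, i + 1], zp[j + 1, i], zp[j + 1, i + 1]]
            vz[j, i] = sum(corners) / 4.0
            vs[j, i] = min_nonzero([sp[j, i], sp[j, i + 1],
                                    sp[j + 1, i], sp[j + 1, i + 1]])
    vs[0, :, 1] = vs[-1, :, 1] = 0.0   # top/bottom edges: zero Y speed
    vs[:, 0, 0] = vs[:, -1, 0] = 0.0   # left/right edges: zero X speed
    return vz, vs
```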

Masking

This is the most important stage, because we need to separate the non-moving parts from the moving parts. Not only does the background have to remain un-blurred, but we also want to prevent anti-aliased pixels in the background from producing motion 'streaks'.

This is done using a "tag buffer", in which quads are tagged as moving or not. A user can also set a manual threshold for a minimum speed in the Vector Blur node, to cope with, for example, camera movements. This tag buffer then serves as the actual 'mask', a black/white image of what moves (white) and what doesn't (black).

The tag buffer then goes through an anti-aliasing routine, which assigns alpha values to quads, based on the method we used in the past to anti-alias bitmaps (still in our code; check antialias.c in imbuf!). The result is that we can apply the mask with good anti-aliasing too.
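The tagging itself is simple; a sketch, leaving out the anti-aliasing pass (names are hypothetical):

```python
import numpy as np

def build_tag_buffer(speed, min_speed=0.0):
    """Mask sketch: tag each quad as moving (1) or still (0). The
    user-set MinSpeed threshold lets slight global motion, e.g. from
    a moving camera, still count as 'still'."""
    magnitude = np.hypot(speed[..., 0], speed[..., 1])
    return (magnitude > min_speed).astype(np.uint8)
```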

Accumulating

The accumulator performs 'Samples' steps, with the speed vectors multiplied by an increasing factor. For each sample, both the past and future vectors are accumulated.

Each sample does the following sequence:

- The tag buffer is used to initialize a Z-buffer, with the Z values of non-moving pixels copied in (so moving parts can move behind them).

- A temporary drawing buffer is created/cleared.

- The tag buffer is used to draw all quads into this temporary buffer, using the Z values to efficiently mask out invisible parts.

- Then, based on the sample step (a decreasing weight value) and the quad alpha (from the anti-aliased tag buffer), the temporary buffer is accumulated on top of the original image.
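The sequence above, heavily simplified: the sketch below replaces the quad rasterizer and Z-buffer with a naive per-pixel scatter, but shows the increasing step factor, the decreasing sample weight, and the symmetric past/future accumulation (all names are assumed, not Blender's):

```python
import numpy as np

def accumulate(image, vs_past, vs_future, tags, samples=8):
    """Simplified accumulation loop: for each sample, displace moving
    pixels (per the tag buffer) along past and future vectors scaled
    by an increasing factor, and blend the result on top of the
    original image with a decreasing weight."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    accum = image.astype(np.float64).copy()
    total = 1.0
    for s in range(1, samples + 1):
        factor = s / samples                 # increasing step factor
        weight = 1.0 - factor                # decreasing sample weight
        for vec in (vs_past, vs_future):     # past and future vectors
            temp = image.astype(np.float64).copy()   # 'temporary' buffer
            x2 = np.clip((xs + vec[..., 0] * factor).round().astype(int), 0, w - 1)
            y2 = np.clip((ys + vec[..., 1] * factor).round().astype(int), 0, h - 1)
            moving = tags.astype(bool)
            temp[y2[moving], x2[moving]] = image[moving]
            accum += weight * temp
            total += weight
    return accum / total
```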

# Using Vector Blur

1) Make sure the RenderLayer has passes "Z" and "Vec" assigned.

2) In the Compositor, add a Vector Blur node, and connect the Image, Z and Vector sockets.

3) Make sure the Vector Blur output socket goes to a Composite node.

4) Render!

Buttons

- Samples: the number of steps used for a Vector Blur. The more steps, the more blurry the result.

- MinSpeed: (expressed in pixels per frame) set this to a small value to enforce efficient 'masking', separating moving parts from non-moving parts. This is especially useful for camera movements, or for slightly moving backgrounds. This value will also decrease all motion vectors by this amount, to ensure a consistently blurred image.

- MaxSpeed: the maximum length, in pixels, of a blur. Use this to get better blur for extremely fast moving objects. Zero means no maximum.

- BlurFac: controls the 'shutter speed'; it works as a scaling factor for the speed vectors.
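How exactly these three settings combine is not spelled out above, so the following is only a guess at a plausible ordering (subtract MinSpeed from the magnitude, clamp to MaxSpeed, scale by BlurFac), purely for illustration:

```python
import numpy as np

def apply_node_settings(speed, min_speed=0.0, max_speed=0.0, blur_fac=1.0):
    """Hypothetical application of the node buttons to one speed vector:
    shorten its magnitude by MinSpeed, clamp it to MaxSpeed (zero
    disables the clamp), then scale it by BlurFac ('shutter speed')."""
    mag = float(np.hypot(*speed))
    if mag == 0.0:
        return (0.0, 0.0)
    new_mag = max(mag - min_speed, 0.0)
    if max_speed > 0.0:
        new_mag = min(new_mag, max_speed)
    scale = blur_fac * new_mag / mag
    return (speed[0] * scale, speed[1] * scale)
```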

Tips

- Vector blur uses (projected) screen coordinates, so when one half of a plane is behind the camera, and the other half in front, the vectors go wrong. Remember to subdivide giant planes (like floors or ceilings) a little bit.

- You get the best vector blur results when you separate fast moving parts from still (or slowly moving) parts. Use two Render Layers for that, and vector-blur them separately.

(Driven Hand .blend created by Robert Ives)