An idea to improve video rendering


teo
Posts: 3
Joined: Sat Oct 26, 2002 7:18 pm

Postby teo » Tue Oct 14, 2003 8:28 pm

Not sure if this has ever been done anywhere, and I'm not sure what to call it, so it's hard to search for, but...
Seems to me that any frame in a video file is a still image frozen at an instant in time (or, in the case of interlaced video, two half-images from two distinct instants). But on my TV, the gun scans across the screen 60 times every second (or thirty, if you count interlaced frames rather than fields). Seems to me that some amount of detail in the scene might have moved within that 60th of a second (think scenes with lots of action).
So now my idea: modify the scanline renderer to include a time value for each pixel, so the first pixel calculated is at time x+0 and the last is at x+0.033 (one full frame at 30 fps). I think a still of the video would then look somewhat distorted, especially in a 'dynamic' scene (I imagine a rectangle moving quickly across the screen would appear as a parallelogram), but I think it would more accurately emulate the scanning process in TVs for a moving image.
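Roughly what I have in mind, as a sketch in Python (render_scanline is a made-up stand-in, not Blender's actual renderer): the sample time just advances linearly down the frame.

    FPS = 30.0      # target frame rate
    HEIGHT = 480    # scanlines per frame

    # Sketch of the idea: each scanline samples the scene at a
    # slightly later time, like the electron gun sweeping down the
    # screen. render_scanline(y, t) is a hypothetical stand-in for
    # whatever actually rasterizes one scanline at scene time t.
    def render_frame(frame_start_time, render_scanline):
        frame_duration = 1.0 / FPS  # 0.033 s at 30 fps
        image = []
        for y in range(HEIGHT):
            t = frame_start_time + frame_duration * (y / HEIGHT)
            image.append(render_scanline(y, t))
        return image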
Of course, this would only be useful when rendering for television. But it seems easy enough to do, so it might be worth it. It could reduce the need for mblur rendering as well.
So what do you think? Would this be worthwhile? Have I gone insane? Can the human eye even detect the kind of effect I'm hoping for?

z3r0_d
Posts: 1522
Joined: Wed Oct 16, 2002 2:38 am

Postby z3r0_d » Wed Oct 15, 2003 4:59 am

this was an issue with _old_ cameras (well, ones that could expose in fractions of a second, as opposed to the antiques that took minutes), where the travel of the shutter opening could be visible in the final picture. For example, a car moving from the right to the left of the camera's view will have oval-shaped wheels tilted to the right. (I wish I could show you; it looks cool, kind of like a photo of cartoon car physics where the car is accelerating...)

but it would not be worth the effort. It would mean that each scanline must be rasterized separately (which sometimes takes seconds for high-poly scenes), with the shadows recalculated, and the environment maps too if we really want to get it right.

besides, 1/60th of a second isn't really long enough to notice something like that, unless the motion was really quick, in which case it would be blurred by a shutter, or out of focus.

(I would rather be able to set the motion blur factor to a user-defined value, possibly as high as 4096, for extreme depth of field [using the moving-camera technique])
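The moving-camera technique is basically just averaging a lot of renders with the camera jittered across a virtual aperture while staying aimed at the focal point. A minimal sketch in Python, assuming a made-up render() call and images as flat lists of floats:

    import random

    # Depth of field by camera jitter: average many renders taken from
    # slightly different camera positions, all aimed at the focal point.
    # render() and its keyword arguments are hypothetical stand-ins.
    def dof_by_camera_jitter(render, cam_pos, focus_point,
                             aperture_radius, num_samples):
        accum = None
        for _ in range(num_samples):
            # Jitter within the aperture (a square here for simplicity;
            # a disc would better match a round lens).
            dx = random.uniform(-aperture_radius, aperture_radius)
            dy = random.uniform(-aperture_radius, aperture_radius)
            jittered = (cam_pos[0] + dx, cam_pos[1] + dy, cam_pos[2])
            image = render(camera=jittered, look_at=focus_point)
            if accum is None:
                accum = list(image)
            else:
                accum = [a + b for a, b in zip(accum, image)]
        return [a / num_samples for a in accum]

The more samples, the smoother the blur, which is why a high cap like 4096 would help.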

teo
Posts: 3
Joined: Sat Oct 26, 2002 7:18 pm

Postby teo » Wed Oct 15, 2003 9:02 pm

Oh yeah, I never got as far as thinking about shadow and environment maps. Definitely not worth it in that case. I guess I'm still thinking in ray-tracing terms, where each rendered pixel goes through the whole process anyway.

I too would like to see more flexibility in the mblur area, specifically a break between OSA and mblur. Is there some reason that they are linked?

LukeW
Posts: 64
Joined: Mon Mar 03, 2003 1:14 pm

Postby LukeW » Thu Oct 16, 2003 1:40 pm

teo wrote:....I too would like to see more flexibility in the mblur area, specifically a break between OSA and mblur. Is there some reason that they are linked?

I agree that they shouldn't be linked. And the OSA (AA) and mblur sample counts should be arbitrary, so you could use 50+ samples in mblur...

z3r0_d:
"I would rather be able to set motion blur factor to a user-defined value, possibly as high as 4096 for extreme depth of field"
In Blender, the motion blur factor "Bf" is the number of frames involved in the blurring, so for a value of 4096 you'd be blurring 4096 frames together. MBlur also uses another number, though, which is probably what you mean: the number of samples MBlur takes within that interval. If your video is only, say, 720x480 and no pixel moves more than 100 pixels in the Bf time (e.g. 0.5 frames), then the number of samples only needs to be about 100-200, not 4000+ (though 4000+ would make the exact colours of the blurred pixels more precise).
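In other words, you only need enough samples that consecutive samples land less than about a pixel apart. As a back-of-the-envelope check in Python (made-up function name, just illustrating the estimate):

    # Estimate how many MBlur samples are "enough": roughly one or two
    # samples per pixel of movement, so that consecutive samples land
    # less than a pixel apart and the trail looks continuous.
    def mblur_samples_needed(max_pixel_motion, samples_per_pixel=2):
        # max_pixel_motion: the farthest anything moves (in pixels)
        # during the Bf interval, e.g. 100 pixels over 0.5 frames.
        return max_pixel_motion * samples_per_pixel

    print(mblur_samples_needed(100))  # -> 200, the top of the 100-200 range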

