what about passes?

Blender's renderer and external renderer export

Moderators: jesterKing, stiv

3dsman
Posts: 0
Joined: Wed Nov 17, 2004 11:55 pm

what about passes?

Post by 3dsman » Sun Nov 21, 2004 9:36 pm

Is there any way to do multipass rendering, to separate specular, ambient, bump, glow, ...?
Or is it planned?
Thanks for any answers.

konrad_ha
Posts: 0
Joined: Wed Jul 21, 2004 12:15 am
Location: Munich | Germany | Europe
Contact:

Post by konrad_ha » Mon Nov 22, 2004 5:27 pm

I second that request. Multipass rendering would be very useful.

z3r0_d
Posts: 289
Joined: Wed Oct 16, 2002 2:38 am
Contact:

Post by z3r0_d » Mon Nov 22, 2004 7:42 pm

Having not used it, I fail to see how it would be useful.

Can you inform me?

Also, how do you handle things like specular highlights on transparent objects, or objects that refract the environment, when rendering in passes?

Pablosbrain
Posts: 254
Joined: Wed Jan 28, 2004 7:39 pm

Post by Pablosbrain » Mon Nov 22, 2004 8:19 pm

Here is a link with a great explanation of multi-pass rendering and reasons you might want to do it.

http://www.3drender.com/light/compositing/

For me there are two things I would use it for. The first is the depth map, for doing ZBlur/DOF in my compositing program after rendering out the scenes; that makes changes faster, since the frame wouldn't need to be re-rendered along with the DOF effect.
The second is what the page linked above mentions: tweaking shadows and lighting a bit in my compositor instead of re-rendering the entire sequence.
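For example, once the Z pass is saved out as a grayscale image, even a crude DOF can be faked outside Blender. Here's a rough sketch in Python (PIL + NumPy, nothing Blender-specific; the file names and focus values are made up):

Code: Select all

# Crude post DOF: blend a sharp and a blurred copy of the beauty pass,
# weighted by how far each pixel's depth is from the chosen focus depth.
# Assumes an 8-bit Z image where white = far; file names are placeholders.
import numpy as np
from PIL import Image, ImageFilter

beauty = Image.open('beauty.png').convert('RGB')
depth = np.asarray(Image.open('zbuf.png').convert('L'), dtype=np.float32) / 255.0

focus = 0.4        # depth value that stays sharp (0 = near, 1 = far)
max_radius = 8     # blur radius for the most out-of-focus pixels

blurred = beauty.filter(ImageFilter.GaussianBlur(max_radius))

sharp_px = np.asarray(beauty, dtype=np.float32)
blur_px = np.asarray(blurred, dtype=np.float32)

# 0 where in focus, 1 where fully defocused
weight = np.clip(np.abs(depth - focus) / max(1.0 - focus, focus), 0.0, 1.0)[..., None]

out = (1.0 - weight) * sharp_px + weight * blur_px
Image.fromarray(out.astype(np.uint8)).save('beauty_dof.png')

Tweaking the focus value and re-running that takes seconds, instead of re-rendering the frame.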

Pablosbrain
Posts: 254
Joined: Wed Jan 28, 2004 7:39 pm

Post by Pablosbrain » Mon Nov 22, 2004 8:27 pm

As an example, it can take over 20 minutes to render a full NTSC frame from my animated short project FWD-REV. It would be a great time saver to be able to add effects like DOF and different lighting tweaks in post production instead of having to do that in the initial render.

In essence, it comes down to wanting more control over the final product, whether that's a single still image or a series of images.

ideasman
Posts: 0
Joined: Tue Feb 25, 2003 2:37 pm

Post by ideasman » Mon Nov 22, 2004 11:14 pm

A project I have put off is to make a Python script that renders multipass.

I did Z-buffer and specular, but have yet to do all the different passes.

Since this is done in Python by manipulating the material and world data, it's a bit slow.

Ultimately it would be nice to render all passes into one multi-layered XCF/PSD (the GIMP can still save as a PSD).
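Something like this, run from the GIMP's Python-Fu console, could do the stacking. It's an untested sketch with made-up file names, so the pdb calls would need checking against the GIMP version at hand:

Code: Select all

# Untested sketch: stack a list of rendered pass images as layers of one XCF.
# File names are placeholders; check the pdb procedures against your GIMP.
from gimpfu import *

def stack_passes(pass_files, out_path):
	# the first pass becomes the base image
	image = pdb.gimp_file_load(pass_files[0], pass_files[0])
	# every other pass is loaded as an extra layer on top
	for f in pass_files[1:]:
		layer = pdb.gimp_file_load_layer(image, f)
		pdb.gimp_image_add_layer(image, layer, -1)
	# the .xcf extension makes gimp_file_save keep the layers
	pdb.gimp_file_save(image, image.active_layer, out_path, out_path)

stack_passes(['diffuse.tga', 'spec.tga', 'zbuf.tga'], 'passes.xcf')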

- Cam

3dsman
Posts: 0
Joined: Wed Nov 17, 2004 11:55 pm

Post by 3dsman » Tue Nov 23, 2004 8:19 am

I think the better solution is to render all passes in one calculation.
ideasman wrote: Ultimately it would be nice to render all passes into one multi-layered XCF/PSD (the GIMP can still save as a PSD).
Or the new OpenEXR picture format from ILM (at 16 or 32 bits per color).
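Just to show the idea of several passes living in one float file, here is a small sketch with the OpenEXR Python bindings (nothing to do with Blender's code; the channel names and the flat test data are made up):

Code: Select all

# Write a tiny multi-channel OpenEXR file: diffuse + spec + Z in one image.
# Channel names and the constant test data are only placeholders.
import array
import OpenEXR, Imath

W, H = 4, 4
FLOAT = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

header = OpenEXR.Header(W, H)
header['channels'] = {
	'diffuse.R': FLOAT, 'diffuse.G': FLOAT, 'diffuse.B': FLOAT,
	'spec.R': FLOAT, 'spec.G': FLOAT, 'spec.B': FLOAT,
	'Z': FLOAT,
}

grey = array.array('f', [0.5] * (W * H)).tobytes()
depth = array.array('f', [10.0] * (W * H)).tobytes()

out = OpenEXR.OutputFile('passes.exr', header)
out.writePixels({
	'diffuse.R': grey, 'diffuse.G': grey, 'diffuse.B': grey,
	'spec.R': grey, 'spec.G': grey, 'spec.B': grey,
	'Z': depth,
})
out.close()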

joeri
Posts: 96
Joined: Fri Jan 10, 2003 6:41 pm
Contact:

Post by joeri » Tue Nov 23, 2004 10:21 am

3dsman wrote: I think the better solution is to render all passes in one calculation. [...] Or the new OpenEXR picture format from ILM (at 16 or 32 bits per color).
For setting glow on only the spec, it's better to have separate passes and let a sequence editor determine the amount of glow as a post process. Or, for example, do a minor color/hue change to the image but not to the specs.
The PSD thing is nice as an option, but if I'm happy with the color pass and not with the spec pass, I'd re-render just the spec in a different sequence.

I'd also like each layer to be a separate pass, or Photoshop layer.
Then it's easy to do post fake-DOF or post object transparency (not face transparency).

bertram
Posts: 0
Joined: Wed Oct 16, 2002 12:03 am

Post by bertram » Tue Nov 23, 2004 11:05 am

joeri wrote: I'd also like each layer to be a separate pass, or Photoshop layer.
Then it's easy to do post fake-DOF or post object transparency.
:shock:
Groovy!
Never thought of automating the process of rendering layers.
- Me wants this too!!! -

wavk
Posts: 126
Joined: Wed Oct 16, 2002 9:58 am
Location: The Netherlands
Contact:

Post by wavk » Tue Nov 23, 2004 12:10 pm

Hey ideasman,

What a coincidence! On the other side of the world, I finished a script a couple of weeks ago that lets you pick a material and render out a selection mask for it. Very useful for architectural stuff, and I've already used it many times since. I also used a very dirty approach: saving the file, destroying the whole scene by making everything black except the specific material, rendering, and after that loading the original back in. But what's slow about it? I hardly notice Python doing anything.
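Simplified from memory, the core of it boils down to something like this (only a sketch; the real script also saves and restores the scene):

Code: Select all

# Sketch: render a white-on-black mask for one material name.
# Per-object approximation; a per-face mask would need to remap
# face material indices instead of swapping whole meshes.
from Blender import Scene, Material

def renderMaterialMask(target_name):
	white = Material.New('mask_white')
	white.mode |= Material.Modes.SHADELESS
	white.rgbCol = [1, 1, 1]

	black = Material.New('mask_black')
	black.mode |= Material.Modes.SHADELESS
	black.rgbCol = [0, 0, 0]

	scn = Scene.GetCurrent()
	for ob in scn.getChildren():
		if ob.getType() != 'Mesh':
			continue
		me = ob.getData()
		names = [m.name for m in me.materials if m]
		if target_name in names:
			me.setMaterials([white])
		else:
			me.setMaterials([black])
		me.update()

	scn.getRenderingContext().render()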

I was also thinking about PSD export. In my case it would be cool to render all material masks and save them as channels in a PSD file, ready for selection. But that would take more than a Python script, I think. I don't think there are functions in the Python API to access the rendered pixels?

How did you do the Z-buffer, btw? Did you use the Z-buffer Blender creates while rendering, or did you do something with mist? Or is it possible to access pixels, and is that why your script is so slow?

I also read up on the Photoshop format, which doesn't look too complicated.

It would be very cool if we could join all this functionality into one exporter script.

3dsman
Posts: 0
Joined: Wed Nov 17, 2004 11:55 pm

Post by 3dsman » Tue Nov 23, 2004 9:14 pm

Is there any way, in YafRay or the Blender renderer, to render one object with more than one material?

I mean more than one material for each pixel.

The render engine would compute, for example, two materials at the same time: the mesh (hidden faces, normals, ...) is computed only once for all the materials.

When a pixel's color is calculated, the render engine uses normals, light angles, Z distance, ... that don't change between passes, so why not reuse them more than once?
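In plain Python the idea looks something like this (a hypothetical toy shader, not Blender's renderer; the point is that the geometry terms are computed once and every pass reuses them):

Code: Select all

# Toy example: the per-pixel geometry terms (N.L, N.H, Z) are computed once,
# and several passes are then evaluated from the same cached values.
def shade_pixel(n_dot_l, n_dot_h, z, base_color, light_color, hardness=32):
	diffuse = [max(n_dot_l, 0.0) * b * l for b, l in zip(base_color, light_color)]
	specular = [max(n_dot_h, 0.0) ** hardness * l for l in light_color]
	# every pass reuses the same hidden-surface / normal / lighting work
	return {'diffuse': diffuse, 'specular': specular, 'z': z}

# one pixel, one light
passes = shade_pixel(n_dot_l=0.7, n_dot_h=0.9, z=12.5,
                     base_color=[0.8, 0.2, 0.2], light_color=[1.0, 1.0, 1.0])
print(passes)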

ideasman
Posts: 0
Joined: Tue Feb 25, 2003 2:37 pm

Post by ideasman » Tue Nov 23, 2004 10:47 pm

Hey Wavk, good to hear from you.
I used a very hackish method for the Z-buffer (sorry to disappoint you :) ):
Mist in the world settings - but it's all automated and fast enough.

At the moment the post script only lets you render Z-buffer, specular and diffuse.
Will add:
* Textures (image and procedural)
* Bump map
* Normal map
* Motion map (XYZ motion of a vertex -> RGB vertex colours)
* Material index (like you did)
* Reflections
* AO
* Shadows

- It's not really that big a deal once you have the base function.
But I would use the GIMP for saving a PSD or XCF; I know enough GIMP scripting to do this and it's really quite easy (again, when you have the time).

I have also been working on some Python scripts for dealing with a large database I'm working on; don't know if you could use any, but here's the list.

* Select-face GUI: coplanar, linked image, linked normal, same area, etc.
.. also flip normals and delete faces (for UV mapping in face select mode)
* Select meshes based on properties (surface area, face count, vertex count)
* Find an instance of a UV image in a selection of mesh objects and move the 3D cursor there.
* Replace one material with another in all selected objects.
* Replace one image with another in all selected objects - without this it's like finding a needle in a haystack.
* Thumbnail browser - browse loaded images and assign them to selected active faces with a click.
... Also has a right-click menu where you can Load, Unload, Reload and Edit in the GIMP using gimp-remote :) .
* Added NEW name to batch namer.
* Made a wrapper for the PupMenu function that splits a very long menu into multiple menus, traversed with MORE and LESS menu items at the top and bottom (I have 180 images in my database).
* Random object loc/size/rot - simple and probably already coded, but I needed it for tree placement.
* A box-packing function, get demo here - I'm very happy with this script, my first use of classes: http://www.blender.org/modules.php?op=m ... pic&t=5072
* An automatic parenting script, adds a parent to each selected object (useful to me, probably nobody else).
... that's about it, I think.



BTW, here's my post-processing script.

Code: Select all

#!BPY

"""
Name: 'Render ZBuffer'
Blender: 234
Group: 'Export'
Tooltip: 'render ZBuffer'
"""


from Blender import *


# mode= 0:single frame, 1:animation
# all the others are toggles
# osa 
'''
def renderScene(mode, specular, diffuse, shadow, raytrace, radio, envmap, osa): 
	old_scn = Scene.GetCurrent()
	
	oldscene = Scene.GetCurrent()
	
	scene.makeCurrent()
	
	context = scene.getRenderingContext()
	
	new_scn = scn.copy(0)
	
	# set the frame 
	#enableMotionBlur(toggle)
	#enableOversampling(toggle)
	#setOversamplingLevel(level) # 5,8,11,16
	#enableRadiosityRender(toggle)
	#enableRayTracing(toggle)
	#enableShadow(toggle
	
	
'''


# ZBuffer

# Material for all
# White
# Shadeless

# World
# Hor rgb = Black
# Mist on , Linear
# Set near/far

'''
def getActiveLayerOb():
	for ob in Object.Get():
		if ob.Layer
		# ViewLayer(layers=[])
'''
		

# generic functions
def setObjectMat(objList, material):
	for ob in objList:
		ob.setMaterials([material])


def setAllMat(scene, material):
	# Store a list of mesh names that have the mat applied to them.
	# This saves us applying the mat to the same mesh twice (linked dupes).
	meshMatDoneList = []
	for ob in scene.getChildren():
		ob.setMaterials([material])
		if ob.getType() == 'Mesh':
			me = ob.getData()
			if me.name not in meshMatDoneList:
				meshMatDoneList.append(me.name)
				me.setMaterials([material])
				me.update()




def renderZBuffer(near, far):
	old_scn = Scene.GetCurrent()
	new_scn = old_scn.copy(2)
	
	new_scn.makeCurrent()

	try:
		new_world = World.Get('zbuffer')
	except:
		new_world = World.New('zbuffer')
	
	new_world.setHor([0,0,0])
	new_world.mistype = 2 # linear
	new_world.setMist([0, near, far, 0]) # intensity, near, far, height
	new_world.mode = 1

	new_world.makeActive()

	# Set render options
	context = new_scn.getRenderingContext()
	context.enableUnifiedRenderer(0)
	context.enableRadiosityRender(0)
	context.enableRayTracing(0)
	context.enableShadow(0)
	
	# Make ZBuffer material
	try: # Reuse the existing material if it's there.
		mat = Material.Get('zbuffer')
	except:
		mat = Material.New('zbuffer') # name it so Material.Get() finds it next run
		mat.mode |= Material.Modes.SHADELESS
		mat.rgbCol = [1,1,1]
		mat.setRef(0.0)
	
	# Apply to all objects
	setAllMat(new_scn, mat)
	
	context.render()
	old_scn.makeCurrent()
	
	Scene.Unlink(new_scn)



# Lamp mode bit flags:
#   1 = buf shadow, 2 = halo, 4 = layer, 8 = quad, 16 = negative, 32 = only shadow,
#   64 = sphere, 128 = square spot, 2048 = no diffuse, 4096 = no spec, 8192 = ray shadow
def renderSpec():
	old_scn = Scene.GetCurrent()
	new_scn = old_scn.copy(2)
	new_scn.makeCurrent()
	
	# MAIN BIT: make every lamp specular-only (set 'no diffuse', clear 'no spec')
	for la in Lamp.Get():
		la.mode |= 2048
		la.mode &= ~4096
	
	# Set render options
	context = new_scn.getRenderingContext()
	context.enableRadiosityRender(0)
	context.enableRayTracing(0)
	context.enableShadow(0)
	
	context.render()
	old_scn.makeCurrent()
	Scene.Unlink(new_scn)


def renderDiffuse():
	old_scn = Scene.GetCurrent()
	new_scn = old_scn.copy(2)
	new_scn.makeCurrent()
	
	# MAIN BIT: make every lamp diffuse-only (clear 'no diffuse', set 'no spec')
	for la in Lamp.Get():
		la.mode &= ~2048
		la.mode |= 4096
	
	# Set render options
	context = new_scn.getRenderingContext()
	context.enableRadiosityRender(0)
	context.enableRayTracing(0)
	context.enableShadow(0)
	
	context.render()
	old_scn.makeCurrent()
	Scene.Unlink(new_scn)


renderMethod = Draw.PupMenu(\
'Render Post%t|\
ZBuffer|\
Specular Lighting|\
Diffuse Lighting|\
...|')

if renderMethod == 1:
	near = Draw.PupFloatInput('Znear', 1, 0.001, 500.0, 1, 3)
	if near != None: # user did not cancel the popup
		far = Draw.PupFloatInput('Zfar', near+50, near+0.001, near+500.0, 1, 3)
		if far != None:
			renderZBuffer(near, far)
	
elif renderMethod == 2:
	renderSpec()
	
elif renderMethod == 3:
	renderDiffuse()


wavk wrote: [full post quoted above]

Pablosbrain
Posts: 254
Joined: Wed Jan 28, 2004 7:39 pm

Post by Pablosbrain » Tue Nov 23, 2004 11:43 pm

How hard would it be to add the ability to output a whole animated sequence of Z-buffer files with your script? I've tried it on a couple of my scenes already and it seems to work pretty well. Gonna try to find time to do some more testing with the output tonight.
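I guess it would mostly mean swapping the single render() call for an animation render, something like this untested sketch (it assumes this Blender build's Python API exposes startFrame/endFrame/setRenderPath/renderAnim):

Code: Select all

# Untested sketch: render the whole frame range to numbered files.
# Assumes the scene has already been prepared by the renderZBuffer()-style
# setup (mist world + shadeless white material) before this is called.
import Blender
from Blender import Scene

def renderPreparedSceneAnim(outdir):
	context = Scene.GetCurrent().getRenderingContext()
	context.startFrame(Blender.Get('staframe'))
	context.endFrame(Blender.Get('endframe'))
	context.setRenderPath(outdir)  # e.g. '//zbuf/' for a folder next to the .blend
	context.renderAnim()           # writes one numbered image per frame

# inside renderZBuffer(), calling renderPreparedSceneAnim('//zbuf/') in place
# of context.render() would write the whole Z-buffer sequence.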

konrad_ha
Posts: 0
Joined: Wed Jul 21, 2004 12:15 am
Location: Munich | Germany | Europe
Contact:

Post by konrad_ha » Tue Nov 23, 2004 11:48 pm

Thinking of animations (like always...):

How does the internal renderer handle the picture data anyway? Does it compute different passes (specular, color, diffuse) and compose them into the final image? If so, it should be feasible to output multiple passes without taking additional rendering time.

On the implementation side:
Having an XCF with the passes as layers is surely fantastic for still images, but for animations I'd still prefer separate file sets.

And another thought: if rendering e.g. only the specular pass, would it be faster to render just that particular pass?

Being able to render passes from layers would be another very cool feature. I normally create a multitude of files from a certain scene with different settings for different parts of the scene I want to render ... err ... differently.

Need an example? I am currently working on a product shot where the final texture of the object (a revolutionary ski, btw) has not been decided yet. Only the color of this particular object will change in the final version. Neither the background nor the lighting of the ski will change, so only having to render a color pass of that ski alone would save me lots of time.

The more I think of it the clearer it becomes that this feature is a must.

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Wed Nov 24, 2004 1:47 pm

That's a great script, but I think something like this needs to be implemented directly in the renderer. That way, Blender only has to do the transforms and the accompanying overhead once, then pipe the data coming out of the different calculation stages (spec/diffuse/AO/etc.) into separate files. With a four-pass scheme done through scripting, Blender does the full setup/transform/ray-tree build for each pass, even though that info would be the same. And Blender would still be calculating the other components, just coming up with no result.

If pass rendering were to be included, it could look something like this (just whipped it up):

[Image: mockup of a proposed render passes panel with per-channel toggles and suffix fields]

I've only made five channels, just as an example. Anyway, in this setup we see the Diffuse, Mirror and Shadow channels all being rendered into a single composite file, tagged with a .comp suffix (the resulting file for frame 5 would be, for example, testanim.comp.005.tga). The Specular and AO passes each render into their own files, tagged with .spec and .ao respectively (testanim.spec.005.tga and testanim.ao.005.tga). BTW, the user can type in whatever they want for those suffixes; the SF stands for Suffix.
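To make the naming concrete, here's a quick throwaway sketch (plain Python) of how the channel/suffix settings could map to file names; the values are just the examples above:

Code: Select all

# Map each output suffix to the passes composited into it, then build the
# per-frame file names the way the mockup describes. Example values only.
channels = {
	'comp': ['Diffuse', 'Mirror', 'Shadow'],  # composited into one file
	'spec': ['Specular'],
	'ao':   ['AO'],
}

def pass_filenames(basename, frame, channels, ext='tga'):
	names = {}
	for suffix in channels:
		names[suffix] = '%s.%s.%03d.%s' % (basename, suffix, frame, ext)
	return names

print(pass_filenames('testanim', 5, channels))
# {'comp': 'testanim.comp.005.tga', 'spec': 'testanim.spec.005.tga',
#  'ao': 'testanim.ao.005.tga'}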

Using a panel like this, you could turn off all channels but one, or you could do what we do now, which is render all passes into a single channel. Alternatively, you can click the button at the bottom to have it render into a single, multi-layered Photoshop (or GIMP) file.

What do you think?
