The Yafray Look

Blender's renderer and external renderer export


Postby green » Thu May 29, 2003 5:32 pm

Perhaps I'm missing something really obvious, but how would that make it *more* difficult for the shader writer? (Being someone that has written and used shaders myself, I can tell you that it would make things a hell of a lot easier for me.)

Because they have to compile and keep track of different shaders to combine into one look, instead of keeping track of different function names.
(You still haven't answered what added benefit this brings over include files.)
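
For illustration, a minimal sketch of the include-file approach in RSL (the header and function names here are hypothetical):

    /* shadinglib.h -- shading models kept as plain, reusable functions */
    color halfLambert(normal Nf)
    {
        color C = 0;
        illuminance (P, Nf, PI)
            /* wrapped falloff so surfaces never go fully black */
            C += Cl * pow(0.5 * (normalize(L) . Nf) + 0.5, 2);
        return C;
    }

    /* simple.sl -- a surface shader just picks functions out of the header */
    #include "shadinglib.h"

    surface simple(float Kd = 1)
    {
        normal Nf = faceforward(normalize(N), I);
        Ci = Cs * Kd * halfLambert(Nf);
        Oi = Os;
        Ci *= Oi;
    }

Swapping shading models is then a matter of calling a different function, not compiling and tracking a separate shader.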

As in alpha? No. As in ray-traced refractions? No. As in fake environment-map based refractions? Yes (at least, the environment map itself would be--how that environment map is used would be a projection/mapping issue).
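
For example, a fake refraction of that sort is only a few lines in RSL (shader and parameter names hypothetical):

    surface fakeRefract(string envname = ""; float eta = 1.33, Kr = 1)
    {
        normal Nf = faceforward(normalize(N), I);
        /* bend the viewing ray as if it entered the surface,
           then look the bent ray up in an environment map */
        vector R = refract(normalize(I), Nf, 1/eta);
        if (envname != "")
            Ci = Kr * color environment(envname, R);
        Oi = Os;
    }

How the map is generated and projected is, as noted above, a separate projection/mapping issue.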

I was thinking of taking the shading information from the back side of the currently shaded point's surface (diffuse(-Nf)) and adding it to the normal shading.

To get, for example, the effect you see on a leaf or a lampshade when a light shines through it.
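
In RSL terms, that back-side trick could look something like this minimal sketch (the Kt weight is a hypothetical parameter):

    surface translucent(float Kd = 0.8, Kt = 0.4)
    {
        normal Nf = faceforward(normalize(N), I);
        /* ordinary front-side diffuse, plus diffuse gathered from the
           back side for light shining through the surface */
        Ci = Cs * (Kd * diffuse(Nf) + Kt * diffuse(-Nf));
        Oi = Os;
        Ci *= Oi;
    }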

I thought you were arguing against my point...
Anyway, that is how RenderMan shaders are set up right now, and that is a large part of my complaint: procedural textures are written into the shading models.

As it is right now, you put exactly what is needed into the surface shader and not one line of text more. With this system you would get everything under the sun into the shader.

I fail to see how what I'm suggesting conflicts with a slider/button style interface for non-programming users. In fact, it seems to me that what I'm suggesting would fit *better* with a sliders 'n' buttons style interface.

As it currently stands, you can do anything you want with the RenderMan shading language. I don't think this would be easier to use when writing a shader translator for the Blender material editor.

If you want to talk about the graphical abstraction, then that's another thing.


Postby macke » Thu May 29, 2003 6:07 pm

green wrote:Well, Macke, I know for a fact that you know how to program; calling yourself a non-coder gives the real non-coders too much of a good name :)

You always have to blow my cover, don't ya?

And also, a lot of the building blocks in the XSI rendertree are actually building blocks made by Softimage. They are not made by mental images.

That is correct, to an extent. The nodes in the rendertree are more or less just a GUI for the blocks, as you mentioned with Shaderman, but in a more intuitive way. Some are a bit more, such as the incidence node, which also lets you invert the result, but there's usually not a big difference between that and using the functions provided in the mental ray shader API. In fact, it's more nerve-wracking to build the damn GUI (because of typos and the code being hard to read) than actually writing the shader itself. Then again, writing the GUI still isn't very hard.

There is really no reason why you wouldn't be able to convert any shadertree to the RenderMan shading language.

You mean from XSI, or any shadertree? Converting from XSI, which is essentially converting from mental ray to RenderMan, could pose some problems. I've never tried it myself, but I can imagine.

A good example of this would be Maya's Hypershade; the MayaMan RenderMan exporter can handle converting that just fine.

It's directly related to the amount of time and energy invested in such a conversion. People using XSI are usually pretty content with mental ray and have no need for an XSI -> RenderMan converter. Should such a need arise, though, I'm pretty sure proprietary tools are the way to go, considering the openness of XSI and the ability to quickly write such tools.

(Also, is it grammatically correct to say you are writing a shader when using the GUI drag-and-drop thing?)

No, you're right. It should be "making a shadertree" or something similar, naturally. I blame it on the beer I had last night! (Yay for graduation!)

But if you really wanted this type of functionality (without using include files, which work more or less exactly like the XSI shader tree blocks, taking any number of inputs and producing one output), why not just make everything a texture? Shading models shouldn't need to be special; they basically do what a texture does anyway: generate a color (or normal, or alpha value, etc.).
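
As a sketch of that idea, a procedural texture and a shading model really can be written as the same kind of block in RSL (function names hypothetical):

    /* both "blocks" do the same job: generate a color */
    color marbleBlock(point p)
    {
        return mix(color(0.3, 0.3, 0.3), color(0.8, 0.8, 0.8),
                   float noise(5 * p));
    }

    color lambertBlock(normal Nf)
    {
        return diffuse(Nf);   /* a shading model generates a color too */
    }

    surface combined()
    {
        normal Nf = faceforward(normalize(N), I);
        Ci = marbleBlock(transform("object", P)) * lambertBlock(Nf);
        Oi = Os;
        Ci *= Oi;
    }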

It's all just numbers in the end anyway, call your shaders what you will and see if I care ;o)
It's easier to have some sort of terminology for discussions such as these though.

So in the end you can write everything in one file if you want to, but pull in other texture/shading model functions from a file if that is what you wanted.

Seeing as how I don't write RenderMan shaders, I don't really belong in this discussion, but having a file you include with a bunch-o-functions in it seems to me like a good idea. In fact, that's how writing mental ray shaders works: you've got a bunch of functions and definitions in a header file, and you just include that and write your block. What you then do is define a shadertree in a .mi file, which the renderer takes as input. The shader written previously is a dll/so which gets called by the renderer.

The only thing that would separate the different blocks would be their return type.

Exactly! Some return colors, some return scalars, etc. In the end, however, it's all just an RGBA value.


Postby macke » Thu May 29, 2003 6:11 pm

cessen wrote:I agree completely. I wasn't saying that it shouldn't be an option (in fact, I said the opposite). I was just pointing out, in a "just in case you want to know" manner, that it's not physically accurate. But that doesn't mean at all that it's not useful.

I wasn't accusing you of that. But just because using Gaussian blur doesn't at first give you a more accurate result doesn't mean it's the wrong way to go. As I said, you can very easily fake a natural DOF effect by boosting highlights and adding bokeh. It would, to me at least, make more sense to use such a method in an animation than one that takes ages to compute (which most 'accurate' simulations do). For stills it's another matter though.


Postby Little_Cube » Thu May 29, 2003 6:41 pm

I have to stand behind Green on this one. As I understand it, cessen's problem with SL is that it's too generalized, but it's very easy to make SL more "high level" if one needs to. As Green said, you can make shading models, procedural textures, etc. as functions stored in a separate header (or headers) and then use them in a shader through an #include call. IMO that's the same as using building blocks, just without a graphical environment, isn't it? The only problem that's not solvable with this approach is the redundancy, i.e. having a large number of very specific shaders with the same basic functions.

The solution to this, in the context of exporting from a 3D program, would be to generate and compile the shaders in the rendering stage, just before calling the renderer (AFAIK this is how Softimage is connected to mental ray, Maya with MayaMan to RenderMan, and Houdini VOPs to Mantra). The shader export and compilation process shouldn't take too long, and it can be done only once per animation, so it won't add much overhead to the rendering. In the end you can delete the compiled shaders if they are no longer needed (or this could be done automatically by the export software).

Here's an example of how this would work with the current Blender (well, Tuhopuu actually) design and RenderMan. We have the shading models and both the procedural and file-based textures as SL functions in a header file (or files). Before the rendering starts, Blender outputs a shader for every material, containing Blender's material arguments, their current settings, the #include call(s), and the actual shader body built from the functions in the header. After the export, the shader is compiled and attached to the corresponding object in the RIB. If a new shading model or procedural texture is added to Blender, all one has to do is translate the math behind it to SL, add it to the header file, and add the new option to the export system. The user would be completely shielded from the whole export/conversion process.
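
As a sketch, the exported shader for one material might look like this (the header name and the bl_* functions are hypothetical):

    /* generated per material by the exporter, then compiled before rendering */
    #include "blender_shadinglib.h"

    surface mat_Material_001(float Kd = 0.8, Ks = 0.5, roughness = 0.1;
                             color diffcol = color(0.8, 0.2, 0.2))
    {
        normal Nf = faceforward(normalize(N), I);
        /* bl_orenNayar() and bl_blinn() come from the header; adding a new
           shading model to Blender means adding one more such function */
        Ci = diffcol * bl_orenNayar(Nf, Kd) + Ks * bl_blinn(Nf, roughness);
        Oi = Os;
        Ci *= Oi;
    }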

Although, all this is much less trivial to implement than I make it sound :wink:



Postby cessen » Sat May 31, 2003 9:34 am

I have now come to agree with you, Green--to a point. In so far as the rest of the RenderMan spec' is not changed, the RenderMan shading system is a good fit. I guess what bothers me is that RenderMan isn't declarative. (As you well know, RIB files are basically just huge scripts written in a "render" scripting language.)
A bit of history here: the RenderMan spec' was originally designed to be an API standard via which modeling/animation programs would communicate with the renderer... directly. Then they decided that they needed to be able to store scenes for rendering independently of the modeling/animation programs, and so they decided to (essentially) just stream the API commands to ASCII text files (and they dubbed these ASCII command streams "RIB" files).

I think that for storing scenes for rendering, a more declarative format would be much better suited. In fact, the only reason *not* to store such a thing declaratively would be for interactive rendering. After all, you don't see anyone storing images as scripts, do you? Unless, of course, you are working on it in a paint-program, in which case there is a list of commands so that you can "undo" and "redo" things. You *do* see compressed images, which have to be "rendered", but those are also stored declaratively, not as scripts.

Anyway, what I was trying to convince you of was that the declarative aspects of the shaders should be separated from the functional aspects of the shaders. But, of course, since a good deal of the RenderMan spec' is already like that--conceptually declarative things being represented by functions/commands--one might as well keep the shaders the same, so that the chaos is universal. ;-)


Postby thorax » Sat May 31, 2003 12:45 pm

It would be nice to be able to use OpenGL cards to render images,
for quick animation tests, and such.. Something I miss from the SGI
days in school..

Also, if you use a polygon per pixel, it's possible to get Phong shading
with OpenGL. Of course, you would need adaptive tessellation for
sub-div surfaces and NURBS surfaces, but it would make for
really quick renders.


Postby matt_e » Sat May 31, 2003 2:14 pm

thorax wrote:It would be nice to be able to use OpenGL cards to render images,
for quick animation tests, and such.. Something I miss from the SGI
days in school..


Thorax, click the pink icon that looks like a picture, at the far right of the 3D viewport header. This renders a still using OpenGL. Shift-click to render an animation.
