The Yafray Look

Blender's renderer and external renderer export

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Sun May 25, 2003 9:48 am

cessen wrote:The internal Blender renderer is based on an old rendering architecture, and is thus limited in a lot of ways. However, there are still ways in which it can be improved without changing the rendering architecture itself.

In my case, I don't really feel like bothering with external renderers at the moment, and thus I am trying to improve Blender's internal renderer by means of adding a shading system and other such features.

RenderMan shaders are very nifty, but they are horribly organized in terms of types of shaders and how they work together. For instance, they make no distinction between texture-maps and BRDF's ("materials") in RenderMan shaders. And I have no idea why they consider geometric displacement to be a shading concept. Bah... that's my rant for the day.

I've always had this sort of love/hate relationship with the RenderMan Interface Spec'. On the one hand, it has a lot of neat--and important--concepts in it. On the other hand, it's old enough that it's getting very patch-worky... and if there's one thing I hate, it's patchy standards/programs. I have a whole theory about patch-work programs and standards, which I won't go into in detail right now. But the basic concept is that in order to avoid self-inconsistencies and general disorder, a program/standard has to be re-written from scratch every once in a while.

In the end, what I'm really looking forward to is Blender 3.0. Everything can be thought through and re-done, including rendering.


I think the reasoning goes something like:

There are no textures; everything is either math or files. So what you would call a texture is simply a file. Why would you need to be prevented from calling a file from within a shader? As for patterns and math: when you write shaders you very often have tight integration between the shading model and the math (only natural, since the shading model in 99% of cases is very mathy), and if you want to separate the two you can use include files. This is the basic standard for how shaders are written for other apps such as mental ray as well. It might not look nice from a "programming of the architecture" point of view, but it's very sweet when you as an artist actually create the shaders :)

Displacement shaders are shaders because you only have to copy the surface shader to a new file and make the output affect the normal instead of the color/specularity/etc. BMRT even let you affect the normal in a surface shader, so it worked both as a surface shader and as a displacement shader. This again makes the job quite nice for the shader writer: they can write all the functions in an include file, and only have to do the shading model in the surface shader and the bump mapping/displacement in a displacement shader file.
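To make the idea concrete, here is a minimal sketch in RSL (the shader and function names are invented for illustration); the same pattern function could just as well live in an include file shared with a surface shader:

/* bumps.sl -- hypothetical displacement shader reusing a pattern function */
float bumpPattern(point p)
{
    float n = float noise(p * 8);        /* simple procedural pattern */
    return n;
}

displacement bumps(float Km = 0.1)
{
    float amount = Km * bumpPattern(transform("shader", P));
    P += amount * normalize(N);          /* push the surface point along the normal */
    N = calculatenormal(P);              /* recompute the shading normal */
}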

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Sun May 25, 2003 9:37 pm

There are no textures; everything is either math or files. So what you would call a texture is simply a file.


Unless it's a procedural texture--which is more what I was referring to. With RenderMan shaders, you can get really stupid shaders like "Blinn Clouds Displacement" instead of having a Blinn BRDF shader, a Clouds texture shader, and a displacement algorithm that uses textures to displace the geometry of a surface.
And what if you want to combine procedural textures? Or BRDF's? Well, guess what, you have to make an entirely new shader for it! Grrr...

In other words, RenderMan shaders are horribly non-modular. That is what I hate about them. I love the concept of shaders, but I hate the RenderMan implementation.

Actually... the RenderMan integration is mainly aimed at animation. I don't think the added quality for still renderings will be all that impressive.


Heh... RenderMan mainly aimed at animation? Right...
You do realize that the RenderMan standard is set up in such a way that you have to define each frame individually? This is another one of my gripes with RenderMan. You can't just specify the animation via keyframes. You actually have to individually specify each frame, which is very wasteful from a data-transfer standpoint. Granted, that is helped somewhat by delayed archiving, but that is not really a very elegant way to deal with things--you still have to specify the things that do change (such as if the model is deformed, or if its position changes) every frame.
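For reference, a bare-bones sketch of what that per-frame specification looks like in a RIB stream (the file names here are made up); each frame block repeats the scene state, and archives only reduce how much of it is written inline:

# Frame 1 of an animation -- every frame carries its own copy of the scene
FrameBegin 1
  Display "shot_0001.tif" "file" "rgba"
  Projection "perspective" "fov" [40]
  WorldBegin
    ReadArchive "character_frame0001.rib"   # per-frame geometry archive
  WorldEnd
FrameEnd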

Darn it, I'm getting entirely too worked up over this. Sorry about that.

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Sun May 25, 2003 10:51 pm

And what if you want to combine procedural textures? Or BRDF's? Well, guess what, you have to make an entirely new shader for it! Grrr...


Well. You would want to put the procedural stuff in a function that returns a color. Then you would insert "+ functionname()" into the actual shading function. This function would preferably be in an include file. So you don't have to write a new shader; you can modify another shader to take an argument about whether to use the function or not.

I really think that creating two different shaders is actually more work for the person creating the shaders, as long as you think of it as one person creating the shaders in a text editor after getting instructions on what the shader is for. When researching a new look you basically always have a very, very tight integration between the shading model and any procedural patterns, where things change very often.

Having to create and compile different files and then use another mechanism for connecting them would be harder and take more time, though it would be more modular.

It's a bit different when looking at it from a material-editor point of view, where you need to link textures to shading functions basically all the time.

Heh... RenderMan mainly aimed at animation? Right...
You do realize that the RenderMan standard is set up in a way that you have to define each frame individually?

Yes, I do know that. And I am always talking from the artist's point of view: from their point of view it just takes a bit more time to export, which isn't too horribly bad. The pixels that come out of the renderer are the same.

And I didn't mean the spec; I meant the renderers. All RenderMan renderers are aimed more at animation than at still images, from an artist's point of view. If you want to create a still image, having a shading language isn't worth much, since you can recreate whatever the shading language does (in most cases, not all) faster with image texture mapping.


Darn it, I'm getting entirely too worked up over this. Sorry about that.


Oh, that's nothing; check out the discussion I had with the Aqsis developers, that was fun! :)

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Postby Jamesk » Tue May 27, 2003 9:25 am

Just for the record:

Cessen--> Your work on the internal renderer is just excellent. Just keep going!

Green--> Your work on the RIB-export is just excellent. It is the only really useful way to make an external rendering interface. Just keep going!

For Blender 3.0, if that is to be a total rewrite from scratch, I do hope that the scenedata-to-renderengine connection will be abstract enough to allow renderers to be plugged in just like any other component (like BRDF models, textures, compositing processors and so on can now). That would open up more possibilities. If Blender has a large enough number of users by then, we might very well see makers of high-end renderers like Mental Ray, finalRender and such provide high-quality interfaces for their products.

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Tue May 27, 2003 4:57 pm

Jamesk wrote:Green--> Your work on the RIB-export is just excellent. It is the only really useful way to make an external rendering interface. Just keep going!


Oh, don't get me wrong. I do appreciate Green's work on the RIB-exporter, and I think that it will be very, very useful. I wasn't complaining about that. I was complaining about the RenderMan spec' itself, which is something that Green has no control over.

Green wrote:I really think that creating two different shaders is actually more work for the person creating the shaders, as long as you think of it as one person creating the shaders in a text editor after getting instructions on what the shader is for. When researching a new look you basically always have a very, very tight integration between the shading model and any procedural patterns, where things change very often.


Umm... well, maybe. I see it sort of as taking power away from the artist (i.e. those who don't program). Let's consider Blender's current material system. There are three BRDF's, and seven built-in procedural textures. In RenderMan, if you wanted to be able to use any combination of those you would either have to have one huge shader with an absurd number of parameters, or 3x7 different shaders. And what if you want to mix procedural textures? Then you'd have to make a lot more shaders to accommodate those possibilities. And if you want to mix BRDF's (which, granted, is something that you can't do in Blender...yet), then you'd have to have even more shaders (or an even more parameter-heavy single shader).

I think that keeping procedural textures and BRDF's separate makes more sense. Then the artist can "mix 'n' match" them however they want, having the procedural textures affect any parameter(s) of the BRDF(s), without having to mess around with code or compilation.

Maybe I'm missing some point that you're making, but I really don't see how lack of modularity is good for either the programmer or the artist.

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Tue May 27, 2003 8:03 pm

cessen wrote:
Jamesk wrote:Green--> Your work on the RIB-export is just excellent. It is the only really useful way to make an external rendering interface. Just keep going!


Oh, don't get me wrong. I do appreciate Green's work on the RIB-exporter, and I think that it will be very, very useful. I wasn't complaining about that. I was complaining about the RenderMan spec' itself, which is something that Green has no control over.

Green wrote:I really think that creating two different shaders is actually more work for the person creating the shaders, as long as you think of it as one person creating the shaders in a text editor after getting instructions on what the shader is for. When researching a new look you basically always have a very, very tight integration between the shading model and any procedural patterns, where things change very often.


Umm... well, maybe. I see it sort of as taking power away from the artist (i.e. those who don't program). Let's consider Blender's current material system. There are three BRDF's, and seven built-in procedural textures. In RenderMan, if you wanted to be able to use any combination of those you would either have to have one huge shader with an absurd number of parameters, or 3x7 different shaders. And what if you want to mix procedural textures? Then you'd have to make a lot more shaders to accommodate those possibilities. And if you want to mix BRDF's (which, granted, is something that you can't do in Blender...yet), then you'd have to have even more shaders (or an even more parameter-heavy single shader).

I think that keeping procedural textures and BRDF's separate makes more sense. Then the artist can "mix 'n' match" them however they want, having the procedural textures affect any parameter(s) of the BRDF(s), without having to mess around with code or compilation.

Maybe I'm missing some point that you're making, but I really don't see how lack of modularity is good for either the programmer or the artist.


I don't disagree with anything you wrote; IMO it's simply a matter of perspective and target audience.

You would basically never write a shader that has all the different procedural textures layered in a user-specified way; it would result in a huge and slower-than-optimal shader. You should convert every (using Blender language) material to its own shader. The end user of an app that translates Blender's materials in such a way never (unless they really need to) touches the shaders that have been created; the app should cover everything they need. In fact, such a shader wouldn't need a single argument (though it's good to have as many as possible, just in case).

But even so, the RenderMan shading language was not developed with an "export from application" thought process in mind.
The users of the shading language were, and still are, people who write shaders in a text editor, and do this for very specific needs in high-end movies (though I have noticed a quite interesting increase of its use lower down in production houses, and even some freelance animators use it nowadays).

So to say that it's taking away power from non-programmers is, well... the shading language was made FOR programmers. When will you ever find your average Bryce user wanting to create a material in a text editor? They want buttons to push and sliders to drag.

Btw, this was all created before all the end-user 3D apps came out, way back in the dark ages of 3D graphics. You would be lucky to find any app that even had the capability to create 3D, much less one that could output .sl code back then.

If you put it like this:
The RenderMan shading language is bad because it's not modular enough, and you need a modular language because it will be better for the users that don't know how to program.

It is basically a useless argument: the users are programmers, and even if you had a modular programming language, the ones that don't know how to program still won't know how to use it.

So if you divided the shading models into their own shaders and the textures into their own shaders, you would then need another shader to bind the textures and shading models together.
And that is what the users would then use.

For example, the syntax could look like this:

modularSurface()
{
    shading Phong;
    color perlinNoise;
    color fluffyClouds;

    perlinNoise = perlin();
    fluffyClouds = billow();

    Phong = phong(perlinNoise, fluffyClouds);

    return Phong;
}

Would that be correct?
If so, you can already do this by using include files. You don't get different shaders, but where would the difference to the end user be? IMO the difference would just be a smaller number of shaders to compile if you use include files.
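As a rough illustration of that include-file approach (the file, function, and parameter names are invented), the equivalent of the modularSurface() sketch in actual .sl code might look something like this:

/* mypatterns.h -- hypothetical include file with reusable pattern functions */
color perlinPattern(point p)
{
    color c = color noise(p);             /* color-valued noise lookup */
    return c;
}

color billowPattern(point p)
{
    float n = float noise(p);
    return color(abs(2 * n - 1));          /* crude billow-like variant */
}

/* modularsurface.sl -- the surface shader only binds patterns to a shading model */
#include "mypatterns.h"

surface modularsurface(float Ka = 1, Kd = 0.6, Ks = 0.4, roughness = 0.1)
{
    normal Nf  = faceforward(normalize(N), I);
    point  Psh = transform("shader", P);
    color  base = mix(perlinPattern(Psh), billowPattern(Psh), 0.5);

    Ci = base * (Ka * ambient() + Kd * diffuse(Nf))
       + Ks * specular(Nf, -normalize(I), roughness);
    Oi = Os;
    Ci *= Oi;
}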

And also, I think you are overestimating the usage of procedurals; these days shaders are mostly used to layer together image textures with some nice shading model.
There is also some very interesting development going on right now in how to use raytracing efficiently from shaders (mostly for subsurface light scattering and global illumination).

Procedurals are a remnant from the days when using texture maps was too memory-consuming (not to say you can't create nice stuff with them, or that no one uses them anymore).

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Postby Jamesk » Tue May 27, 2003 8:38 pm

I was just performing some basic Carl Bildt-style peacekeeping.
Totally uncalled for, of course, but one can never be too careful. :D

Now go back to work, both of you! I'll just sit here on my lazy non-coding *ss and build some more armadillo-dogs...

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Tue May 27, 2003 8:58 pm

Green wrote:But even so, the RenderMan shading language was not developed with an "export from application" thought process in mind.
The users of the shading language were, and still are, people who write shaders in a text editor, and do this for very specific needs in high-end movies (though I have noticed a quite interesting increase of its use lower down in production houses, and even some freelance animators use it nowadays).


Umm... non-programmer artists use shaders all the time. It's just that they don't program them.
In fact, the concept of "shading" is actually a 2d concept: you "shade" pixels of an image. That is why they are called shaders in the first place, because they decide the final color.

The problem with grouping everything under a single concept of "shading" is that it causes a lot of redundancy. Consider the fact that the Blinn BRDF is used in a lot of different shaders. Why not separate it out? Leaving separate copies of it compiled into the various shaders makes about as much sense as storing redundant copies of Greek-pillar geometry just because each one has a different texture map on it.

Green wrote:So to say that it's taking away power from non-programmers is, well... the shading language was made FOR programmers.


The reason for shaders was so that renderers wouldn't be limited to a set of built-in "shading" algorithms (such as the Phong model in Blender).

Green wrote:When will you ever find your average Bryce user wanting to create a material in a text editor? They want buttons to push and sliders to drag.


Precisely. So if there is a set of shaders that can be combined (without having to do any programming), that makes things more flexible for the users. No? Would you like it if the procedural textures in Blender only worked with the Phong model, simply because they were tied into it for some arbitrary reason?
I think it would make a lot more sense if the programmers program components (such as the Blinn model, or a clouds procedural texture) and then the users can combine them however they want.

Green wrote:Btw, this was all created before all the end-user 3D apps came out, way back in the dark ages of 3D graphics. You would be lucky to find any app that even had the capability to create 3D, much less one that could output .sl code back then.


Exactly. It made sense back then (well, actually, that's disputable--but that's not what I'm arguing about). It doesn't make sense now. Hence my complaint.

Green wrote:If you put it like this:
The RenderMan shading language is bad because it's not modular enough, and you need a modular language because it will be better for the users that don't know how to program.

It is basically a useless argument: the users are programmers, and even if you had a modular programming language, the ones that don't know how to program still won't know how to use it.


No, no, no. The language doesn't need to be modular (it's fine just the way it is). It's the way that the compiled shaders interact with each other that needs to be modular.

Green wrote:So if you divided the shading models into their own shaders and the textures into their own shaders, you would then need another shader to bind the textures and shading models together.
And that is what the users would then use.


No, you wouldn't need a binding shader. That's what the renderer is for! The renderer would deal with allowing the shaders to communicate with each other (for textures affecting shader parameters, etc.) and it would deal with mixing the end colors of multiple shaders applied to a single surface. See?

Forget the language, I'm not even talking about that. I'm talking about once the shaders have all been compiled, how can the user combine them and have them interact? At the moment, there is no way (or, at least, not much of any way) for users to do that, and thus they are forced to learn how to program if they wish to combine shaders.

I think that was our misunderstanding (I knew there had to be something). I was talking about the compiled shaders, and you thought I was talking about the shading language itself. Sorry for not being more clear about that.

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

Postby thorax » Tue May 27, 2003 9:20 pm

Green said this:

So to say that it's taking away power from non-programmers is, well... the shading language was made FOR programmers. When will you ever find your average Bryce user wanting to create a material in a text editor? They want buttons to push and sliders to drag.

Btw, this was all created before all the end-user 3D apps came out, way back in the dark ages of 3D graphics. You would be lucky to find any app that even had the capability to create 3D, much less one that could output .sl code back then.



I don't know if you are for or against text-based languages...

I could describe a program as a graph: not like a flowchart, but as a tree of evaluation produced by a parser as the result of a pass over a source file. There is no reason to keep things script-based. I know for some it's considered to be easier, and I think it's thought to be more dependable and efficient, but there are advantages to creating an object-oriented language over a text-based language.

Also, I would never use Bryce in a million years, and I've used every 3D package since Sculpt 3D on the Amigas. I just think there are a lot of intensely stupid programming-only artists who think everyone else is an art-only artist. I've gone back and forth between the two, and I think a lot of you have as well. I was driven to program out of a need for a better package, and what I've found is that there are a lot of programs and programmers that lack program design skills.

To understand program design you have to be able to think of programs as objects, and as graphics of boxes attached with strings, and to think in terms of encapsulation. I'm 100% sure I could describe anything in the computer graphically, because I've taken assembly and digital electronics; it's nothing but a complex parallel association of objects with essential functions. You could produce an entire computer from NAND gates, and fundamentally NAND gates are nothing more than concepts: you put in two bit states (four possible combinations) and out pops a single bit. NANDs can be arranged into flip-flops, flip-flops can be made into bytes, and you can use NAND gates to make ALUs. You need a quartz-tuned clock to generate a clock signal, the flip-flops can be used to determine when to perform functions based on the clock, and memory is bunches of flip-flop concepts. When you turn the power on, that sets the state of a bit that turns on a set of components, which causes the computer to read from a specific point in ROM where the boot wedge is loaded; then the operating system is loaded from a disk, the operating system's kernel is executed, and the kernel sets up processes and queues (think of a ring of beads where each bead represents an event, and the operating system's main process traverses these beads, evaluating them and spawning processes which are put back onto the queue).

Text-based interfaces are just views onto the complex mechanism of computers in a human-readable way. But we can also understand graphical languages, so why do we choose a text-based interface over a graphical one, if computers really are, fundamentally, graphs of circuits associated by wires? Programs are nothing more than a series of operations stored in memory, read off, and operated on. Sequential circuits are simplifications of parallel circuits, so the ultimate computer language is one that resembles the way computers are designed: that of an IC CAD design program.

Why doesn't Blender allow artists to program this way? Would this be too hard?
Last edited by thorax on Tue May 27, 2003 10:57 pm, edited 2 times in total.

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Tue May 27, 2003 9:44 pm

Umm... non-programmer artists use shaders all the time. It's just that they don't program them.
In fact, the concept of "shading" is actually a 2d concept: you "shade" pixels of an image. That is why they are called shaders in the first place, because they decide the final color.


I never said that non-programmers don't use shaders. I said that the RenderMan shading language was developed for programmers and is still used by programmers.

In fact, the concept of "shading" is actually a 2d concept: you "shade" pixels of an image. That is why they are called shaders in the first place, because they decide the final color.

From a creation point of view, when writing a shader you think of it as shading points on surfaces. So for creators, and specifically when creating displacement shaders, it's a 3D concept.

The problem with grouping everything under a single concept of "shading" is that it causes a lot of redundancy. Consider the fact that the Blinn BRDF is used in a lot of different shaders. Why not separate it out?


Because it's only one line of text. If you separate it out, when you call it you will still only have one line of text.

So if there is a set of shaders that can be combined (without having to do any programming),

Well, this is the thing. What do you mean by "without any programming"?
Should they say how to combine the shading models and the textures in the .rib file?
Wouldn't that cause a lot of redundancy when using the same "look" for multiple objects? (Since you wouldn't use a third shader to bind it together.)

No, no, no. The language doesn't need to be modular (it's fine just the way it is). It's the way that the compiled shaders interact with each other that needs to be modular.


I think the problem I am having is that I don't seem to be able to see how you visualize the workflow of creating a "look" for an object that combines pre-compiled procedural textures with texture files and shading models, and that is able to bind them together in as complex a way as you would in a normal .sl file. How would something like this work in practice?

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Wed May 28, 2003 6:27 pm

(About the Blinn example.)
Green wrote:Because it's only one line of text. If you separate it out, when you call it you will still only have one line of text.


I've written the Blinn BRDF before--in tuhopuu for one--and I can assure you that it is not one line of text.
But that's not really the point, anyway. The point is that it is a common thing that can be used in multiple places, so why not separate it out? A possible analogy would be the distinction between static and dynamic builds of Blender: the dynamic build allows people to use whatever drivers they wish. Similarly, separating the BRDF from the rest of the shader would allow a person to use whatever BRDF they wanted.

Green wrote:Well, this is the thing. What do you mean by "without any programming"?
Should they say how to combine the shading models and the textures in the .rib file?


Yes.

Green wrote:Wouldn't that cause a lot of redundancy when using the same "look" for multiple objects? (Since you wouldn't use a third shader to bind it together.)


That's a good point. I'll have to think about that one. My first-pass reaction, though, is that there would be some way of specifying a "material" that could be applied to any surface. Granted, that is sort of what shaders are, except that shaders aren't organized and broken down that way. But that's just a first-pass reaction...

Green wrote:I think the problem I am having is that I don't seem to be able to see how you visualize the workflow of creating a "look" for an object that combines pre-compiled procedural textures with texture files and shading models, and that is able to bind them together in as complex a way as you would in a normal .sl file. How would something like this work in practice?


Hmm. Well, I suppose I should start by explaining my basic assumptions. First of all, BRDF's and procedural textures are basically mini-programs. They are functions that take input and give output. Thus, I see them as being the only things that really need to be compiled.

The ways in which they are combined are much more like definitions, or structures, and thus I don't think it really makes sense to have that aspect of a material be compiled into a shader; I think it would make a lot more sense to have that be part of the RIB files.

My idea for how they would be put together is a sort of tree: textures get mixed (ideally both file textures and procedural textures would have some sort of abstraction so that they would be effectively identical for this) and then those mixed values would affect arbitrary parameters of any number of shader models (typically a BRDF), which would then be evaluated and mixed to provide the final color.
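A purely hypothetical sketch of what such a declarative tree might look like, written in a RIB-like style (none of these requests exist in the actual RenderMan spec; every name here is invented just to show the structure):

# Hypothetical syntax, not real RIB: the renderer, not a compiled shader,
# binds precompiled texture and BRDF blocks together.
MaterialBegin "cloudyMetal"
    TextureNode "clouds1" "clouds" "float frequency" [4]
    TextureNode "rust"    "file"   "string filename" ["rust.tx"]
    MixNode     "blend"   ["clouds1" "rust"] "float factor" [0.5]
    BRDFNode    "base"    "blinn"  "float roughness" [0.2]
    Connect     "blend"   "base.Kd"          # a texture drives a BRDF parameter
MaterialEnd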

I haven't really thought it out particularly thoroughly, but that's the basic idea.

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Wed May 28, 2003 8:43 pm

I've written the Blinn BRDF before--in tuhopuu for one--and I can assure you that it is not one line of text.


I was talking about using functions from an include file.

something like
color = blinn(color, normalandshadingpointsetc..);
could do just fine.

Even so, a lot of the less complex shading models (plastic or constant, for example) fit just fine in one line of code.
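For instance, the core of the standard plastic model (with Nf, the parameters, and the usual shader setup assumed to be in place) is essentially one statement:

Ci = Os * (Cs * (Ka * ambient() + Kd * diffuse(Nf))
           + specularcolor * Ks * specular(Nf, -normalize(I), roughness));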

The point is that it is a common thing that can be used in multiple places, so why not separate it out?


Because it means more work for the shader writer. And it will clutter up the .rib files (they are cluttered up enough as it is). When dealing with RenderMan shaders it's often hard to know what is textures and what is shading. Would raytraced reflections be textures? Would translucence be textures? Would bump mapping be textures? Would subsurface light scattering be textures? It goes on and on :)

If they are not textures, and are part of the shading model, then you end up with a very static shading model that has to have everything under the sun built in.

Or would you want to have the old way of writing a surface shader and the new way living side by side?

The dynamic build allows people to use whatever drivers they wish. Similarly, separating the BRDF from the rest of the shader would allow a person to use whatever BRDF they wanted.


I don't think the end user is going to go digging through a 500-frame .rib file and add 10,000 lines of text to change one shading model. They want buttons to push and sliders to drag.

Or do you expect them to write the scene themselves in a text editor as well, like people did with POV-Ray in the old days?

My first-pass reaction, though, is that there would be some way of specifying a "material" that could be applied to any surface. Granted, that is sort of what shaders are, except that shaders aren't organized and broken down that way. But that's just a first-pass reaction...


We're getting closer and closer to POV-Ray here. Do you think the end user is going to want to write complex interconnecting "looks" that connect shading models and textures in a sane way? I doubt it. It's way too slow to write, and frustrating. And why would they want to do it? What added benefit do they get? I really don't see anything in this concept that you cannot do with include files in a surface shader. (Am I missing something?)


When dealing with the mixing of textures to get a color for the shading:
Who writes the texture files that get mixed?
Should the RenderMan spec include a specified set of textures as well?

As it currently is, the number of free, interesting RenderMan shaders on the net is not exactly mind-blowing.
The shaders that are there are used to learn how to write shaders.

I see the same development as with POV-Ray happening with this idea. You get a lot of sites with very specific textures to affect objects, like galaxies and explosions, etc.

But the user does not get any real control over them.

It's basically RenderMan turned into some sort of Bryce/Poser/POV-Ray app. You get a pre-packaged deal, and the result you get is cliché.

And to get there they will have to edit huge text files by hand.

macke
Posts: 25
Joined: Tue Oct 15, 2002 11:57 pm

Postby macke » Thu May 29, 2003 3:11 pm

cessen wrote:It can still look ok (so I'm not saying that it shouldn't be an option), but it's not physically accurate


Physical Smysical. If it looks good, it is good, end of story. A simple Gaussian blur combined with boosted highlights and some cheap bokeh solution works wonders in most compositions, and is easy to use and not too slow either. Most people just overdo the effect anyhow, making lots of blurriness everywhere. Fact is, DOF doesn't make a picture, but it's the detail of one.

On to the shading note. I guess Cessen is talking about building shading blocks. Have a look at the rendertree in XSI for a good example of this. What you do is connect nodes back and forth, which all end up in a material node, which is kind of a final destination. This gives the user the ability to mix anything with anything, BRDF's with textures or whatever. Depending on how many building blocks you have, you can create more and more complex shading networks, which can be just as advanced as any programmed shader. The renderer takes care of all the binding between nodes, which is a lovely thing for shader writers. Writing shaders for Mental Ray is a pleasure, even for non-coders like myself. Have a look at it, it might give you some ideas.

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Thu May 29, 2003 4:35 pm

macke wrote:
cessen wrote:It can still look ok (so I'm not saying that it shouldn't be an option), but it's not physically accurate


Physical Smysical. If it looks good, it is good, end of story. A simple Gaussian blur combined with boosted highlights and some cheap bokeh solution works wonders in most compositions, and is easy to use and not too slow either. Most people just overdo the effect anyhow, making lots of blurriness everywhere. Fact is, DOF doesn't make a picture, but it's the detail of one.

On to the shading note. I guess Cessen is talking about building shading blocks. Have a look at the rendertree in XSI for a good example of this. What you do is connect nodes back and forth, which all end up in a material node, which is kind of a final destination. This gives the user the ability to mix anything with anything, BRDF's with textures or whatever. Depending on how many building blocks you have, you can create more and more complex shading networks, which can be just as advanced as any programmed shader. The renderer takes care of all the binding between nodes, which is a lovely thing for shader writers. Writing shaders for Mental Ray is a pleasure, even for non-coders like myself. Have a look at it, it might give you some ideas.


Well, Macke, I know for a fact that you know how to program; calling yourself a non-coder gives the real non-coders too good a name :).

You can take a look at ShaderMan if you want to see how to connect nodes in the RenderMan shading language.
It's very similar to the XSI render tree, but unfortunately the ShaderMan developers just created a direct GUI for the building blocks, so it's not very intuitive.

And also, a lot of the building blocks in the XSI render tree are actually building blocks made by Softimage; they are not made by mental images.

There is really no reason why you wouldn't be able to convert any shader tree to the RenderMan shading language.

A good example of this would be Maya's Hypershade; the MayaMan RenderMan exporter can handle converting that just fine.

(Also, is it grammatically correct to say you are "writing" a shader when using the GUI drag-and-drop thing?)

--------------------------------------------------


I think it's a bad idea to have some sort of connection mechanism in the .rib file. You would really need to have it in some sort of compiled form, so the renderer wouldn't have to compile the same shader for every frame you render.

But if you really wanted to get this type of functionality (without using include files, which work more or less exactly like the XSI shader tree blocks, having any number of inputs and one output):

Why not just make everything a texture? Shading models shouldn't need to be special; they basically do what a texture does anyway: generate a color (or normal, or alpha value, etc.).

So you'd have a system of blocks that can be connected, the blocks having known inputs and outputs, and within a block you can connect other blocks.

So in the end you can write everything in one file if you want to, but use other texture/shading-model blocks from separate files if that is what you want.

The only thing that would differentiate the blocks would be what return type they send.

In effect this would mean adding just one more type of shader, one that can be linked from any other type of shader.
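Something along these lines, perhaps (a purely hypothetical sketch; there is no "block" shader type in the actual RenderMan spec, and the keyword and names are invented):

/* Hypothetical "block" shader type: typed inputs, one typed output,
   callable from any surface, displacement or other block shader. */
block color cloudPattern(point p; float frequency = 4)
{
    float n = float noise(p * frequency);
    return mix(color(0.3, 0.4, 0.8), color(1, 1, 1), n);
}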

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Thu May 29, 2003 4:47 pm

Green wrote:Because it means more work for the shader writer.


Perhaps I'm missing something really obvious, but how would that make it *more* difficult for the shader writer? (Being someone that has written and used shaders myself, I can tell you that it would make things a hell of a lot easier for me.)

Green wrote:And it will clutter up the .rib files (they are cluttered up enough as it is).


Well, I suppose this brings me to another point: why RIB files in general aren't a very well thought-out system. But I suppose I'll stick to the subject at hand for now. ;-)

Green wrote:When dealing with RenderMan shaders it's often hard to know what is textures and what is shading.


That was my main complaint to begin with! And that's precisely *why* they need a better organization scheme.

Green wrote:Would raytraced reflections be textures?


No.

Green wrote:Would translucence be textures?


As in alpha? No. As in ray-traced refractions? No. As in fake environment-map based refractions? Yes (at least, the environment map itself would be--how that environment map is used would be a projection/mapping issue).

Green wrote:Would bump mapping be textures?


The act of altering the surface normal would not be, but the variation over the surface of *how* the surface normal is altered would be.

Green wrote:Would subsurface light scattering be textures?


No. (Unless it's faked with image-based methods, in which case the image aspect of it would be textures.)

Green wrote:If they are not textures, and are part of the shading model, then you end up with a very static shading model that has to have everything under the sun built in.


I thought you were arguing against my point...
Anyway, that is how RenderMan shaders are set up right now, and that is a large part of my complaint: procedural textures are written into the shading models.

Green wrote:I don't think the end user is going to go digging through a 500-frame .rib file and add 10,000 lines of text to change one shading model. They want buttons to push and sliders to drag.


I fail to see how what I'm suggesting conflicts with a slider/button style interface for non-programming users. In fact, it seems to me that what I'm suggesting would fit *better* with a sliders 'n' buttons style interface.

macke wrote:Physical Smysical. If it looks good, it is good, end of story.


I agree completely. I wasn't saying that it shouldn't be an option (in fact, I said the opposite). I was just pointing out, in a "just in case you want to know" manner, that it's not physically accurate. But that doesn't mean at all that it's not useful.

macke wrote:On to the shading note. I guess Cessen is talking about building shading blocks. Have a look at the rendertree in XSI for a good example of this.


Yes, I was thinking of something along those lines. Thanks for making my gibberish understandable, Macke. ;-)

