Because they have to compile and keep track of different shaders to combine into one look, instead of keeping track of different function names.

Perhaps I'm missing something really obvious, but how would that make it *more* difficult for the shader writer? (Being someone who has written and used shaders myself, I can tell you that it would make things a hell of a lot easier for me.)
(You still haven't answered what added benefit this brings over include files.)
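For concreteness, the include-file route being referred to would look roughly like this; the header name and the stripe function are made up purely for illustration. The procedural pattern lives behind a function name in a header, and whichever surface shader wants it just calls that name:

    /* stripes.h -- hypothetical header holding only the procedural pattern */
    float stripe_pattern(float ss; float freq)
    {
        /* simple stripes along the s texture coordinate */
        return smoothstep(0.4, 0.6, mod(ss * freq, 1));
    }

    /* striped_matte.sl -- the surface shader pulls the pattern in by name */
    #include "stripes.h"

    surface striped_matte(float Ka = 1, Kd = 0.8; float stripefreq = 10)
    {
        normal Nf = faceforward(normalize(N), I);
        color basecolor = mix(color(1, 0, 0), color(1, 1, 1),
                              stripe_pattern(s, stripefreq));
        Oi = Os;
        Ci = Oi * basecolor * (Ka * ambient() + Kd * diffuse(Nf));
    }

The preprocessor pastes the header in at compile time, so the only thing the shader writer keeps track of is the function name.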
I was thinking of taking the shading information on the backside of the currently shaded point's polygon (diffuse(-Nf)) and adding it to the normal shading.

As in alpha? No. As in ray-traced refractions? No. As in fake environment-map based refractions? Yes (at least, the environment map itself would be--how that environment map is used would be a projection/mapping issue).
To get, for example, the effect you see on a leaf or a lampshade when a light shines through it.
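A minimal RSL sketch of that diffuse(-Nf) backside idea; the shader name and the Kt weight (how much back-side light to add) are made up for illustration:

    surface leaf_translucent(float Kd = 0.8, Kt = 0.4;
                             color basecolor = color(0.3, 0.6, 0.2))
    {
        normal Nf = faceforward(normalize(N), I);
        /* ordinary front-side diffuse shading */
        color front = diffuse(Nf);
        /* light arriving on the back side of the surface */
        color back = diffuse(-Nf);
        Oi = Os;
        Ci = Oi * basecolor * (Kd * front + Kt * back);
    }

How visible the effect is depends, of course, on having lights that actually illuminate the point from behind.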
As it is right now you put exactly what is needed, and not one line of text more, into the surface shader. With this system you would get everything under the sun into the shader.

I thought you were arguing against my point...
Anyway, that is how RenderMan shaders are set up right now, and that is a large part of my complaint: procedural textures are written into the shading models.
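To make that concrete, here is the usual shape of such a shader (a made-up blotchy plastic): the procedural pattern and the shading model sit in one and the same surface shader, so reusing the pattern with a different shading model means copy-and-paste.

    surface blotchy_plastic(float Ka = 1, Kd = 0.5, Ks = 0.5, roughness = 0.1;
                            float freq = 4)
    {
        normal Nf = faceforward(normalize(N), I);
        vector V = -normalize(I);
        /* procedural texture, hard-wired into the shader body */
        float blotch = float noise(P * freq);
        color basecolor = mix(color(0.8, 0.7, 0.2), color(0.3, 0.2, 0.1), blotch);
        /* shading model (standard plastic), fused with the pattern above */
        Oi = Os;
        Ci = Oi * (basecolor * (Ka * ambient() + Kd * diffuse(Nf))
                   + Ks * specular(Nf, V, roughness));
    }

Pulling the blotch calculation out behind a function name (as in the include-file sketch above) is one way to split the two; the question in this thread is whether that split should be supported more directly than that.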
As it currently is you can do anything you want with the RenderMan shading language. I don't think this would be easier to use when writing a shader translator for the Blender material editor.

I fail to see how what I'm suggesting conflicts with a slider/button style interface for non-programming users. In fact, it seems to me that what I'm suggesting would fit *better* with a sliders 'n' buttons style interface.
If you want to talk about the graphical abstraction, then that's another thing.