crafter-blender integration

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

bobthevirus
Posts: 0
Joined: Wed Jun 25, 2003 7:11 am

Post by bobthevirus »

[Image: abstract diagram of nested blocks "a", "s_a", "s_b", "s_c" with their inputs and outputs]

This image illustrates, in an abstract kind of way, the method that the above XML uses. It shows that there are given inputs and outputs for each control block, and attempts to show what happens as you choose to go into more and more detail for each part - i.e. you only see "a"'s inputs and outputs, and "s_b"'s and "s_c"'s, when you are not directly editing "a"'s internals; once you open "a", you see that it is just a wrapper for "s_a"... Of course it could be a wrapper for much more...

To the end user this would look something like the crafter interface does now, with each input labelled and possibly typed by its shape - sorta like

Code: Select all

|-----|
|      |
0      D 
|      |
|-----|
might indicate an int (round!) input and a float output. Converter blocks would probably be useful if connections were typed.... Otherwise typing could be internal... though that would make debugging _way_ harder, and the instances of converting between types would be so rare that it probably isn't required...

Anyway, back on topic: is this way of linking and "connectedness" going to work? The semantics are not important, which is why I created the picture - to try and make it more abstract. Of course it ends up looking sorta like what we are actually going to use for the GUI... What does that say about the GUI's abstractness :-P

G

bfvietnam
Posts: 0
Joined: Wed Apr 21, 2004 8:54 pm

Post by bfvietnam »

A note on my XML

This format would support the concept that the blocks and the blocks' modules are connected by nodes that act as routers of information, like pipes in Unix. A pipe is a round-robin queue (implemented as an array, but it can be implemented as a linked list) where the queue changes size dynamically. The purpose of this is that modules/blocks can buffer input or output, and in the case of output, the pipes that accept output can be notified when output is available. I will diagram up how this looks later..
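In the meantime, here is a minimal C++ sketch of such a pipe, using a grow-on-demand std::deque in place of the round-robin array (the Pipe name and the on_data callback are illustrative, not existing crafter code):

Code: Select all

#include <deque>
#include <functional>

// A pipe between two modules: a dynamically sized FIFO queue.
// std::deque stands in for the round-robin array described above.
template <typename T>
class Pipe {
public:
    // Optional callback so the consumer can be notified when output arrives.
    std::function<void()> on_data;

    void push(const T &value) {       // producer side
        queue_.push_back(value);
        if (on_data) on_data();       // tell the consumer data is ready
    }

    bool pop(T &out) {                // consumer side
        if (queue_.empty()) return false;
        out = queue_.front();
        queue_.pop_front();
        return true;
    }

private:
    std::deque<T> queue_;             // grows and shrinks dynamically
};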

I don't think it would be too hard to debug, especially if each module had a test module, or maybe invariants could be specified (certain statements that should always be true; when one is false it kicks an exception event - a complex type of error - that notifies the user that the shader is having problems processing input). By localizing control to the classes, it forces the classes to deal with the problems of handling varying combinations of input, and it makes the modules easy to debug, since it's easy to see what caused them to screw up.

You can strongly type the inputs and outputs, but this reduces the chance for modules to be reusable components. Either the module handles this complexity, or you use type-casting components external to the modules, or you use both. If you use both, there is a better chance the graph of the modules will look less complex, and over time the complexity will be reduced as users arrive at a core set of input and output parameters. Note that if you leave this decision up to the modules, some modules can be created to perform transformations on data input as new data types evolve..
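As a sketch of how such an invariant might kick an exception event (the ShaderError type and the checked range are made up for illustration):

Code: Select all

#include <stdexcept>
#include <string>

// A hypothetical exception event carrying enough context to tell the
// user which module failed and why.
struct ShaderError : std::runtime_error {
    explicit ShaderError(const std::string &msg) : std::runtime_error(msg) {}
};

// Example invariant inside a module: color inputs must stay in [0, 1].
void check_color_invariant(float r, float g, float b) {
    if (r < 0.0f || r > 1.0f || g < 0.0f || g > 1.0f || b < 0.0f || b > 1.0f)
        throw ShaderError("color input out of [0, 1] range");
}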

bobthevirus
Posts: 0
Joined: Wed Jun 25, 2003 7:11 am

Post by bobthevirus »

kk, I'm really having problems understanding what you just said, probably because I am confused.......

Anyway, if it means what I think it means, how does the renderer actually use these classes? The way I (inexperienced as I may be) was envisioning it, Blender would have methods defined that the material code would call if the shader asked for them... Admittedly, this is way out of my depth, but surely this sort of thing would happen once for each pixel, and therefore buffers and stuff would be pointless, because for each pixel there is a linear (but optimisable - bits can be skipped if something is out of range) pipeline for the data to go through...

About the typing: I can see no real advantage to not having it typed, as converter blocks could be stuck in by the end user, and their values tweaked if altering them is necessary... The thing is that 80% of the time you are not going to want to stick a square float into an RGB hole, and if you do, you definitely want to decide how the values get converted.... Also, doing it with typed values gives advantages when exporting to RenderMan and yafray and POV-Ray and stuff...

So, here is a question for the rest of you avidly hanging on our every word (yeah right!): should the values between blocks be typed or not? And more importantly, why?

g

bobthevirus
Posts: 0
Joined: Wed Jun 25, 2003 7:11 am

Post by bobthevirus »

Note: See question above about typing ^^^^^^^^^^^^^^^

Anyway, sorry bfvietnam, I didn't see your post about the XML - I looked before I posted but ended up on the next page, so I (incorrectly) assumed that I'd had the last post on the previous page...

KK, I definitely agree with you about "block" instead of "control"; it makes it much more obvious what it is (but that now means I'll have to use "nodes" where I was using "blocks" before)... Anyway, the way you do this:

Code: Select all

<method name="get_result">
   <code type="crafterlisp">
      (include "math.lsp")
      (overloaded_mult operand1 operand2)
      # note that the types of the operands are not enforced;
      # this is handled internally by a call to "math.lsp"..
   </code>
</method>
bit is kinda troubling to me... It doesn't fit with my idea of the blocks at all - instead it relies on the internal code to return the value requested... Don't get me wrong, it's probably better than my idea, but to me it doesn't fit either the graphical language or the elegantness (is there such a word?) of the XML... I would prefer to keep the XML linear and to the point, with the minimum possible fuss inside each block.

I have some random ideas for actually using these materials. They basically involve parsing the XML from the bottom up and having it set up the C++ to use a linear series of pointers to functions (C++ can do that, right?), with only very occasional branching for each material type... Basically, it is built up backwards from setSpec(int red, int green, int blue), with a series of connecting functions and working functions (the connecting ones being needed because of reasons...) back to the inputs.... All this is in a Shader class... probably inside a Blender datablock that could be linked from an existing material block (though the renderer appears to be so inbred at the moment, I doubt that would be much use...). This would provide very quick execution time, probably on the order of what it is now (well, actually I imagine that more objects will find their way in, and so it will become impractical...)

So, in other words, before we go any further with the XML (on which we appear to have reached at least an abstract agreement, and which will just have to be "tweaked" to reach a parseable (and useable) level), I think I am going to get straight into the code and see if I can figure out exactly how the renderer actually works.

I WILL NEED HELP WITH THIS.... Could those who have more experience please reply here with even very basic descriptions of ways of integrating a "material module" into the code, and also, based on the last two XML samples, of how to convert this into a useable format for the renderer, as I am 99% sure my way is just screwy....


[edit]Sorry about the little bit about your method call being screwy; I am simply trying to force it into my conception, which has much smaller block sizes than yours and a tree-like structure, versus yours with the more list-type (or OO) structure...[/edit]

bfvietnam
Posts: 0
Joined: Wed Apr 21, 2004 8:54 pm

Post by bfvietnam »

As for the dual representation of the relationships between the modules, one could probably be derived from the other..

The values probably need to be typed if you want to compile the source, as it would make it easier to control how things are stored. But for the block-level stuff there needn't be strong typing of variables.. That is the user's level of interaction, and they are not really going to care about variable types - or worse, they'll become intimidated when their shaders don't work.. As for the float-to-RGB conversion: what would one do with HDRI? Should RGB represent colors in floating point by default to support HDRI? Why does it have to be 0 through 255?

To convert from RGB to float it would be something like ((R/256 + G/256 + B/256) * (1.0/3)).
And float to RGB would be something like R = G = B = floor(256 * f), with f clamped to [0, 1).

It may be the case, especially if working with film, that you don't use 24-bit color, so you might want to bump up the internal representation of what RGB is.. It's best if R, G and B are each represented internally with floats; the conversion is then easier than having to convert from an integer representation to float and back..

RGB to float would then be (R + G + B)/3 (in the integer version above you need the extra multiply to signal to the compiler that you need the cast to float). The other way, you get R = G = B = f. And when you want an RGB value in 24-bit form, you can convert it.. But I would imagine that when Blender does its rendering and ray-tracing, it will need to use floats - if not now, eventually - to do film work and use HDRI.. Also, it may be the case that future color formats use wavelength and intensity of color..
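Putting both conversions in one compilable sketch (assuming 8-bit channels on the integer side, and using 255 rather than 256 so pure white round-trips exactly; the function names are invented):

Code: Select all

#include <algorithm>
#include <cmath>
#include <cstdint>

// 24-bit RGB -> grayscale float, averaging the three channels.
float rgb_to_float(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return (r + g + b) / (3.0f * 255.0f);   // pure white maps to 1.0
}

// Grayscale float -> 24-bit RGB, clamping to the representable range.
void float_to_rgb(float f, std::uint8_t &r, std::uint8_t &g, std::uint8_t &b) {
    float c = std::min(1.0f, std::max(0.0f, f));
    r = g = b = static_cast<std::uint8_t>(std::lround(c * 255.0f));
}

// With float channels internally (as argued above), both conversions
// collapse to (R + G + B) / 3 and R = G = B = f.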

If we strongly type the variables internally, then shaders constructed earlier may have to be rewritten to handle new color formats. I guess it wouldn't be too hard to convert old shaders to new color-component representations. But another question: shouldn't the user be able to easily toggle from one kind of conversion to another at render time? If so, how would you code this into a shader that uses strongly-typed inputs/outputs?


I think there is a solution here.. The links between (high-level) blocks and (low-level) modules should be a type of node themselves..

It's possible, though, that these links could handle the chore of type conversion for the modules/blocks, or even offer methods to the blocks/modules to handle the conversions explicitly.. Like:

Code: Select all

class link {  // I tried to optimize this a bit..
    void *temp;     // temporary pointer to data
                    // (for purposes of implicit conversion)
    object *input;  // pointer to the block sending data..
public:
    void *get(const char *expected_type);   // obtains data from the sender,
                                            // converted to the expected type
    void *get_raw(const char **type_out);   // obtains raw data from the
                                            // sender, without conversion
};
That is..

Code: Select all

(block1) ----input--->{link}-------output------>(block2)

Using implicit type conversion:
     link points to block1, and block2 has a pointer to link..
     block2 calls link's get() with its expected data type.
     Link calls block1 and requests an evaluation.. block1
     passes link the data and its data type.. Link is left with
     the responsibility of handling the conversion, then
     link gives the data to block2 as the type block2 expected..

Using explicit type conversion:
     Same as above, but block2 doesn't have to specify the data
     type it's expecting.. And when the data is returned to block2
     it is given raw, without conversion, but block1 still has to
     specify the data type (as a string)..
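Expanding the link class above into a compilable sketch - the evaluate() interface and the string type tags are assumptions, and only one conversion (int to float) is shown:

Code: Select all

#include <cstring>
#include <stdexcept>

class object {
public:
    // The sender evaluates itself and reports its data plus a type tag.
    virtual void *evaluate(const char **type_out) = 0;
    virtual ~object() {}
};

class link {
    object *input;     // block1, the sender
    float converted;   // storage for an implicitly converted result

public:
    explicit link(object *sender) : input(sender), converted(0.0f) {}

    // Implicit conversion: block2 names the type it expects and the
    // link converts whatever block1 produced into that type.
    void *get(const char *expected_type) {
        const char *actual = 0;
        void *data = input->evaluate(&actual);
        if (std::strcmp(actual, expected_type) == 0)
            return data;                         // already the right type
        if (std::strcmp(actual, "int") == 0 &&
            std::strcmp(expected_type, "float") == 0) {
            converted = float(*static_cast<int *>(data));
            return &converted;                   // int -> float handled here
        }
        throw std::runtime_error("link: no conversion available");
    }

    // Explicit conversion: hand back the raw data plus its type tag and
    // let block2 convert it however it likes.
    void *get_raw(const char **type_out) {
        return input->evaluate(type_out);
    }
};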

The advantage of this implementation is that the link node can be overloaded to handle newer data types as they come about, and these data-type conversions can be applied to the entire shader configuration.. And the complexity of conversion is offloaded onto the link data type.

It could be that there are various styles of links: some link types just pass the data directly without conversion, another link type does implicit conversion, and possibly another is user-defined (and may, for instance, do things like color-cube conversions on the data)..

Visually, in Crafter, these links could be represented with different colors: blue for a basic link (no conversion), red for implicit conversion, and yellow for user-defined conversion..

It's also possible that the valves I talked about previously, having to do with the utilization of sliders, could be made into a link type.. This is closer to what you wanted, but rather than the slider showing up as a box with a slider on it, it could be a line with varying lightness (the slider would show up when the user clicks on the link or passes over it).. That way you can visually tell how much of one module's output is contributing to the input of another module.. It also wouldn't clutter the drawing of the modules with extra boxes that aren't really modules. And in the future the link node type could still be overloaded to allow other kinds of interface components and styles.. However, I can just see that at some point someone will modify the link to allow a module to call another one multiple times for other kinds of data, and that would result in confusion for the designers of the shaders..

If you are worrying about whether this would compile - I mean, whether the resulting source would compile - it would, given that the blocks/modules define their inputs and outputs distinctly and that the conversion from one data type to another can be reduced to code.. The idea with implicit conversion is that the user doesn't have to think about it, but the code can still be reduced to static code or to a strongly-typed language.. It's just a matter of who takes the responsibility of managing the conversion: the shader designer or the crafter developers.

bfvietnam
Posts: 0
Joined: Wed Apr 21, 2004 8:54 pm

Post by bfvietnam »

bobthevirus wrote:Note: See question above about typing ^^^^^^^^^^^^^^^

Anyway, sorry bfvietnam, I didn't see your post about the XML - I looked before I posted but ended up on the next page, so I (incorrectly) assumed that I'd had the last post on the previous page...

KK, I definitely agree with you about "block" instead of "control"; it makes it much more obvious what it is (but that now means I'll have to use "nodes" where I was using "blocks" before)... Anyway, the way you do this:

Code: Select all

<method name="get_result">
   <code type="crafterlisp">
      (include "math.lsp")
      (overloaded_mult operand1 operand2)
      # note that the types of the operands are not enforced;
      # this is handled internally by a call to "math.lsp"..
   </code>
</method>
bit is kinda troubling to me... It doesn't fit with my idea of the blocks at all - instead it relies on the internal code to return the value requested... Don't get me wrong, it's probably better than my idea, but to me it doesn't fit either the graphical language or the elegantness (is there such a word?) of the XML... I would prefer to keep the XML linear and to the point, with the minimum possible fuss inside each block.
There are two node types.. Blocks are high-level, modules are low-level.. The modules would use methods.. The blocks would also have methods (possibly - but blocks can contain modules, and some blocks are predefined; Mult, for example, would most likely be predefined).. But the methods of modules should be the only ones to allow the embedding of scripting languages.. The purpose of the scripting-language embed is in case someday in the future crafter is to have plugins that support alternate scripting languages. The scripting language that crafter uses internally - to stick with my original plan to have a visual language - can be XML derived from a flow-charting language..

Here are some things to keep in mind when I talk about blocks, classes and modules:

a module is an instance of a class..
a block configures and connects modules..
code can only be defined in classes.
blocks needn't always contain modules, but may be predefined.
similarly, crafter will have its own predefined classes.
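In C++ terms the distinction might look roughly like this (all the names here are hypothetical):

Code: Select all

// A class defines code; crafter would ship some predefined ones.
class MultClass {
public:
    float get_result(float a, float b) { return a * b; }
};

// A module is an instance of a class.
MultClass mult_module;

// A block carries no code of its own; it only configures and
// connects modules. Here it feeds two parameters through one module.
struct MultBlock {
    float operand1, operand2;
    float evaluate() { return mult_module.get_result(operand1, operand2); }
};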

As crafter is developed, classes of modules that become popular will be made into predefined types, just as blocks that become popular will become predefined blocks.. The difference is the level of complexity and of control. The blocks will likely become predefined at runtime if they are included as predefined types with crafter.. Whereas the predefined classes will be compiled into crafter and made reducible to executables for faster execution..
I have some random ideas for actually using these materials. They basically involve parsing the XML from the bottom up and having it set up the C++ to use a linear series of pointers to functions (C++ can do that, right?),
Are you talking about an array of function pointers? C++ does that with classes.. C does it with function pointers..

You don't want to parse the data every time; you want to reduce it to data structures that are actively resident in memory.. And you can either traverse the data structure and evaluate it, or you can have objects at the top of the tree calling objects below, requesting data.. The bottom-up evaluation occurs either by recursively evaluating the data structure or by returning the results of a call from a parent object.. It's the same thing, though.. The difference is that in one you are putting function pointers in every node of the tree, and in the other you have functions traversing the tree by calling each other on the rest of the tree (with a function for each node type in the tree). You can also traverse the tree using one function that knows how to handle all the node types and uses a stack to keep track of where it has been..
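A sketch of the objects-calling-objects style of bottom-up evaluation (the node types here are invented for illustration):

Code: Select all

#include <memory>

// Each node knows how to evaluate itself; parents call children,
// so results flow bottom-up without a separate traversal function.
struct Node {
    virtual float evaluate() const = 0;
    virtual ~Node() {}
};

struct Constant : Node {
    float value;
    explicit Constant(float v) : value(v) {}
    float evaluate() const override { return value; }
};

struct Multiply : Node {
    std::unique_ptr<Node> left, right;
    Multiply(std::unique_ptr<Node> l, std::unique_ptr<Node> r)
        : left(std::move(l)), right(std::move(r)) {}
    float evaluate() const override {
        return left->evaluate() * right->evaluate();  // recursion via calls
    }
};

Adding a second virtual method (say, a hypothetical generate_asm()) alongside evaluate() is how the same tree could emit code instead of values, which is the point about objects made just below.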

However, if you use objects instead of just data structures, you can reduce the complexity of evaluation by turning recursive function calls into calls between objects.. Also, objects can be designed to easily generate different kinds of results.. For instance, you could ask a shader tree of objects to generate assembly code, if each object has knowledge of how to generate assembly.. Whereas with the other methods, you would have to have separate functions and/or states for determining which mode of evaluation you were in and which functions to use.. The simplicity of using recursive functions in C would approach the simplicity of using objects, but not exceed it.. Because you still can't pass data types that are themselves objects - well, not very easily..

But you could implement this class stuff in C.. The first C++ compilers just converted C++ to C and compiled it as C.
with only very occasional branching for each material type... Basically, it is built up backwards from setSpec(int red, int green, int blue), with a series of connecting functions and working functions (the connecting ones being needed because of reasons...) back to the inputs.... All this is in a Shader class... probably inside a Blender datablock that could be linked from an existing material block (though the renderer appears to be so inbred at the moment, I doubt that would be much use...). This would provide very quick execution time, probably on the order of what it is now (well, actually I imagine that more objects will find their way in, and so it will become impractical...)

So, in other words, before we go any further with the XML (on which we appear to have reached at least an abstract agreement, and which will just have to be "tweaked" to reach a parseable (and useable) level), I think I am going to get straight into the code and see if I can figure out exactly how the renderer actually works.

I WILL NEED HELP WITH THIS.... Could those who have more experience please reply here with even very basic descriptions of ways of integrating a "material module" into the code, and also, based on the last two XML samples, of how to convert this into a useable format for the renderer, as I am 99% sure my way is just screwy....


[edit]Sorry about the little bit about your method call being screwy; I am simply trying to force it into my conception, which has much smaller block sizes than yours and a tree-like structure, versus yours with the more list-type (or OO) structure...[/edit]
Well, when coming up with a model that is to be scalable in the future, it's good to have a level of indirection or abstraction.. It's easy to think too much before doing something and not get anything done, but it's harder to redesign something that wasn't thought about much initially.. This is why we are having this discussion.. Where I waste more time is not on designs; I waste more time refactoring code that was poorly designed, or hacking code that should have been thought out beforehand, because 90% of the time when I'm "hacking at the terminal" I change my mind about things and then spend lots of time going around fixing things so they work with the new way.. When I sit down to design things first, I can work out how to do them in a tenth of the time it would take me to hack them straight.

As for C++ versus C: C++ allows developers to offer levels of abstraction, whereas with C it's not very easy, because there is not as fine a level of scope control as there is in C++.. In C you would need functions to pass pointers to the parent structures to the methods, so that the methods can make calls on the parent structure - but a method could just as easily overwrite the method pointers in the parent structure to point to other methods, or even to an illegal address in memory.. There are just a lot more feet to shoot with C, unless you are good with syntactical guns.

bobthevirus
Posts: 0
Joined: Wed Jun 25, 2003 7:11 am

Post by bobthevirus »

Heh, I would definitely be doing everything I possibly could in C++ (if I get good enough to actually do something) - I am better at the abstract OO-type stuff than at remembering everything, as you seem to have to do with C... It makes more logical sense...

You don't want to parse the data every time; you want to reduce it to data structures that are actively resident in memory.. And you can either
Yeah, sorry - what I was getting at was that the new Blender DNA object would simply be an object created from the XML whenever the GUI is changed...

i.e. the object would have a whole lot of child objects/whatevers, and each of those would get a pointer to the object that is linked to their "input", one for each... something like each input being a struct with

Code: Select all

struct input {
    std::string name;     // only for attaching to the right input when parsing...
    object *inputObject;  // where to get the value from
    value *cached;        // the value, filled in on first use
};

which is used by the getInput method of each object:

value *object::getInput(const std::string &name) {
    input &in = getInputStruct(name);
    if (in.cached != NULL)
        return in.cached;                         // already fetched once
    in.cached = in.inputObject->getOutput(name);  // pull from upstream
    return in.cached;
}
and the output(s) being fetched from the getOutput("[name]") method of each object... Each of these objects could possibly have buffering optimisations built in..

Anyway, it occurs to me now that this is probably a very inefficient way of doing things, as it involves at least six of these objects for each pixel on the screen, each with many sub-objects... These would have to be created (and allocated) by the renderer at render time from the prototypes stored in the Blender DNA thingy... I also believe that these would have to be at a relatively low level....

I would like to note now that this is where my lack of experience and minor knowledge of alternatives to a hard-core OO system kick in.... So, I will be disappearing for a while to look at other open-source renderers and see how they parse RenderMan/X3D/yafray XML... Also, this will give me more scope to understand the advantages of different systems... I'll post here if I find any good analyses, and add my findings to the wiki..

G

P.S. What would be the impact on speed if optimised Python was built from the XML? That would at least save me from going insane trying to understand how to fit a non-static language into a static binary...

bfvietnam
Posts: 0
Joined: Wed Apr 21, 2004 8:54 pm

Post by bfvietnam »

bobthevirus wrote:Heh, I would definitely be doing everything I possibly could in C++ (if I get good enough to actually do something) - I am better at the abstract OO-type stuff than at remembering everything, as you seem to have to do with C... It makes more logical sense...

You don't want to parse the data every time; you want to reduce it to data structures that are actively resident in memory.. And you can either
Yeah, sorry - what I was getting at was that the new Blender DNA object would simply be an object created from the XML whenever the GUI is changed...
The only time you would use XML is to export the internal data structures that crafter is using.. Internally, though, it would be using objects connected with pointers, and the GUI would modify the contents of the objects.. When you save your data out or read it in, you would go to XML.. But if you are using Blender's internal data framework to store the shaders, you don't use XML at all.. The purpose of using DNA, if at all, is to store shaders in blend files.. But I'm assuming that if you get that far, crafter is already compiled into Blender.. For now I assume that you haven't, and that Crafter generates XML that yafray reads in..

I guess when I talked about the XML format, I was talking about the way to organize data structures more than I was talking about XML.. I think I originally thought that somehow Blender was going to parse XML produced by Crafter into some internal data structures, but I didn't think crafter would do it every time its interface changed - though you could do that.. It's just that in Blender there would have to be code that turns XML into data structures in Blender (which wouldn't be a bad idea for future projects in Blender)..


i.e. the object would have a whole lot of child objects/whatevers, and each of those would get a pointer to the object that is linked to their "input", one for each... something like each input being a struct with

Code: Select all

struct input {
    std::string name;     // only for attaching to the right input when parsing...
    object *inputObject;  // where to get the value from
    value *cached;        // the value, filled in on first use
};

which is used by the getInput method of each object:

value *object::getInput(const std::string &name) {
    input &in = getInputStruct(name);
    if (in.cached != NULL)
        return in.cached;                         // already fetched once
    in.cached = in.inputObject->getOutput(name);  // pull from upstream
    return in.cached;
}
and the output(s) being fetched from the getOutput("[name]") method of each object... Each of these objects could possibly have buffering optimisations built in..

Anyway, it occurs to me now that this is probably a very inefficient way of doing things, as it involves at least six of these objects for each pixel on the screen, each with many sub-objects... These would have to be created (and allocated) by the renderer at render time from the prototypes stored in the Blender DNA thingy... I also believe that these would have to be at a relatively low level....
Well, what you can store in a tree structure you can reduce to source code to compile.. If you could represent each module in your structure as an assembly-coded function, you could generate executables from the data structure.. But I wouldn't think about this as something you would do now; think about it in steps.. This is how you might go about it:

1. come up with design for the data structures and how they connect and are evaluated..

2. write functions to convert this data structure to XML and read it back from XML..

3. write a shader function that evaluates simple data structures created from XML data.

4. elaborate on the shader function to allow the evaluation of all data structures generated from XML..

5. look in blender to determine how the shader system works, determine how it can be decoupled (separated) from blender, and work out how to substitute your shader in for that..

6. work your shader system into blender, starting with simple tests, then working up to more complex ones..

Also, as you are doing that - once step 1 above is completed and while you are doing step 2 - come up with a crafter interface to generate the XML structures that your shader system will read in and evaluate.. To see if the shader generates valid output, you could probably have the shader dump images out in PNM format or some other simple image format, so you can see the results of the shader.. Or, if you have a program that generates images on the screen, you could use the shader to generate images there..
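For step 2, a minimal sketch of dumping a module description to XML with plain iostreams (the element names are invented; reading it back is the mirror image, ideally with a real XML parser):

Code: Select all

#include <iostream>
#include <string>

// One module serialized as one XML element.
struct module_desc {
    std::string cls;   // e.g. "Mult"
    std::string name;  // instance name
};

void write_module(std::ostream &out, const module_desc &m) {
    out << "  <module class=\"" << m.cls
        << "\" name=\"" << m.name << "\"/>\n";
}

int main() {
    module_desc m = {"Mult", "mult1"};
    std::cout << "<shader>\n";
    write_module(std::cout, m);
    std::cout << "</shader>\n";
    return 0;
}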

Eventually, as you work your plugin into blender, you can bypass the conversion to XML and have crafter work on the data structures directly.. There is no reason you have to use XML in the process, but it makes debugging easier.. Otherwise you will have problems in memory, and core dumps are not easy to debug.. The other advantage to using XML in the process is that it opens the door for other developers who might want to develop interfaces to your shader plugin.. It also gives you a way to store your shaders apart from blender's blend format.. It's also possible that someone could write a program that compiles your shaders into executables that can be used in blender by your plugin. That's where you would realize a great boost in performance, as the plugin would reduce your object framework to data structures and function calls.

However, another way is to have crafter generate C code and compile the result to a DLL or linkable executable that can be loaded into blender and used by a shader API in blender..
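On Unix, that loading step might look like the following - the shade() entry point and its signature are assumptions about what the shader API would have to pin down in advance:

Code: Select all

#include <cstdio>
#include <dlfcn.h>

// Signature that the shader API would have to agree on beforehand.
typedef void (*shade_fn)(const float *in, float *out);

int main() {
    void *handle = dlopen("./myshader.so", RTLD_NOW);  // compiled shader
    if (!handle) {
        std::fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    shade_fn shade = (shade_fn)dlsym(handle, "shade");  // find entry point
    if (shade) {
        float in[3] = {0.2f, 0.4f, 0.6f}, out[3];
        shade(in, out);                                 // run the shader
    }
    dlclose(handle);
    return 0;
}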

But a big problem in either case is coming up with the API..

How you might do this is by finding all the code in blender that has to do with shading (which I wouldn't know about - you have to know about shaders to recognize this code) and separating this code from the code that uses it.. Determine how to unify the interface so that the functions that are called don't access global variables (this is called decoupling).. Then you take all the functions in blender that call the shader code and have them call external functions defined in a library.. The library now contains the actual code that was in blender.. Now you have an API. That means you can have any other library interface with the code in blender, in place of the original shader code that blender used.. You can even expand blender's functionality by making other functions in blender accessible via that interface, by allowing the library of functions to call back into blender and reference data in blender..

Test some crafter shaders out on this and see if they render.. Then elaborate on the set of modules in Crafter and do more tests..

It might even be better to do some of your first tests by hacking shader code in blender, to determine how to design the API.. But I feel this is a task reserved for coders more experienced than I.. Ton may be able to help out in supervising such a project, but I'm not sure he would consider it worthwhile, considering all of his other tasks..

How problems with the shader system have affected me: when I want to layer materials, it's not possible to do that - only with textures.. And blender really needs to be able to layer materials, as well as perform more complex shading operations, which rely on having more access to blender's scenes and animation system.. It could be the case that blender's shading system is so tightly coupled with blender that it's tough to decouple without hampering development. There might be something in how blender interfaces with yafray that could help determine how to interface blender with an external shader system... I think a shader in blender could serve to make textures first, later to supplement entire materials, and then possibly to create shaders that perform part of the rendering calculations, to allow for hypertextures and volumetric effects.
I would like to note now that this is where my lack of experience and minor knowledge of alternatives to a hard-core OO system kick in.... So, I will be disappearing for a while to look at other open-source renderers and see how they parse RenderMan/X3D/yafray XML... Also, this will give me more scope to understand the advantages of different systems... I'll post here if I find any good analyses, and add my findings to the wiki..

G

P.S. What would be the impact on speed if optimised Python was built from the XML? That would at least save me from going insane trying to understand how to fit a non-static language into a static binary...
Well, there would still have to be an API made from Python to blender's shading system.. I mean, there would have to be a decoupling of blender's shading design into an API that allows Python to substitute for the calls blender makes to its internal shading functions..

I would not know what these are.. The best thing to do in such a case is tweak the code, see what things do, and determine what needs to be decoupled.. But what gets decoupled relies on what you know about shaders.. And to know about shaders, you learn about RenderMan and these other shader systems: obtaining source code to shaders, seeing what is in a shader, and working out how things can be separated in such a way that all or most shaders can be constructed from the design.. Not knowing enough about what makes a shader, I can't say much.. But I imagine it would involve finding things like input parameters - surface normals, access to the surface color, access to light data, determining the length of light rays to determine the lightness of the surface-color computation, etc.. These are things I thought up just from imagination, but you probably know better.. I do know basic shaders don't do ray-tracing.. But obviously blender's shader system could be made more dynamic..

bfvietnam
Posts: 0
Joined: Wed Apr 21, 2004 8:54 pm

Post by bfvietnam »

I may provide further help, but I'm going to have to refrain, because this is eating into my schedule a lot.. And I've got a lot of work to do.. I had originally thought, for some reason, that you were the creator of crafter, and now I know you aren't, so I'm not sure I want to teach you everything I know, or everything I'm not certain of.. I thought I was somehow influencing Wybren's ideas; I guess I should have done my homework before I even contributed, but I don't regret it - you have been someone interesting to bounce ideas off of..

But what kept me going was that I figured the discussion would serve as a model for designing this shader system.. All I really know is some stuff about C and C++ and a lot about data-structure organization, but I don't know a thing about shader design, except concepts I picked up while taking a computer graphics course in college.. If I had all the time in the world to do this, I would probably start by learning how to compile blender, get into the shader source, and tweak it to see what it does.. Then try to decouple it from blender, or get a setup where I could tweak code and see the results of my tweaks without getting into blender.. Then decouple it, come up with a design like the one I talked about, rewrite blender's shader system to use this model of evaluation, and then create an API in blender that you can interface your code with, and interface blender's original shader code with.. Then try to get the other blender developers to adopt this model for the current version of blender..

I think the best thing to do after that would be to first offer this shader system as an option alongside blender's own.. Then, as it becomes popular, have it substitute for blender's shader system, and have blender's shader system use the new API. There will need to be a way to select between blender material shaders and crafter material shaders..

However, as a result of really looking at blender's source code, it could be that all that is needed is to modify blender's source code so that materials can use other materials as sources for color information - like how textures are currently used.. Or a way to arrange blender such that the specular color and texturing have their own texturing/color system, and so on.. It's a matter of trying to take what exists and make it more flexible and easy to rearrange.. I imagine blender's current material/texture (or shader) system is just a pipeline of algorithms that prepare data for the renderer and color the results of the render to determine the color of surfaces.. There is a pattern to this - how does it work, and can it be made better?


For instance: how could this be arranged so that shaders can do more in blender's renderings, and even influence the way the models are rendered by blender? How could shaders manipulate surface information when the render system tessellates the models to polygons? (The way blender's displacement mapping works now, the vertices in the model are pushed in and out along surface normals at render time - how could a shader tessellate these surfaces and get more detail in there without using a lot of memory and time?) Or how could shaders generate hypertextures for object volumes, and volumetric clouds from particle systems?

I think that as you or others decouple blender's shader system and work out how the render system works, it will become easier to integrate other shader systems and render-time plugins into blender, expanding blender's capabilities..

simonharvey
Posts: 0
Joined: Tue May 18, 2004 8:11 am

Post by simonharvey »

I don't mean to throw in an extra opinion and confuse things, but anyway, here it goes:

If you wanted to implement a shading language in Blender, it would have to be either RSL or the OpenGL Shading Language. Ton wanted to implement shaders in Python, which could work; however, I don't think that Python is really made for this purpose, since having to run a script hundreds of thousands of times per second would probably make a Blender coder nauseous.

The best thing about RSL is that it is a small language with only vectors, colo(u)rs and floats, and shaders could easily be ported from a Blender renderer to any RenderMan-compatible renderer. Also, the bison and flex files can be thieved from PIXIE, meaning that the only things you would really have to do are modify the runtime and change the renderer API.

The OpenGL Shading Language is based on RenderMan, C and C++, so it has a similar syntax, and perhaps you could use it with the next generation of OpenGL 2.0-compatible cards to get hardware acceleration for rendering. 3Dlabs have also made a GLSL compiler that compiles GLSL down to an intermediate format.

I have a couple of demo runtimes at home in various states of completion; however, with programmable shaders you always take a speed hit. Perhaps the only way around this is for a programmable shader to shade samples in parallel (which would be possible in Blender's scanline renderer), but it would take up a lot more memory, virtually nuking Blender's "lean scanline renderer" feature.

Having a runtime optimised for a compiled bytecode (think Java or C#) runs faster, and it also means that you have a virtual machine to put a shading-graph subsystem on, since it is a whole lot easier to go from a graphical graph in memory to bytecode than from source code to bytecode.
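To illustrate that point with a toy example: on a made-up three-opcode stack machine, "compiling" a shading graph is just a post-order walk that emits one instruction per node:

Code: Select all

#include <cstdio>
#include <vector>

enum Op { PUSH, ADD, MUL };             // a made-up, minimal instruction set
struct Instr { Op op; float arg; };

// Evaluate a bytecode program on a small operand stack.
float run(const std::vector<Instr> &prog) {
    std::vector<float> stack;
    for (const Instr &i : prog) {
        switch (i.op) {
        case PUSH: stack.push_back(i.arg); break;
        case ADD:  { float b = stack.back(); stack.pop_back();
                     stack.back() += b; break; }
        case MUL:  { float b = stack.back(); stack.pop_back();
                     stack.back() *= b; break; }
        }
    }
    return stack.back();
}

int main() {
    // The graph (0.5 * 0.8) + 0.1, emitted by a post-order walk:
    std::vector<Instr> prog = {{PUSH, 0.5f}, {PUSH, 0.8f}, {MUL, 0},
                               {PUSH, 0.1f}, {ADD, 0}};
    std::printf("%g\n", run(prog));     // prints 0.5
    return 0;
}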

Kind Regards
Simon Harvey

bobthevirus
Posts: 0
Joined: Wed Jun 25, 2003 7:11 am

Post by bobthevirus »

Thanks, Simon, for a new view of things (and info on what Ton wanted)... I have been looking through the relevant sections of the code, and am looking at modularizing it (while keeping it at the same speed as it is now) with calls for each of the different parts. (I probably won't be successful, but some work has already been done on it with the addition of the newer shaders.)

You wouldn't happen to know which GPLed renderer has the cleanest (most modular) implementation of the RenderMan shading language, would you? I'm still not convinced that GLSL is powerful enough.
[edit]PIXIE - man, how could I be so blind... Is this one of the good ones?[/edit]

However... after maybe trying to tidy up the rendering code, I haven't had enough experience to even begin to look at a properly compilable language like RenderMan (it's way over my head)... So, my mission till summer is to try to create some patches, which hopefully people will like the look of, that provide calls from the renderer to the shaders and back again without slowing anything down... If this is done right, then a Python shader would be relatively easy to implement... And the more complicated things like RenderMan I can leave up to someone else (-:

Are there any design documents for this, or for Blender 3, or anything like that? My coding style is still pretty basic, and I will probably miss a whole lot of the [easier/better] ways of doing things.

simonharvey
Posts: 0
Joined: Tue May 18, 2004 8:11 am

Post by simonharvey »

bobthevirus wrote:Thanks Simon for... info on what ton wanted
The last email I got back from Ton was several months ago, and at that point he had the goal of making Blender's renderers "shader aware", which I guess means it would be easy to bolt a Python/RSL or GLSL shading subsystem in there.
bobthevirus wrote:You wouldn't happen to know which GPLed renderer has the cleanest (most modular) implementation of the RenderMan shading language, would you? I'm still not convinced that GLSL is powerful enough.
From the code that I have gone through, Aqsis seems to have the upper hand when it comes to a clean runtime and compiler; however, it is written in C++. You may be able to write an interface between it and Blender's C code, though.

Just a note on GLSL: in terms of raw power, GLSL wins, because it has support for structs, etc., that RSL doesn't have. However, what makes a shading language good isn't really its syntax (since it is usually a take-off of C) but the API functions that the renderer opens up to the shading subsystem. In this case RSL is leaps and bounds better, because it has such an extensive list of APIs, covering everything from splines to transformations, ray-tracing, etc...
Since GLSL is for realtime applications, it can't have these features; however, if you were to write a suitable runtime, I am sure you could expose a fair amount of functionality to GLSL-compiled bytecode.
bobthevirus wrote: If this is done right, then a Python shader would be relatively easy to implement... And the more complicated things like RenderMan I can leave up to someone else (-:
This sounds like the right way to go... (Ton seemed attracted to the Python way of doing things, and to giving people a choice between slow, holistic, easy-to-learn Python and hard-to-learn, fast, strongly-typed, compiled RSL.)
bobthevirus wrote:Are there any design documents for this, or for Blender 3, or anything like that? My coding style is still pretty basic, and I will probably miss a whole lot of the [easier/better] ways of doing things.
I am not sure where any documentation is regarding Blender 3; I am just concentrating on improving 2.* and hoping that nothing changes so much that it becomes impossible to apply my features :cry:

Kind Regards
Simon Harvey

lusque
Posts: 11
Joined: Wed Oct 16, 2002 9:53 am

Post by lusque »

Hi fellow blenderheads, compliments on the wonderful discussion you're having here - I'm learning a lot from you! :D

I don't want to confuse things either, but I just found something on Slashdot that I think you will find interesting: the Sh shading language.

PS: sorry for my bad English
