Jellybean wrote:Okay, I'm trying to wrap my head around this 3D texture idea. I'm going to make a play off of the armed guard example you gave, as I think that is what really made it click what you are picturing. This will be a sort of different implementation of what you are referring to, to get an idea of what the end result will amount to. (I know this is different, but I'm trying to approach your idea from a different angle to see if I can intersect your train of thought.)
So, you have a basic polygon model of an armed guard, and you want to give him a nice steel arm, rivets and all. So, you create a new model of a plate of steel, generously applying the rivets, and some rust (he really should take better care of his arm). Now you need to wrap the steel plate around his arm. Rather than modify the steel plate, you make an actor out of it. You give the steel plate a network of bones that correspond to the polygons that make up the guard's arm. Then you give the steel plate a pose, aligning the bones with the vertices of the guard's arm, adding constraints to lock the bones to those vertices so when the guard moves his arm, the steel plate follows. What you have now is a steel plate (which is still a flat model) rolled into an arm using the character animation system.
Does this get close to what you're thinking of?
(more on textures, undo, displacement maps and more to come later.)
Yes, you have an idea about object relationships, but I was also talking about the software relationship between the data that represents the objects. A flexible visible object hierarchy depends a lot on a true internal object hierarchy. In Blender the hierarchies can fail if functions were not specifically designed to handle the relationships of the bones controlling the steel plate in reference to the arm. Since Blender is written in C, such scenarios may only apply to certain objects used in a certain way, while other concepts that should be similar remain unrelated.
For example, I realized the other day that curves cannot be animated with relative vertex keys, despite the fact that all these objects use vertices and surface curves can be animated with relative vertex keys. If all of Blender had a concept of a vertex point cloud and performed transformations on point clouds instead of geometries, then operations like relative vertex keying could be applied to any geometry that uses the same kinds of vertices. This kind of relationship and functionality is possible in an object-oriented language, but it is not easily done in C unless you planned for it well enough ahead of time. However, it might be possible to refactor Blender's C code as C++ and add such relationships, so that functionality that is intuitive (like vertex keying) is inherited by all objects with vertices (which is pretty much everything).
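To make that concrete, here is a rough sketch of the point cloud idea (the class names are mine, not anything in Blender's code): the relative vertex key operation is written once against the point cloud, and every geometry type that stores its shape as vertices simply inherits it.
Code:
// Hypothetical sketch, not Blender's actual classes.
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

// Anything made of vertices is, at bottom, a point cloud.
class PointCloud {
public:
    std::vector<Vertex> verts;

    // Relative vertex key: blend the current shape toward a stored key shape.
    // Written once against the point cloud, it works for every subclass.
    void applyVertexKey(const std::vector<Vertex> &key, float weight) {
        for (std::size_t i = 0; i < verts.size() && i < key.size(); ++i) {
            verts[i].x += (key[i].x - verts[i].x) * weight;
            verts[i].y += (key[i].y - verts[i].y) * weight;
            verts[i].z += (key[i].z - verts[i].z) * weight;
        }
    }
};

// Meshes, curves and surfaces would all inherit vertex keying for free.
class Mesh    : public PointCloud { /* faces, edges ... */ };
class Curve   : public PointCloud { /* handles, knots ... */ };
class Surface : public PointCloud { /* control grid ... */ };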
This is one issue that needs to be cleared up before anything really significant is done to Blender. This is what I suspect some of the coders Ton hired were trying to do with the code. The structure of the code, and the fact that it was designed in C, was causing trouble in the area of refactoring and making similar features into the same features. Part of it, I imagine, was code documentation. But C was never designed to work as an object-oriented language; it was designed to work as a modular language. You can do object-oriented programming in C, but it involves trying to manage data structures with void pointers, and when you manage data structures with void pointers there is no guaranteeing the integrity of your data structures, unless you are a really clever coder and are able to keep your p's and q's straight.
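For example (a generic sketch, nothing taken from Blender's actual structures), compare a void-pointer "container" with a typed one:
Code:
// C style: a generic list node holds a void pointer; the compiler cannot tell
// a mesh from a curve, so the integrity of the structure is up to the coder.
struct ListNode {
    void     *data;   // could point to a mesh, a curve, or garbage
    ListNode *next;
};

// C++ style: the container is typed, so putting the wrong object in it
// is a compile-time error instead of a crash later.
template <typename T>
struct TypedListNode {
    T              data;
    TypedListNode *next;
};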
-------------------------
The Undo Concept..
Have you played with Maya? It's based on the same concepts. In Maya, as you model something, as you do anything, there is this construction history. While creating an object you have selected points, performed an operation, selected points, performed another operation, and behind the scenes, when you select points on the object, you are indirectly specifying the data a function will act on (e.g. faces, vertices and edges are data types that can be affected by a function); what you do with this data is specified by the operation (the function) and what you want to do (the function parameters).
Traditionally in C, the operations you perform are separate from the data you are performing them on. For every kind of data there is a set of functions, and for everything you want to do with a function on the data you need a set of parameters, and to make these operations behave the same independent of the geometry (whether you use meshes, surfaces, etc.) you would need at least ten functions for every data type, and all these functions would require the same kinds of parameters (interfaces to what you want to do). The problem is that in C (which Blender is written in), unless you are clever, you will need to replicate code between the functions when you add functionality to your application, because in order to maintain the intuitive understanding of similar objects you have to constantly look for these relationships and code specifically for every case that may help make the functional interface intuitive. In an object-oriented language this comes for free, and if you are careful with your structures you can avoid having to know too much about your objects. It's finish and forget.
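Roughly what I mean, as a toy sketch with invented names: in C you end up with one extrude function per geometry type, while with objects one method signature covers them all and each geometry fills in its own behaviour.
Code:
// Toy sketch with invented names, not Blender's API.
class Geometry {
public:
    virtual void extrude(float distance) = 0;   // one interface for every geometry type
    virtual ~Geometry() {}
};

class Mesh : public Geometry {
public:
    void extrude(float distance) override { /* mesh-specific extrude */ }
};

class Surface : public Geometry {
public:
    void extrude(float distance) override { /* surface-specific extrude */ }
};

// Code that operates on a Geometry never needs to know which kind it was given.
void extrudeSelection(Geometry &g) { g.extrude(1.0f); }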
The problem with Blender is that if you want to add functionality to all the primitives, you have to add it to each one by taking all similar functions and all similar data types, translating what this new operation means for every primitive, and then finding all referencing functions and changing them to reflect the change. I'm saying this could be minimized if Blender used a more object-oriented relationship between its features. I think Ton would agree, because every time Ton added a feature to Blender he would have to go back and reinterpret the idiosyncrasies of every function and data type of all the "similar" geometries, in the interest of producing consistent behaviour.
The problem is that the more features you have to implement, the more stuff you have to remember, the more bugs can occur and the longer the development time, and it only gets worse as you add more features. With objects, however, you can add features quickly, because all considerations about similar functionality can be managed gradually over time and refactored into unifying object types that behave the same, as they should.
In an object-oriented world there is no need for raw globals. Globals can still be used, but they are shared among objects by placing them inside an object called a monitor, controlled such that no two objects can modify the same global at once. By putting global variables in an object you can also monitor access to them and determine misuse and overuse of globals by which objects call the global variable monitor. There are little tricks like this with objects that make their use well worth the investment of time, and it can be done without sacrificing the utility of a global variable.
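Here is a rough sketch of that monitor idea (the class and variable names are mine): the global lives inside one object, every read or write goes through its methods, and the methods can both serialize access and log which caller asked.
Code:
// Hypothetical sketch of a global-variable monitor, not an existing class.
#include <cstdio>
#include <mutex>
#include <string>

class GlobalMonitor {
    int frame_number = 0;   // the former global now lives inside the monitor
    std::mutex lock;        // no two objects can modify it at the same time

public:
    int getFrame(const std::string &caller) {
        std::lock_guard<std::mutex> guard(lock);
        std::printf("%s read frame_number\n", caller.c_str());   // spot overuse
        return frame_number;
    }

    void setFrame(int value, const std::string &caller) {
        std::lock_guard<std::mutex> guard(lock);
        std::printf("%s wrote frame_number\n", caller.c_str());  // spot misuse
        frame_number = value;
    }
};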
Okay, that aside. My idea for undo is best described by example, so let's take a mesh.
You can think of mesh modelling as a series of operations performed on some basic primitive mesh, in order over time: you start with a square, you extrude the square to a cube, you extrude it, and extrude it, and so on until you have a character. Each operation is dependent on the one before, using the edges and faces produced by the previous one. With a normal undo you might copy the previous mesh and modify that, but this wastes a lot of space and eats up memory when working on large meshes.
You can think of this as a series of operations on data (the C-style thought process), or as method calls that result in new 3D objects, or as methods that change the existing object into something different. There is no reason to copy the object every time; why not just store the differences between object modifications? Then make it such that these differences are OOP objects themselves, which have methods that reach into the past and the future of the modification of the 3D geometry, reusing the construction history so that the whole process of creating the object can be redone without redoing everything by hand. It's also possible for the objects to communicate with each other and make changes that make sense: if you change the size of the square that you extruded, all geometry that was created based on that quad will have its size changed in proportion.
Okay, what is with all this talk of objects? What are they? The objects are programming concepts (referred to as OOP constructs, or object-oriented programming constructs). Unlike the objects you model, these objects are concepts about how to arrange code and data. Normally in C your code works on your data, but all your code (functions, control structures like "if" conditionals) is separate from your data. Normally you use libraries to act on the data, and the data is maintained by a disjoint set of functions that take the data in through a parameter and spit it out either through another function parameter or via a return value.
Objects, on the other hand, organize the code with the data and make the data modifiable and accessible only by the code that has the most to do with it. This means that only that object needs to know a lot about the way its data is handled; no other object should ever have to know much more than the methods that access the data related to it. So you don't want to have to write 7 functions to read 7 different kinds of image formats into an image format you can use; it's easier and better to create an image object that has methods to read 7 different file formats, and then methods for copying areas of the image, drawing lines on the image, etc. Then you can write programs that use the image object, and you don't have to pass the image data into the functions that load the images, because you don't deal with the image data; the image object deals with the image data. You just issue commands on the image object. Also, by associating data with the code, you reduce the need to specify a lot of extra parameters referring to the data on which you need to operate. A function that draws a pixel on an image translates from:
image_type_to_return drawPixel(image_in, x,y, color);
to
image.drawPixel(x,y);
and
image.changePenColor(color);
There is no need to pass in an image, because the image is explicitly referred to when calling the method, and the image is changed only by that method. So that drops two parameters you have to specify.
It also allows you to have multiple images, or image arrays, where you can write:
for (int x = 1; x < 10; x++) {
    images[x].load(filename + number_to_string(x));
}
Instead of
image *images[10], *p;
char name[64];

for (int x = 1; x < 10; x++) {
    images[x] = malloc(sizeof(image_type_jpeg));
    sprintf(name, "%s%d", filename, x);
    p = load_from_jpeg(images[x], name);
    free(images[x]);
    images[x] = p;
}
// Yuck!!!!
So I think you can agree that things get hard to read really fast when working in C. And not only that: if you want to define something like a vector-based image (not using pixels), you can't inherit from the existing pixel-based image. You will have to write functions that convert vector image data structures to pixel image data structures. Whereas in C++ you could define a vector image that is both a pixel image and a vector image, and functions that work on the pixel image should work on the vector image.
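A minimal sketch of that last point, with made-up classes: the vector image "is a" pixel image, so anything written against the pixel image interface accepts it unchanged.
Code:
// Hypothetical image classes, not any real library's API.
class PixelImage {
public:
    virtual void drawPixel(int x, int y) { /* set a pixel in the raster */ }
    virtual ~PixelImage() {}
};

class VectorImage : public PixelImage {
public:
    // Records the stroke as vector data as well, then rasterizes it,
    // so the inherited pixel interface still behaves correctly.
    void drawPixel(int x, int y) override { /* record stroke + rasterize */ }
};

// Any function written against PixelImage accepts a VectorImage unchanged.
void stamp(PixelImage &img) { img.drawPixel(0, 0); }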
----
An object is described by a class, which defines the behaviour of the object and of all its derived (or inherited) classes of objects. The form of a class in an object-oriented language looks like the following...
Code:
#include <vector>
#include <array>

// MaterialType and matrix are assumed to be defined elsewhere.

class mesh {
public:
    mesh *encapsulated_mesh; // references the mesh before the last operation,
                             // or null if this is the beginning mesh

    // Which geometry from the previous mesh no longer exists for this mesh to
    // operate on. Say I extrude a square from a cube: one face is removed, so
    // the resulting box has 10 quads instead of 11, and the number of that quad
    // in the previous mesh can't be referenced in meshes derived from it.
    std::vector<int> faces_removed, edges_removed, vertices_removed;

    std::vector<std::array<float, 3>> vertices; // list of vertices with 3D coordinates
    std::vector<std::array<int, 2>>   edges;    // edges defined as pairs of vertex indices
    std::vector<std::array<int, 4>>   faces;    // faces are quads or triangles defined by connecting edges
    std::vector<MaterialType>         mat;      // material for each numbered face

    mesh(mesh *ref) {   // mesh constructor (used to set up the mesh)
        reference(ref); // obtains mesh data from somewhere else and uses it to compute a
                        // new result based on whatever method is performed next on the mesh
    }

    mesh *extrude(const std::vector<int> &specific_faces, const matrix &transform) {
        // Extrude a set of faces and vertices to a target that is the starting
        // faces transformed by some matrix (rotated, scaled, translated).
        // Geometry removed as a result of this extrude is discounted; see the
        // comment on "faces_removed".
        return new mesh(this); // a new mesh object that references the result of
                               // this extrude on the current mesh
    }

    void reference(mesh *ref) {
        encapsulated_mesh = ref;
        // This is used to determine selected vertices and faces; any changes on
        // the previous mesh are then recognized as modifications of the mesh
        // data in the previous mesh.
    }
};
I coded this one out of my head and from what I know about mesh objects intuitively.
This class is an implementation of a mesh. It has a constructor (called "mesh") which initializes the mesh object and does the usual preparations to get it ready for work. It has a few other methods, "reference" and "extrude". Extrude extrudes any number of mesh quads as specified by some transformation matrix, then generates a new mesh object with a reference to itself, so that the resulting object can call it or traverse back to it.
This class is, in a nutshell, how I would implement undo and redo. The operations performed on the mesh result in new derivative meshes that undefine the geometry that was replaced as a result of the operation (e.g. extrude). I say undefine instead of delete because we plan to be able to return to the geometry before it was modified, so geometry need not be deleted; it only needs to be flagged as non-existent. Then we base the new geometry on the geometry that can still be used (is not undefined). The purpose of this kind of relationship between the object type, its operations, the results of those operations and so forth is that it produces a kind of construction history, where previous versions of the mesh can be changed and the changes can trickle through the design as a result. In some cases an operation performed in the past may cause the future that has been defined to no longer exist, or to be much different; in such cases the mesh could be substituted, but there is no need to throw away the construction history, as one could always selectively prune the futures from it.
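To show how the class sketched above might be used (same hypothetical types; "some_matrix" stands in for a transform defined elsewhere):
Code:
// Usage sketch for the hypothetical mesh class above.
mesh *base  = new mesh(nullptr);                  // the starting primitive
mesh *step1 = base->extrude({0}, some_matrix);    // extrude face 0: one history step
mesh *step2 = step1->extrude({3}, some_matrix);   // extrude face 3: another step

// "Undo" is just working with the previous step again; nothing was deleted,
// so step2 is still there for redo or for replaying the construction history.
mesh *current = step2->encapsulated_mesh;         // back to step1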
Now this kind of undo and redo and spawning of alternate branches (parallel universes?) relies on some careful thought about object modelling and the use of programmatic object classes (C++, Java, Smalltalk, etc.). Blender currently is a mix of C and C++, so it's questionable how easily it can be maintained and improved, but it won't be very useful to implement an undo/redo operation like this until Blender is object oriented.