
Regular UV is Passé

JA-forreal

Posted: Mon Apr 07, 2003 12:12 am
Joined: 22 Mar 2003
Posts: 187
Does UV mapping have to be a chore? Will Blender developers ever see the need to advance Blender's UV mapping tools beyond their current excellently simple state to something ultimately more wonderful? We went from a great IPO animation system to an amazing NLA animation system. Blender's UV mapping system could likewise be updated so that seamless, complex UV mapping offered tools to remove distortion, local scaling of polys to expose hidden faces, built-in UV morph mapping, more accurate unwrapping tools, or something else beyond the present UV toolset. If this happened we would have better options for UV mapping in Blender. Someone also mentioned that face selection is a must in Blender for more advanced UV mapping.

Maybe I'm the only one who wants this kind of feature update, but I thought it was worth mentioning here anyway.


Money_YaY!

Posted: Mon Apr 07, 2003 12:57 am
Joined: 23 Oct 2002
Posts: 870
Hell Yeah!

But I guess you just have to bribe them with something to ever get what you want.

^v^ I have comic books. *hint *hint


kflich

Posted: Mon Apr 07, 2003 5:56 am
Joined: 16 Oct 2002
Posts: 31
I join the call to improve UV mapping! I think a good start would be to see if sturbi's capsule unwrapper could be incorporated into Blender (is it possible?)


thorax

Posted: Mon Apr 07, 2003 7:48 am
Joined: 27 Oct 2002
Posts: 321
Either you code it yourself, which is the purest form of expression of an idea..

Or you describe the concept with some documentation..


(I'm currently working on something in PHP which is a combination of the two: it's an approach to documentation that will let everyone help out without sucking up too much personal investment, other than mine; I've been working on it for a couple of weeks.) I suggest doing whatever you can..

I like the UV system aside from the lack of unwrapping polys. It is very basic, but one thing I really like about it, and which I think could be improved further, is its support for Wacom tablets..

I assigned the buttons on my Graphire2 like this:
Eraser = erase
Top of the side button = right-click
Bottom of the side button = middle-click
Tip = left-click

So I can use Ctrl-middle-click, Shift-middle-click and middle-click to pan and rotate the object, the tip of the pen to select buttons (like switching to the paint feature) and for painting.. Right-click can be used to extract the color of the surface.

In the museum shots from my website, I used these features alone to create much of the wall painting.. They also helped in determining the orientation of the image mapping..

http://www.bl3nder.com/images/NYCColumnRoom3.jpg

http://www.bl3nder.com/images/NYCColumnRoom2.jpg

http://www.bl3nder.com/images/NYCColumnRoom1.jpg

I also used this feature a little bit with this image.. I'll finish the UV mapping for the mask part, but the cartridges were the main feature.. Believe it or not, the cartridges didn't take much to map; it was more the perfection of the look that I worked on.. But Blender only really supports cylindrical and view-dependent mapping, with no unwrapping feature. I wonder what would be involved..

http://www.bl3nder.com/images/GasMaskXray.jpg

Using two Wacom tablets at the same time would be cool: one to rotate the object, another to spray-paint features onto it.. I save the UV map out, load it into GIMP and paint it there, then load it back in.. There could be a better interface with GIMP, say, so that one could paint on a Blender object in GIMP, or better yet, better painting features in Blender.. But it's suitable for free and open source software.. What we lost with the move to open source is the ability to pound on NaN to get features out; it's kinda left up to the users, and as someone said before: ahem... bribes.. Or best yet, if you really feel like you can do it, you do it.. I'm kinda waiting for a good compile for Windows that doesn't require VC++.. If you wait around long enough, someone will take the time to get over the hurdle and be eager (hopefully) to share their solution.


Money_YaY!

Posted: Mon Apr 07, 2003 1:06 pm
Joined: 23 Oct 2002
Posts: 870
Yes, but you see those are very simple flat surfaces that you painted on.

Have you seen ZBrush yet? Try their Texture Master. That is just what we need.

www.zbrush.com; a free demo is there.

I might post a movie for it.


JA-forreal

Posted: Mon Apr 07, 2003 5:55 pm
Joined: 22 Mar 2003
Posts: 187
Well, the only thing that I can do to help out, since I am not a coder of any kind, is to spread the Blender love. I also plan to put out a good Blender 3D character tutorial this year. Right now I'm totally hyped up over theeth's new TGA UV map output script tool. It's one of the best Blender UV tools ever. It's good to see that others want a more powerful UV mapping system in Blender.

I use 2D mapping tools such as Painter, which is similar to GIMP and Photoshop, for UV mapping. But I also use 3D painting tools like Painter 3D.
If you do a lot of UV mapping like I do, you need a 3D paint tool of some kind to ease your workflow. I would urge any 3D artist who has never used a 3D paint tool to download a demo version of one and try it out. You should think about adding a 3D painting tool as the next upgrade to your 3D graphics setup if you are really into UV mapping.

Or you could wait until Blender has built-in UV mapping tools like Maya's.
That would be cool, hey?


thorax

Posted: Wed Apr 09, 2003 7:39 am
Joined: 27 Oct 2002
Posts: 321
Money_YaY! wrote:
Yes, but you see those are very simple flat surfaces that you painted on.

Have you seen ZBrush yet? Try their Texture Master. That is just what we need.

www.zbrush.com; a free demo is there.

I might post a movie for it.



I've used ZBrush before, and it didn't impress me.. It seemed to me all it was was a Z-buffer-based displacement-mapping paint system, but it only worked from a particular viewpoint; it was impossible to tell what perspective view I was painting from, though I could see how it was working.. It seemed to be doing displacement mapping along the normal of the surface, and the modelling primitive seemed to be polygons.. So this is like a high-level painting tool based on rather simple technology, or vice versa.. I would relegate it to something like wanting booleans to model with: it looks like what you really need until you consider how it works and realize it might not do what you think it does..

If you understand UV mapping, the concept is simple: you project a 2D image onto a 3D space.. You can't get around it unless you define another image space.. If you use dot sampling instead of pixels, why should your pixels be square anyway? You need multi-resolution texture mapping, and I don't mean uniformly mapped pixels; I mean that in some places on an object you need more detail than in others. That means the 2D space can't be mapped accurately onto the 3D space, because somewhere something will be warped.. So your image format needs to change pixel sizes dynamically as needed.. Pixel locations must be discrete locations, not colors in a matrix.. So far I don't know of any image format that is capable of this..

But I've determined this is what is really needed.. It's like the difference between using NURBS and subdivision surfaces.. With NURBS you can only have as much detail as you have isoparms; the more detail you want, the higher the resolution of the NURBS surface. With subdivision surfaces you can have arbitrary detail, because a subdivision surface is not uniform across the entire object; it is at best continuous across a few points, not N points.. Images need to be just as discontinuous in resolution.. Samplings need not be reduced to pixels.. They should remain samplings..


thorax

Posted: Wed Apr 09, 2003 8:02 am
Joined: 27 Oct 2002
Posts: 321
Think about quad-trees..

Get an unwrapping function and unwrap the surface into rigid polys (retaining the angles of the polygons).

Assign an image map to the unwrapped surface, and use some kind of threshold to determine how much resolution is needed for each polygon..


What I meant by this is not to tessellate the poly but to determine the size of the image in world space.. Usually mipmaps are used: every texture has a number of versions at various resolutions that increase as you get closer to the object, but that is a real-time image-mapping optimization rather than a method of assigning resolutions to polygons.. What I'm talking about, I guess, is just associating as much UV space as there is actual world space, so if the chest-plate for your alien armored guard takes 33 cm, the polygon is given that much coverage in UV space, and all other polys are proportional. I just realized that is a direct result of unwrapping the polys, as the UV mapping would be proportional to the poly unwrap. I didn't think about it too much, I admit; I was grasping for some kind of adaptive resolution for UV mapping and trying to describe what I saw in the ZBrush UV mapping tools.

But the fact that the pixels are square becomes a problem, because you can always get closer to the surface and eventually you will see pixelization, and it will appear square, not round.. You could optionally represent pixels as having zero substance; they would be replaced with vector equivalents (areas of red appear as red regions bounded by borders against another color, and the definition of how colors are aggregated into these vectors could be determined by other algorithms).

Note that this can't be so hard for some rendering algorithm. I believe if quad-trees were used it would take log(N) time to do a search for a color, as opposed to constant time if it were stored in an array (which is how image data is referenced); how is this less efficient than using a procedural texture? I don't know what the efficiency is of an algorithm that smooths pixel colors together in order to create contiguous areas of color, say like what laser printers use to create more detailed lines with an anti-aliasing effect. That method would be how you would adapt existing images to this vector-based texturing method (it could also be seen as a method of rendering an image in 3D space, as I think smoothing algorithms do this, but I want to extend it to support vector-based methods of defining color rather than pixels, which ultimately define blocks of color). Not to poke fun at Iceman's name, but one of my father's friends knows the guy who made RenderMan, and said he had talked about making a derivative work called Iceman that was going to be a resolution-independent rendering application (assuming that the ultimate result is not a pixel image but a vector-based one).
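
To make the quad-tree part concrete, here is a rough C++ sketch (just an illustration; the names and structure are my own invention): each node is either a solid patch of color or splits into four quadrants, and looking up the color at a (u,v) point walks down the tree, so the cost grows with the depth of the tree rather than with a fixed pixel count.

Code:

#include <array>
#include <memory>

struct Color { float r, g, b; };

// One node of an adaptive-resolution image: either a solid patch of color,
// or four children covering the quadrants of this node's square.
struct QuadNode {
    Color color{1.f, 1.f, 1.f};
    std::array<std::unique_ptr<QuadNode>, 4> child;  // all empty => leaf

    bool isLeaf() const { return child[0] == nullptr; }
};

// Look up the color at (u,v) in [0,1)x[0,1): descend into the quadrant that
// contains the point until a leaf is reached.  The cost is bounded by the
// depth of the tree, roughly log(N) in the number of stored patches.
Color sample(const QuadNode &node, float u, float v) {
    if (node.isLeaf())
        return node.color;
    int qx = u < 0.5f ? 0 : 1;
    int qy = v < 0.5f ? 0 : 1;
    // remap (u,v) into the child's own [0,1) square and recurse
    return sample(*node.child[qy * 2 + qx], u * 2.f - qx, v * 2.f - qy);
}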


Make an image format that can dynamically change the resolution of quadrants as needed.. The quadrants could be dynamically sizeable partitions, or better yet arbitrarily spaced dot samples that are compressed and later mappable to pixels. The idea is to keep square pixels from getting involved, as square pixels produce warping artifacts which don't need to exist anyway.. We have floating point numbers and the ability to improvise on resolution because we have fast processors.. Is it possible to create arbitrarily resolute images?

Now design a special paint program (CineGimp?) that can work with these spot-sample images.. Imagine how a dye-sublimation printer creates images on paper, or how an airbrush works: it doesn't produce pixels, it produces random dots of arbitrary resolution.. This is how the samples (not pixels) are to be stored.. How do you erase these? Not by changing the colors of pixels but by removing the samples from an area..

The paint program would also need to dynamically render what is being painted..
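
A bare-bones sketch of what such a sample store might look like (all the names here are hypothetical): painting appends free-floating samples, and erasing removes every sample inside a radius instead of recoloring anything.

Code:

#include <algorithm>
#include <vector>

struct Sample { float u, v; float r, g, b; };

// A texture stored as a bag of free-floating color samples instead of a grid.
struct SampleCloud {
    std::vector<Sample> samples;

    // "Airbrush": drop a sample wherever the pen touches UV space.
    void paint(float u, float v, float r, float g, float b) {
        samples.push_back({u, v, r, g, b});
    }

    // Erase by removing the samples in an area, not by recoloring pixels.
    void erase(float u, float v, float radius) {
        const float r2 = radius * radius;
        samples.erase(std::remove_if(samples.begin(), samples.end(),
                          [&](const Sample &s) {
                              float du = s.u - u, dv = s.v - v;
                              return du * du + dv * dv < r2;
                          }),
                      samples.end());
    }
};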

The problem with ZBrush is that it will have warping at some resolution, because a 2D uniform matrix is being mapped to a non-uniform 3D surface. At some resolution the texture will appear blocky or warped.. If the image format is custom-sampled for the mesh, it will not warp unless the surface does; even then it can be re-sampled, and the samples in the image can have arbitrary detail at certain locations.. The location of samples is determined by some kind of quad-tree compression..

At the very least I think the image format and the method of unwrapping need to change.. Pixels, the way they are stored, make sense only on flat surfaces, not on rounded ones, which need arbitrary resolution rather than a matrix of uniform pixels..

It makes sense for games to use uniformly spaced pixels, but not for
animation work..

I wonder if OpenEXR is a step in this direction; I think I read something about not only the lightness of pixels being stored as floating point numbers but the location of the pixels as samples too.. There is no reason the pixels need to be square either; they can be round, star shaped, etc.. It's just a matter of convenience..

Last edited by thorax on Thu Apr 10, 2003 5:06 am; edited 1 time in total


Money_YaY!

Posted: Wed Apr 09, 2003 2:08 pm
Joined: 23 Oct 2002
Posts: 870
You are completely wrong about the paint thing.
I was talking about a script plug-in for it called Texture Master.

I am going to post a movie soon showing it.

Stay tuned. I am sure Eskil could do it.


Money_YaY!

Posted: Wed Apr 09, 2003 4:24 pm
Joined: 23 Oct 2002
Posts: 870
They are small, but the visual is there.
It is a 3D model: you paint on it, and when the drop command is given it drops the paint onto the model, and then you can rotate the model once again and paint on the next place that you choose.

I think this would be really easy to put into Blender.

Also, the texture can be placed like this too, in the other mp4.

Lastly, the edges have blending so as to get everything just right.

Have a look.

http://www.aprilcolo.com/playing/keywest/tan


Jellybean

Posted: Thu Apr 10, 2003 12:39 am
Joined: 17 Nov 2002
Posts: 20
thorax,
Something like vertex painting, but at a subsurface level?

If you took a model and divided each polygon into a number of subsurfaces (each vertex representing a color sample), at first each polygon would be divided so as to create as uniform a distribution of samples as possible, with the user choosing the density of samples. You could also change the sample density on a face-by-face level to add detail where you need it.

Painting could be done like the current vertex painter, or another way I can think of is to unwrap, or represent as a flat surface, only a small part at a time. You could use a selected face as a focus, and adjacent faces within a range would also show. You could move around the painting surface by selecting different faces on the model or one of the adjacent faces displayed on the unwrapped paint area. The further from the focus face, the more distortion there would be on the unwrap, so ideally you would want to keep the area being painted in or close to the focus.

I think the biggest problem with any alternative painted-texture method is the whole "special paint program" that would be needed to manipulate it. You lose the use of many great tools (paint programs et al.) from the texture generation process. Also, textures would become more model-dependent, making reuse between different models difficult. Some things to consider.

I think the assignment of arbitrarily positioned color samples would add a level of complexity to the renderer greater than the benefits it would provide (just my opinion). Maybe it could be something applied at the texture generation level. Color samples, brush strokes, really any operation could be stored as a series of actions, allowing the editing or deletion of any previous action without having to undo every action taken since. The sequence of actions could be saved, and the texture rendered to a bitmap or vector color map. This way the renderer doesn't have to understand and work to extract information from more complex texture types.
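
Something like this is what I have in mind, as a rough sketch only (the names are made up): each action is stored as replayable data, any one of them can be disabled or edited later, and the texture is simply re-rendered from whatever actions remain.

Code:

#include <cstddef>
#include <functional>
#include <vector>

struct Canvas { /* the bitmap or vector color map being built up */ };

// One recorded texturing action: a brush stroke, a fill, a color sample...
struct Action {
    bool enabled = true;                  // "deleting" just disables it
    std::function<void(Canvas &)> apply;  // how to replay the action
};

struct TextureHistory {
    std::vector<Action> actions;

    void record(std::function<void(Canvas &)> fn) {
        actions.push_back({true, std::move(fn)});
    }

    // Edit or delete any earlier action without undoing the ones after it.
    void remove(std::size_t index) { actions[index].enabled = false; }

    // Re-render the texture from scratch from whatever actions survive.
    void render(Canvas &out) const {
        for (const Action &a : actions)
            if (a.enabled)
                a.apply(out);
    }
};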

Thoughts?
(sorry for the spelinng, I'm outta time)
[edit: Okay, spelling fixed. Razz ]


thorax

Posted: Thu Apr 10, 2003 8:13 am
Joined: 27 Oct 2002
Posts: 321

I have some interesting ideas, I think, but determined after the fact that you had come to the same conclusion about images not needing to be pixel based.. I'm just a free-thought pack rat and hate to delete jabber.. Read at the bottom.. I figure the talk may be useful for someone else.. It's not that I value what I say all that much, but I have all these ideas bouncing around in my head like slam dancers, always coming up with something I never thought of before.. I wonder if my ideas are as bothersome to me as ideas are to others. I have a tendency to write them in a book and never read them again unless I have a mental block someday; otherwise I would be all over the place trying to implement everything I thought about.. Anyhow, skip to the bottom if you're in a hurry, unless your friends call you "whiskers" (see Will Ferrell's impersonation of Harry Caray that is bouncing around on Kazaa). I feel, in a time-inverted way, like in a parallel universe I was Harry Caray: if the UV map were made of vector coordinates, would you eat it? Boy, I would; in fact I would have seconds and follow it with a nice cool Budweiser.



Jellybean wrote:
thorax,
Something like vertex painting, but at a subsurface level?


Exactly, but in a 2D sense..

I touched on one method (see the italicized block in the message I originally posted; I decided just to annotate it rather than quote it) of coloring areas with vector imagery rather than pixel-based imagery. That is just one method of painting the UV space and referring to it in a resolution-independent way for the surface coloring, but it's not completely the idea I had communicated..

It's changed since I read your message, so I'm coming up with different ideas..

The idea is, instead of vertex coloring on a polygon surface, vertex coloring in UV space on a 2D mesh there, but not a uniform mesh.. Overlapping polys in 2D UV space don't have much meaning, so polys wouldn't have to be tessellated, unless it makes it easier for filling algorithms to have convex patches of color..

The method of coloring would have to place some importance on the need for freely movable splotches of color.. The color would be defined as discrete points in a 2D space, but not connected; the borders of color, and how it solidifies into a vector mapping, come from determining some kind of 2D shape with something like a metaball function (determining the location of outlines based on the attraction of a particular splotch)..

If you really need to think about it in terms of 3D space, the rendering of this UV color space is like image smoothing, or a 2D counterpart of metaballs, but where the function that determines how mass is contributed, or how it is shaped, is left open for the user to define (a shader?). The most basic form of this coloring method would be to just paint the discrete points themselves, without the metaball mass-contribution stuff.. This would be like airbrushing the surface, or using Blender particles to define surface coloring, then filtering that through some kind of shape filler that derives a relationship between the particles and produces a vector image.
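
For the metaball-like part, a rough sketch of what I mean (the falloff function here is just one arbitrary choice, not a proposal for the real thing): each splotch contributes a weight that falls off with distance, and the color at any UV point is the weighted blend of the splotches that reach it, so there is no fixed resolution anywhere.

Code:

#include <vector>

struct Splotch { float u, v; float r, g, b; float radius; };

// A smooth falloff: 1 at the splotch centre, 0 at its radius.  This is the
// 2D analogue of a metaball's field contribution (one possible choice).
static float falloff(float dist2, float radius) {
    float t = dist2 / (radius * radius);
    if (t >= 1.f) return 0.f;
    return (1.f - t) * (1.f - t);
}

// The color at (u,v) is the weighted blend of every splotch that reaches it,
// so the "image" has no fixed resolution anywhere.
void shade(const std::vector<Splotch> &splotches, float u, float v,
           float &r, float &g, float &b) {
    float wr = 0.f, wg = 0.f, wb = 0.f, wsum = 0.f;
    for (const Splotch &s : splotches) {
        float du = u - s.u, dv = v - s.v;
        float w = falloff(du * du + dv * dv, s.radius);
        wr += w * s.r; wg += w * s.g; wb += w * s.b; wsum += w;
    }
    if (wsum > 0.f) { r = wr / wsum; g = wg / wsum; b = wb / wsum; }
    else            { r = g = b = 1.f; }  // background where nothing reaches
}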

The UV mapping occurs as normal, but it's open for elaboration as well..

The filtering method would be like a sub-sample averaging algorithm made into a kind of color distribution algorithm, like Floyd-Steinberg dithering, to determine how much of the color in a quadrant contributes to the color of the pixel being rendered.

If you know how to oil paint, it is like putting a dab of paint on a canvas and using thinner to distribute the color, but in a uniform method defined by some artistic process (be it defined by a script, or discretely and instantiated), using shaped brushes if needed.. Note that you could use motion vectors from MPEG compression to push this kind of paint around, like how you might use a sander on a layered set of oil paints. What I'm saying is that this opens up a whole new avenue of expression that is combined artistic and algorithmic in nature.. I think the mistake is being satisfied with static, uniform, pixel-based imagery; it is really causing a problem in the way objects are being detailed, as arrays of numbers can become inefficient and ugly, so why use them, and what are the justifications for using them? This is what open source, or free thought, is about: questioning the paradigm..


As I mentioned in a previous message above (the italicized part), the ultimate result of a render doesn't have to be a pixel image; it can be a vector image (read about the RenderMan -> Iceman concept).



Jellybean wrote:

If you took a model and divided each polygon into a number of subsurfaces (each vertex representing a color sample), at first each polygon would be divided so as to create as uniform a distribution of samples as possible, with the user choosing the density of samples. You could also change the sample density on a face-by-face level to add detail where you need it.


This is the idea, but it relies too much on what exists; I tend to want to break the model a little bit.. The method of storage or representation in the 3D space need not constrain the method of coloring the object.. UV mapping associates a coordinate in UV space with a coordinate in 3D space.. A UV map is nothing but a flattened view (or a projection, at some angle of view) of the triangles of a 3D object in a 2D space, and it's like warping colored spandex onto the surface, but square pixels are painted on the spandex before it is warped, and this is what we recognize as blocky artifacts on objects. If we redefine the shape of the imagery in UV space, the blocky shapes assume the shape of the new imagery. If images are not used, but discrete points (which are not tied to the spandex-like polys) are defined, one can define an arbitrary amount of detail without producing blocky, distorted pixels on the shape.. The problem is that pixel-based images have become the medium we use to define the color of the surface, and images, albeit easy to use and transfer between apps, are imperfect the closer you get to them.. No matter how good your mapping algorithm, you will have blocky pixels at some resolution, unless your geometry is blocky, which is a rare case..

The imagery used to paint on the surface is okay in 2D space; it's just not okay that it is a uniform set of square-shaped pixels.. The imagery could be stored as point clouds painted from a Wacom tablet.. These point clouds could be used to derive specific shapes based on the closeness of points, or discretely, by defining polygonal areas of color.. It's much like working with Flash, by the way: it's like using vector imagery to define the color of a surface, where pixel-based imagery is a special case (box-shaped geometry instantiated to represent pixels). The method of mapping to the object from UV space could be changed as well, but it is suitable.

The reason imagery is block shaped is that arrays with uniform spacing are used; you decide whether a point is colored with something like:

x = x / xratio;
y = y / yratio;
if (pixel[abs(x)][abs(y)] == 1) { color = red; } else { color = white; }

What I'm talking about is something like a procedural:

color = f(x,y)

But the procedural might be completed by other functions, like a lambda function that uses functions to derive another; this would be the mechanism for defining how UV space is painted, but it may also determine the shape of the painting and even how it is transformed into 3D space.. The x,y values are determined by the UV poly mapping in UV space, the projection of 3D onto 2D.. There is nothing limiting this to 2D-space procedural coloring; it could also be a 3D space (called solid coloring) by including another coordinate "w", which would be u,v,w mapping.. I guess what I'm describing is a shader, but one that is influenced by point data..

color = f(x,y,g(),h())

where g() is a query function on a tree of points
and h() is a function that determines the color of any point given x,y values..

But not precisely this, just something with this level of dynamic description..
I think Python would be capable..
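
Roughly, the shape of it in code would be something like this (a sketch only; the concrete signatures are invented for illustration, and I've written it in C++ rather than Python just to have types to point at):

Code:

#include <functional>

struct Color { float r, g, b; };

// g(): a query on the tree of points around (u,v); reduced here to a single
// float (say, the nearest sample's distance) just to give it a signature.
using QueryFn = std::function<float(float u, float v)>;

// h(): turns a UV position plus whatever the query returned into a color.
using ColorFn = std::function<Color(float u, float v, float query)>;

// The 2D shader itself: color = f(u, v, g(), h()).  The functions that
// complete it are passed in, so how UV space gets painted stays open.
Color f(float u, float v, const QueryFn &g, const ColorFn &h) {
    return h(u, v, g(u, v));
}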




Jellybean wrote:

Painting could be done like the current vertex painter, or another way I can think of is to unwrap, or represent as a flat surface, only a small part at a time. You could use a selected face as a focus, and adjacent faces within a range would also show. You could move around the painting surface by selecting different faces on the model or one of the adjacent faces displayed on the unwrapped paint area. The further from the focus face, the more distortion there would be on the unwrap, so ideally you would want to keep the area being painted in or close to the focus.

I think the biggest problem with any alternative painted-texture method is the whole "special paint program" that would be needed to manipulate it. You lose the use of many great tools (paint programs et al.) from the texture generation process. Also, textures would become more model-dependent, making reuse between different models difficult. Some things to consider.


As is the case anyhow with UV mapping.. You would have to conform the image to a new surface; there is no reusability in UV mapping.. What we are both talking about is a revolution that has already occurred, it is just not being applied in quite this way: that of a shader.. Rather than applying it to the surface color, the color of the UV mapping is determined by another, 2D style of shader, and it need not even be 2D.. The point cloud that is defined, and how it is stored and queried, is what I think we are both thinking about.. This point cloud determines how the shader is influenced and how it paints the surface.. We are just not using a uniform 2D array to define the locations of these bits of color; we are using instead discrete points in a 2D vector-style space, with pixelized uniform imagery being a special case..

This level of description in surface imagery is only a problem if it is proprietary.. This is open source software; there is no need to be proprietary and thus make it impossible to share concepts.. It's also possible that if we use OOP languages to define this concept, it will be modular and reusable; it's questionable whether Maya or 3DsMax is as modular.. Note that open source challenges the foundation on which proprietary software builds its base; that's another purpose of open sourcing: to challenge the code integrity of the industry and of each other.

Quote:

I think the assignment of arbitrarily positioned color samples would add a level of complexity to the renderer greater than the benefits it would provide (just my opinion). Maybe it could be something applied at the texture generation level. Color samples, brush strokes, really any operation could be stored as a series of actions, allowing the editing or deletion of any previous action without having to undo every action taken since. The sequence of actions could be saved, and the texture rendered to a bitmap or vector color map. This way the renderer doesn't have to understand and work to extract information from more complex texture types.

Thoughts?
(sorry for the spelinng, I'm outta time)
[edit: Okay, spelling fixed. Razz ]


Exactly!!

The process you are describing, where each action is recorded, is like an operation stack, or a tree of operations. CSG, or constructive solid geometry, is a method of defining objects as hierarchies of boolean operations (like: subtract these two shapes and take the intersection of that to make a coffee cup). The same concept can be applied (and should be applied) to modelling; Blender I think will eventually have this feature as a result of making it more OOP and of many people trying to get features found in programs like Maya and 3DsMax. One thing Blender lacks now is operation hierarchies, but operation hierarchies tend to produce a somewhat inflexible method of modelling objects.. I'm thinking of something flatter.. The operations would be stored, but there wouldn't be any hierarchy of relationship; painting a feature here or there has a time relationship that determines what is on top of what, but not in a way where the operations affect the result of the operations that came before.. There is nothing saying (given Z-buffers) that the order in which stuff is painted should determine whether it is painted on top of, or behind, unless one paints stuff in the same Z-buffer location (this is the painter's algorithm, where what is behind what is determined by the order in which things are painted; in 3D, pixels are painted according to their distance from the observer). So yes, there would be a list of operations, but the list doesn't determine the order of drawing.. I think this is one reason Flash is different from a vector-based desktop publishing program..

Here is a bizarre concept: imagine the UV space is just a flattened 3D space.. So UV mapping is determined by another scene in Blender!! This would be a very quick way to implement this concept of UV mapping.. It would also force everyone seeking to implement Blender's textures in their app to use Blender as a plugin to their image painting!!


HA!! That's it!! And when Blender has hierarchical operations (a la object-oriented design/programming) it will translate to the UV space..

But the imagery produced by the flattened 3D space could be filtered and mapped..

Note that the Sequencer already does something like this; it has made me wonder why the texture mapping doesn't do it as well, since some texture artists use 3D objects to generate bump maps.

Think about it..

Vector graphics is a special case of 3D graphics (flattened geometry), and pixelized images are a special case of vector graphics (uniform meshes of square polygons). So by the transitive relationship (A = B, B = C, therefore A = C; this is the only thing I can remember from discrete math, everything else is a blur, but it's obvious too), 3D graphics could be used to map color onto 3D graphics by projecting (rendering) 3D geometry into a 2D space and onto another 3D object..


Does this sound reasonable or am I too eager to find a solution??


Well, I think this is a direct result of not getting sleep last night; I will probably get some sleep and think "what a dumb idea" tomorrow.. But I think this 3D -> 2D -> 3D is a cheap solution to the overall problem of coloring objects.. There will need to be some 2D-only vector graphics elaborations, but I think stuff that already exists could be reused.. And somewhere in the middle, put a 2D shader system to allow for 2D transformations on 3D imagery.. This is another method of reuse... Design with objects always comes back to thinking of things more simply and getting away with less, because any kind of complexity increases bugs and problems.. So it may have sounded like I went all over the place, but I was exploring the possibilities, and I'm happy I came back to a simple solution..

Sorry for being wordy..


thorax

Posted: Thu Apr 10, 2003 8:39 am
Joined: 27 Oct 2002
Posts: 321
You brought up the idea of implementing undo, Jellybean! That's actually something I've already considered; it would require, though, that Blender implement operations as objects.. Each operation is like an object wrapped around a previous operation, wrapped around a previous one, and so on and so on, with primitives glued into this tree of objects.. Kind of like a CSG tree, but the tree has the capability of communicating with itself in a very object-oriented (trickle-through) way.. That way, modifications made somewhere back down the process on one object will signal to all wrapping objects that operations need to be re-applied, rather than re-applying all the operations in order..

The creation of a scene is merged with the idea that the result is not merely what you see, but everything you did.. You can also write programs that thumb through these trees of operations and make tutorials on how to design objects, for instance, or dump them to a set of scripts (maybe! hoping everyone is linear in their approach and capable of only doing one thing at a time, otherwise we must deal with semaphores and chicken-or-egg problems).. Note that the result of modelling something may involve grafting operation sets onto branches of the operation tree that derived your result.. This is reuse of the design process.. But I think there is a devil in the details, as no operation is completely autonomous and discrete; some, like lathing operations, require curve primitives to work..
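
A bare sketch of the kind of wrapping I mean (every name here is invented): each operation wraps the one before it and caches its result, and a change upstream just marks everything downstream dirty so it recomputes on demand instead of replaying the whole history in order.

Code:

#include <memory>
#include <utility>

struct Mesh { /* vertices, edges, faces */ };

// An operation wrapped around a previous operation, wrapped around a previous
// one, and so on: the construction history as a chain of objects.
class Operation {
public:
    explicit Operation(std::shared_ptr<Operation> previous)
        : prev(std::move(previous)) {}
    virtual ~Operation() = default;

    // Ask for the result; recompute only if something upstream changed.
    const Mesh &result() {
        if (dirty) {
            cached = apply(prev ? prev->result() : Mesh{});
            dirty = false;
        }
        return cached;
    }

    // A change anywhere trickles forward as a dirty flag instead of
    // re-applying every operation in order right away.
    void invalidate() {
        dirty = true;
        if (next) next->invalidate();
    }

    Operation *next = nullptr;                  // the wrapper downstream, if any

protected:
    virtual Mesh apply(const Mesh &input) = 0;  // e.g. an extrude, a lathe...

private:
    std::shared_ptr<Operation> prev;
    Mesh cached;
    bool dirty = true;
};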

Also, on the concept of the texture in UV space: the problem was that images are represented with pixels, and ultimately the problem is not how UV maps are projected but that the source of the imagery is pixel-based.. I came up with a solution, after a lot of jabber, of using another scene (something Blender's Sequencer already does) to determine the coloring of imagery in the UV plane.. Then 2D shaders are applied to the UV plane and the result is projected onto geometry of a certain material.. Note that if we use a scene as input, we benefit from everything that already exists in Blender (total reuse), and it allows us to use 3D shaders (procedural textures) for painting 3D surfaces, using the UV mapping plane as a middle-man for the process..

I need to tell Ton about this.. This concept is indirectly related to the concept of allowing Blender to perform functionality like Flash and PowerPoint: combining 3D and 2D geometry to provide 2D output, and allowing a form of feedback to occur between the two as well..

This is almost metaphysical.. I mean, it's a kind of recursive perception, realizing how to create stuff and how stuff can be used.. The process used to texture an object uses the process of rendering an object.. It may turn out that geometry can be mapped to geometry this way as well (imagine mapping a 3D object depicting sheet metal with rivets onto an armed guard, then suddenly changing the bump map on the object to actual geometry that is perturbed out from the surface).. I think that's what the ZBrush stuff does.. If not, it's a different way of thinking about surface-conforming geometry..


Jellybean

Posted: Thu Apr 10, 2003 9:26 pm
Joined: 17 Nov 2002
Posts: 20
Okay, I'm trying to wrap my head around this 3D texture idea. I'm going to play off of the armed guard example you gave, as I think that is what really made it click what you are picturing. This will be a sort of different implementation of what you are referring to, to get an idea of what the end result will amount to. (I know this is different, but I'm trying to approach your idea from a different angle to see if I can intersect your train of thought.. Smile )

So, you have a basic polygon model of an armed guard, and you want to give him a nice steel arm, rivets and all. So you create a new model of a plate of steel, generously applying the rivets, and some rust (he really should take better care of his arm). Now you need to wrap the steel plate around his arm. Rather than modify the steel plate, you make an actor out of it. You give the steel plate a network of bones that correspond to the polygons that make up the guard's arm. Then you give the steel plate a pose, aligning the bones with the vertices of the guard's arm, and adding constraints to lock the bones to those vertices so that when the guard moves his arm, the steel plate follows. What you have now is a steel plate (which is still a flat model) rolled into an arm using the character animation system.

Does this get close to what you're thinking of?

(more on textures, undo, displacement maps and more to come later.)


thorax

Posted: Fri Apr 11, 2003 12:58 pm
Joined: 27 Oct 2002
Posts: 321
Jellybean wrote:
Okay, I'm trying to wrap my head around this 3D texture idea. I'm going to play off of the armed guard example you gave, as I think that is what really made it click what you are picturing. This will be a sort of different implementation of what you are referring to, to get an idea of what the end result will amount to. (I know this is different, but I'm trying to approach your idea from a different angle to see if I can intersect your train of thought.. Smile )

So, you have a basic polygon model of an armed guard, and you want to give him a nice steel arm, rivets and all. So you create a new model of a plate of steel, generously applying the rivets, and some rust (he really should take better care of his arm). Now you need to wrap the steel plate around his arm. Rather than modify the steel plate, you make an actor out of it. You give the steel plate a network of bones that correspond to the polygons that make up the guard's arm. Then you give the steel plate a pose, aligning the bones with the vertices of the guard's arm, and adding constraints to lock the bones to those vertices so that when the guard moves his arm, the steel plate follows. What you have now is a steel plate (which is still a flat model) rolled into an arm using the character animation system.

Does this get close to what you're thinking of?

(more on textures, undo, displacement maps and more to come later.)


Yes, you have an idea about object relationships, but I was also talking about the software relationship between the data that represents the objects.. A flexible visible object hierarchy depends a lot on a true internal object hierarchy. In Blender the hierarchies can fail if functions were not specifically designed to handle the relationships of the bones controlling the steel plate in reference to the arm. Since Blender is written in C, such scenarios may only apply to certain objects used in a certain way, while all other concepts that should be similar stay unrelated..

For example, I realized the other day that curves cannot be animated with relative vertex keys, despite the fact that all the objects use vertices and surface curves can be animated with relative vertex keys. If all of Blender had a concept of a vertex point cloud and performed transformations on point clouds instead of on specific geometries, then operations like relative vertex keying could be applied to any geometry that uses the same kinds of vertices.. This kind of relationship and functionality is possible in an object-oriented language, but it's not easily done in C unless you planned for it well enough ahead of time. However, it might be possible to refactor Blender's C code as C++ and add such relationships, so that functionality that is intuitive (like vertex keying) is inherited by all objects with vertices (which is pretty much everything).
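
A sketch of the kind of sharing I mean (purely illustrative; this is not how Blender is actually structured): if every geometry type exposes its points through the same interface, one relative-vertex-key routine can be written once and work for all of them.

Code:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Anything built out of points exposes them through the same interface.
class PointCloudGeometry {
public:
    virtual ~PointCloudGeometry() = default;
    virtual std::vector<Vec3> &points() = 0;  // meshes, curves, lattices...
};

// Relative vertex keys written once, usable by every geometry type that
// exposes its point cloud, instead of once per primitive.
void applyRelativeKey(PointCloudGeometry &geo,
                      const std::vector<Vec3> &offsets, float weight) {
    std::vector<Vec3> &pts = geo.points();
    for (std::size_t i = 0; i < pts.size() && i < offsets.size(); ++i) {
        pts[i].x += weight * offsets[i].x;
        pts[i].y += weight * offsets[i].y;
        pts[i].z += weight * offsets[i].z;
    }
}

// A curve now gets vertex keying "for free" just by exposing its points.
class CurveGeometry : public PointCloudGeometry {
public:
    std::vector<Vec3> &points() override { return controlPoints; }
private:
    std::vector<Vec3> controlPoints;
};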

This is one issue that needs to be cleared up before anything really significant is done to Blender.. This is what I suspect some of the coders Ton hired were trying to do with the code. The structure of the code, and the fact that it was designed in C, was causing trouble in the area of refactoring and making similar features the same feature. Part of it, I imagine, was code documentation.. But C was never designed to work as an object-oriented language; it was designed to work as a modular language.. You can do object-oriented programming in C, but it involves trying to manage data structures with void pointers, and when you manage data structures with void pointers there is no guaranteeing the integrity of your data structures, unless you are a really clever coder and are able to keep your p's and q's straight.

-------------------------

The Undo Concept..

Have you played with Maya? It's based on the same concepts: in Maya, as you model something, as you do anything, there is this construction history... While creating an object you select points, perform an operation, select points, perform another operation, while behind the scenes, when you select points on the object, you are indirectly specifying the data a function will act on (e.g. faces, vertices and edges are data types that can be affected by a function); what you do with this data is specified by the operation (the function) and by what you want to do (the function parameters)..

Traditionally in C, the operations you perform are separate from the data you are performing them on. For every kind of data there is a set of functions, and for everything you want to do with a function on the data you need a set of parameters, and to make these operations behave the same independent of the geometry (whether you use meshes, surfaces, etc.) you would need at least ten functions for every data type, and all these functions would require the same kinds of parameters (interfaces to what you want to do). The problem is that in C (which Blender is written in), unless you are clever, you will need to replicate code between the functions when you add functionality to your application, because in order to maintain an intuitive understanding of similar objects you have to constantly look for these relationships and code specifically for every case that might help make the functional interface intuitive. If you use an object-oriented language, this comes for free, and if you are careful with your structures you can avoid having to know too much about your objects.. It's finish and forget..

The problem with Blender is that if you want to add functionality to all the primitives, you have to add it to each one by taking all similar functions, and all similar data types, and translating what the new operation means for every primitive; then you must find all referencing functions and change them to reflect the change. I'm saying this could be minimized if Blender used a more object-oriented relationship between its features.. I think Ton would agree, because every time Ton added a feature to Blender it meant he had to go back and reinterpret the idiosyncrasies of every function and data type of all the "similar" geometries, in the interest of producing consistent behaviour.

The problem is that the more features you have to implement, the more stuff you have to remember and the more bugs can occur, and the longer the development time, and it only gets worse as you add more features.. With objects, however, you can add features quickly, because all considerations about similar functionality can be managed gradually over time and refactored into unifying object types that behave the same, and should..


In an object-oriented world there is no need for globals. Globals can still be used, but shared among objects, with the globals placed inside an object called a monitor and controlled such that no two objects can modify the same global at once. By putting global variables in an object you can also monitor access to them and determine misuse and overuse of globals by seeing which objects call the global variable monitor. There are little tricks like this with objects that make their use well worth the investment of time.. And it can be done without sacrificing the utility of a global variable..

Okay, that aside.. My idea for undo is best described by example; let's take a mesh..

You can think of mesh modelling as a series of operations performed on some basic primitive mesh, in order over time: you start with a square, you extrude the square into a cube, you extrude it, and extrude it, and so on until you have a character.. Each operation is dependent on the one before, using the edges and faces that resulted from it. With a normal undo you might copy the previous mesh and modify that, but this wastes a lot of space and eats up memory when working on large meshes..

You can think of this as a series of operations on data (the C-style thought process), or as method calls that result in new 3D objects, or as methods that change the existing object into something different.. There is no reason to copy the object every time; why not just store the differences between object modifications? Then make these differences OOP objects themselves, with methods that call to the past and to the future of the modifications of the 3D geometry, reusing the construction history so it is easy to redo the whole process of creating the object without doing everything over again.. It's also possible for the objects to communicate with each other and make changes that make sense: if you change the size of the square that you extruded, all geometry that was created based on that quad has its size changed in proportion..

Okay, what is with all this talk of objects; what are they?

The objects are programming concepts (referred to as OOP constructs, or object-oriented programming constructs). Unlike the objects you model, these objects are concepts about how to arrange code and data. Normally in C your code works on your data, but all your code (functions and control structures like "if" conditionals) is separate from your data.. Normally you use libraries to act on the data, and the data is maintained by a disjoint set of functions that take the data in through a parameter and spit it out either through another function parameter or via a return value.

Objects, on the other hand, organize the code with the data and make the data modifiable and accessible only by the code that has the most to do with it. This means only that object needs to know a lot about the way its data is handled; no other object should ever have to know much more than the methods that access the data related to it. So you don't want to have to write 7 functions to read 7 different kinds of image formats into an image format you can use; it's easier and better to create an image object that has methods to read 7 different file formats, and then methods for copying areas of the image, drawing lines on the image, etc.. Then you can write programs that use the image object, and you don't have to pass the image data into the functions that load the images, because you don't deal with the image data; the image object deals with the image data.. You just issue commands on the image object.. Also, by associating data with the code, you reduce the need to specify a lot of extra parameters for referring to the data on which you need to operate.. A function that draws a pixel on an image translates from:

image_type_to_return drawPixel(image_in, x,y, color);

to

image.drawPixel(x, y);

and

image.changePenColor(color);

There is no need to pass in an image, because the image is the object the method is called on, and the image is changed only by that method. So that drops two parameters you have to specify.. It also allows you to have multiple images, or image arrays, where you can write:

for (x = 1; x < 10; x++) {
    image[x].load(filename + number_to_string(x));
}

Instead of

image *images[10], *p;
char name[64];
for (x = 1; x < 10; x++) {
    images[x] = malloc(sizeof(image_type_jpeg));
    snprintf(name, sizeof(name), "%s%d", filename, x);
    p = load_from_jpeg(images[x], name);
    free(images[x]); images[x] = p;
}

// Yuck!!!!

So I think you can agree that things get hard to read really fast when working in C. And not only that: if you want to define something like a vector-based image (not using pixels), you can't inherit from the existing pixel-based image. You would have to write functions that convert vector image data structures to pixel image data structures. Whereas in C++ you could define a vector image that is both a pixel image and a vector image, and functions that work on the pixel image would work on the vector image.
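
A small sketch of that kind of relationship (the classes are hypothetical, and I've put a common Image base above both rather than deriving one from the other, which gives callers the same effect): anything written against the base image interface keeps working when the image underneath is vector-based.

Code:

#include <vector>

struct Color { float r, g, b; };

// The common interface every kind of image answers to.
class Image {
public:
    virtual ~Image() = default;
    virtual Color sample(float u, float v) const = 0;  // u, v in [0,1)
};

// The familiar case: a uniform grid of square pixels.
class PixelImage : public Image {
public:
    PixelImage(int w, int h) : width(w), height(h), pixels(w * h) {}
    Color sample(float u, float v) const override {
        int x = static_cast<int>(u * width);
        int y = static_cast<int>(v * height);
        return pixels[y * width + x];
    }
private:
    int width, height;
    std::vector<Color> pixels;
};

// A resolution-independent image: same interface, different representation.
// Anything written against Image (a renderer, a paint tool) works on both.
class VectorImage : public Image {
public:
    Color sample(float u, float v) const override {
        // evaluate whatever shapes or samples cover (u,v); a flat placeholder here
        return Color{0.8f, 0.2f, 0.2f};
    }
};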

----


An object is described by a class, which defines the behaviour of the object and of all its derived (or inherited) classes of objects.

The form of a class in an object-oriented language looks like the following...

Code:

#include <array>
#include <vector>

struct MaterialType { /* placeholder for a material description */ };
struct Matrix { float m[4][4]; };   // placeholder transform type

class Mesh {
    Mesh *encapsulated_mesh;            // the mesh before the last operation, or the
                                        // starting mesh if nothing came before it
    std::vector<int> faces_removed;     // geometry from the previous mesh that no longer
    std::vector<int> edges_removed;     // exists for this mesh to operate on.  If I extrude
    std::vector<int> vertices_removed;  // a quad off a cube, that quad is removed, so meshes
                                        // derived from this one can't reference it any more

    std::vector<std::array<float, 3>> vertices;  // list of vertices with 3D coordinates
    std::vector<std::array<int, 2>>   edges;     // edges defined as pairs of vertex indices
    std::vector<std::array<int, 4>>   faces;     // quads (or triangles) defined by connecting edges
    std::vector<MaterialType>         materials; // material for each numbered face

public:
    explicit Mesh(Mesh *ref) {   // mesh constructor (used to set the mesh up)
        reference(ref);          // obtains mesh data from somewhere else and uses it to
                                 // compute a new result based on whatever is done next
    }

    // Extrude a set of faces to a target that is the starting faces transformed by
    // some matrix (rotated, scaled, translated).  Returns a new Mesh object that
    // references the result of this extrude on the current mesh, discounting the
    // geometry that was removed as a result (see the comment on faces_removed).
    Mesh *extrude(const std::vector<int> &specific_faces, const Matrix &transform) {
        return new Mesh(this);
    }

    void reference(Mesh *ref) {
        encapsulated_mesh = ref;
        // this is used to determine selected vertices and faces; any changes on the
        // previous mesh are then recognized as modifications of the data that this
        // mesh was derived from
    }
};


I coded this out of my head, from what I know about mesh objects intuitively..

This class is an implementation of a mesh. It has a constructor method (called "Mesh") which initializes the mesh object and does the usual preparations to get it ready for work. It has a few other methods, "reference" and "extrude". Extrude extrudes any number of mesh quads, specified by some transformation matrix, and then generates a new mesh object with a reference to itself, so that the resulting object can call it or traverse back to it.

In a nutshell, this class is how I would implement undo and redo. The operations performed on the mesh result in new derivative meshes that undefine the geometry replaced by the operation (e.g. an extrude).. I say undefine instead of delete because we plan to be able to return to the geometry before it was modified, so geometry need not be deleted; it only needs to be flagged as non-existent. Then we base the new geometry on the geometry that can still be used (is not undefined). The purpose of this kind of relationship between the object type, its operations, the results of its operations and so forth is that it produces a kind of construction history, where previous versions of the mesh can be changed and the changes trickle through the design as a result. In some cases an operation performed in the past may cause the future that has been defined to no longer exist, or to be much different; in such cases the mesh could be substituted, but there is no need to throw away the construction history, as one could always selectively prune the futures from it.

Now, this kind of undo and redo and spawning of alternate branches (parallel universes?) relies on some careful thought about object modelling and the use of programmatic object classes (C++, Java, Smalltalk, etc.). Blender currently is a mix of C and C++, so it's questionable how easily it is maintained and improved, but it won't be very practical to implement an undo/redo operation until Blender is object-oriented..

