Regular UV is Passé

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

I'm going to elaborate on the UV texturing concept later

Post by thorax »

I had a video presentation prepared, but after watching it I decided I could describe the ideas carefully without being repetitive in clarifying them.. Ton asked that I clarify them, but it's not going to
be simple to describe in words; it will have to be done with
visual descriptions.. I think it will be quite intuitive and easy to
understand once it's finished.. But I don't want to drag you guys
through a video equivalent of one of my long messages..

I have a tendency to rough out my ideas without being too
careful and without really caring about the end result.. I realized that if I am to describe and detail this concept, I should put as much time
and care into it as I expect anyone to put into considering it for implementation.

I wish I could get paid credit like someone else here who did a redesign of the Blender interface as the subject of his thesis. I'm working on a side project which I'm pretty sure will also help with the redesign of the interface, or at least with understanding the interface. But that too, I feel,
is a basic concept that others will want to own once I release it.. That's okay, that's what ideas are for..

:)

Money_YaY!
Posts: 442
Joined: Wed Oct 23, 2002 2:47 pm

Post by Money_YaY! »

Sooo what is it ?

:twisted:

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

The idea explained yet again..

Post by thorax »

Money_YaY! wrote:Sooo what is it ?

:twisted:

Basically it is that your eye rays don't stop at the image map; they
are carried through to another image space.. The image space can be vector graphics, pixels, or a 3D scene (elsewhere in Blender).
But instead of rendering the image space to pixels and mapping that onto your object (by way of UV mapping), why not carry the ray-tracing (pardon the reference) on into the texture's image space, through to the
vector graphics and 3D scenes.. The point is that you eliminate pixels altogether, because you never reduce the image space to pixels, but use it as a kind of window into the other rendering space... It's like rendering
a scene to produce a bump map for an object, except you don't render the
scene to pixels; you sample it with the same rays you are sampling the object with (the object that has the image mapped to it)..
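A rough sketch of what that could look like in Python pseudocode (nothing here is real Blender code; ray_from_uv and trace are assumed methods of the texture's image space):

    # Sketch: a texture that is itself another rendering space. Instead of
    # baking that space to pixels and UV-mapping the result, each shading
    # sample is forwarded into it as a ray.

    class SceneTexture:
        def __init__(self, image_space):
            # image_space could be a vector drawing or a whole other scene;
            # all it needs is a way to turn (u, v) into a ray and trace it.
            self.image_space = image_space

        def sample(self, u, v):
            ray = self.image_space.ray_from_uv(u, v)   # assumed method
            return self.image_space.trace(ray)         # assumed method

    # In the renderer, the usual pixel look-up
    #     color = image.pixel(u * width, v * height)
    # would become
    #     color = scene_texture.sample(u, v)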

I've given Ton the video where I talk about this.. So maybe he will be able to figure out what I mean.. It's just hard for me to clarify because
it's quite different from anything I've seen in a 3D app.. It begs the question: why do we ever use pixels at all, considering that pixels are a subset of vector graphics, and vector graphics is a kind of subset of 3D graphics? Pixels are uniform sets of points; vector graphics can use infinitely precise lines, points, and curves; and 3D graphics uses vector graphics plus solid geometry and procedural textures.. It's also possible
that other 3D spaces could be used as source material for that source material.. So there is a whole area of experimental quasi-ray-tracing/rendering that hasn't been explored..

The value this has is that as you get closer to objects you have painted or
detailed with vector graphics or other 3D views, you will not have to deal with anti-aliasing issues (a problem with pixel based textures),
your imagery doesn't turn into pixels, and you can use procedural textures
creatively (not just as an alternative to painting them).

The only reason we are looking for seamless and detailed textures on objects is that we are using pixel based images as a source for our textures..
It's possible to create colorful and detailed textures without using pixel based imagery..

-----

8)

Jellybean
Posts: 20
Joined: Sun Nov 17, 2002 10:43 am

Post by Jellybean »

thorax wrote:It begs the question, why do we ever use pixels at all...
I'm going to pose an answer to this rhetorical question, not because I don't think your idea is great, but because I think it's overlooking something.

Pixels are oft used because they are simple.

Think of an artist working on a painting of a forest meadow, filled with bright and colorful wild flowers. If the artist were intent on painting every petal on each flower, every blade of grass, every leaf on the surrounding trees... well, I don't think you would ever get the picture, because the artist would still be working on it for many years to come. Instead, I'm sure the artist would use various brush techniques to give the impression of a vast array of flowers and lush trees.

Even with less detail, the impression of great detail is there. This is what pixels allow us to do: create the impression of detail greater than is actually there. Take a look at the displacement mapped model Money_YaY! linked to here. If that model were defined at the polygon level to that detail, the large number of polygons and the great amount of time it would take to manipulate them would make it impractical. The use of a pixel based displacement map allowed this model to have the impression of greater detail, even though its underlying polygon model is very simple.
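As a toy sketch of that trick (hypothetical code, not the model linked above): a displacement map fakes detail by pushing each vertex of a cheap mesh along its normal by a height read from the map.

    # Toy sketch: displacement from a height map. height_map is assumed to
    # expose a sample(u, v) lookup returning a scalar height.

    def displace(vertices, normals, uvs, height_map, strength=1.0):
        displaced = []
        for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
            h = height_map.sample(u, v) * strength
            displaced.append((x + nx * h, y + ny * h, z + nz * h))
        return displaced

    # The mesh stays simple; the impression of detail is capped only by the
    # resolution of the map.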

A pixel based texture does have a fixed amount of detail, but it gives a one to one representation, making it straightforward to understand and use. Procedural textures don't allow you to explicitly define the shape of the texture, and are like the "brush techniques" used in the painting to create the impression of greater detail without actually hand creating it. Vector mapped textures allow for a more precise representation, but now you must give attention to each detail. Mapping to another 3D model passes the responsibility for detail onto that model, but doesn't create any detail, neither surface texture nor color, beyond what is already explicitly created on each model.

I guess what I'm not seeing is: if you are mapping another 3D model as a texture onto a base model, where does the detail come from? If the detail is to come from the other model, the responsibility for detail now rests on that other model, which will either have to be modeled to the desired level of detail or make use of pixel based, vectored, or procedural texturing. In each case, all that has happened is a delaying of when the detail is added.

Mapping one model onto another adds a level of abstraction, but how does it really make adding detail any easier?

Of course, just as there are still life paintings which strive to be true to life in every detail, there is also a place for 3D models which are modeled fully to detail. Any tool or method which makes this easier is a good thing. Simplified texturing will never be the absolute solution, but neither will modeling to absolute detail; taken together, every method adds to the 3D artist's medium, which is a good thing.

cmccad
Posts: 0
Joined: Mon Apr 07, 2003 11:58 pm

Post by cmccad »

I think I see what thorax is trying to get at. From a programming point of view, thorax wants the texture of an object to be an object which can be one of several things... a pixel based image (Photoshop/GIMP/After Effects), a vector based image (Illustrator/Flash/Kontour), or a native Blender object (a view from an object or camera, a procedural texture, maybe something linked to IPO animation curves).

The idea is to be able to get a color/alpha value from the texture object in a resolution-independent way. If the texture is based on a pixel image, then the resolution (and therefore zooming without artifacts) is limited, but if the texture is based on one of the other methods, then it is possible in principle to zoom in infinitely without pixelation occurring.
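In code terms, something like this (a rough sketch; all class and method names are hypothetical, nothing here is real Blender API):

    # Every texture source answers the same resolution-independent question:
    # "what color is at (u, v)?"

    class PixelTexture:
        def __init__(self, image):
            self.image = image
        def sample(self, u, v):
            # Fixed resolution: zoom past the pixel grid and artifacts appear.
            x = int(u * (self.image.width - 1))
            y = int(v * (self.image.height - 1))
            return self.image.pixel(x, y)            # assumed accessor

    class VectorTexture:
        def __init__(self, shapes):
            self.shapes = shapes                     # curves, fills, gradients
        def sample(self, u, v):
            # Evaluated analytically, so it stays sharp at any zoom.
            for shape in self.shapes:
                if shape.contains(u, v):             # assumed predicate
                    return shape.color
            return (0.0, 0.0, 0.0, 0.0)

    class SceneTexture:
        def __init__(self, scene):
            self.scene = scene                       # another Blender scene
        def sample(self, u, v):
            ray = self.scene.ray_from_uv(u, v)       # assumed
            return self.scene.trace(ray)             # assumed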

Does this sound right?
Casey

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

Post by thorax »

cmccad wrote:I think I see what thorax is trying to get at. From a programming point of view, thorax wants the texture of an object to be an object which can be one of several things... a pixel based image (Photoshop/GIMP/After Effects), a vector based image (Illustrator/Flash/Kontour), or a native Blender object (a view from an object or camera, a procedural texture, maybe something linked to IPO animation curves).

The idea is to be able to get a color/alpha value from the texture object in a resolution-independent way. If the texture is based on a pixel image, then the resolution (and therefore zooming without artifacts) is limited, but if the texture is based on one of the other methods, then it is possible in principle to zoom in infinitely without pixelation occurring.

Does this sound right?
Casey


Right on the button!!!
:D :D :D

-----------------

Uhhh I think I answered too soon.. It sounds like you may not completely get the picture. We have two rendering spaces, rendering space A and rendering space B; each is like a complete Blender scene and can be rendered separately as such. Now say we put a suit of armor in
space A, and in space B we make a metal plating (flat) with rivets all around it. We can map the surface in space B to space A by rendering out space B to an image and mapping that with UV mapping onto the suit of armor in space A. But why render space B to pixels at all?

I mean, for every ray we cast in space A we cast a ray in space B,
corresponding to the ray in space A, oriented to the face on the suit of armor in space A. It's like ray-tracing, only we don't compute reflections or refractions; we carry the ray from one world into another.. So if you think of reflections, it's as if for each reflection we are computing the ray in a different world, ultimately returning to the original world to
compute the color of the surface.
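A rough sketch of that per-ray correspondence (all names are hypothetical; the interesting part is the mapping that takes a hit on the armor in space A to a ray in space B):

    BACKGROUND = (0.0, 0.0, 0.0)

    def render_sample(camera_a, space_a, space_b, x, y):
        ray_a = camera_a.eye_ray(x, y)                   # assumed
        hit = space_a.intersect(ray_a)                   # assumed
        if hit is None:
            return BACKGROUND
        if hit.material.texture_is_a_scene:
            # Carry the ray into space B instead of reading a pre-rendered
            # image of B: the UV mapping of the face that was hit decides
            # where the corresponding ray starts in B.
            ray_b = hit.face.ray_in_space_b(ray_a, hit)  # assumed mapping
            return space_b.trace(ray_b)                  # assumed
        return hit.material.shade(hit)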

------------------------ if the above confuses you, don't read the rest of this

There is also a case where ray-tracing is a subset of this new method of rendering textures, because the same world could be used to compute reflections: anywhere a ray hits a surface, we could dynamically generate a new camera oriented to the surface, pointing in the direction of the surface's normal, and render any objects that contribute to the color of the surface. Of course, the closer you get to the surface in space A, the more the resolution of the surface increases and the more the rendering of the reflection increases.. And for Phong-shaded polygons, the eye rays cast by the camera would need to be projected according to the Phong model. I'm not suggesting that this method be used for ray-tracing, but ray-tracing is essentially a subset of it, which says something about the capabilities of this method of rendering textures on objects. The value there is that if you are someone like me, who would rather render a scene than ray-trace it but every now and then would like the features of ray-tracing with the control of rendering, that becomes possible
if textures are computed as I describe in the text above..
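As a sketch of that special case (space B is space A itself, so forwarding the ray behaves like a mirror reflection; the reflection formula is the standard one, the surrounding names are made up):

    # r = d - 2 (d . n) n : reflect the incoming direction d about the
    # (Phong-interpolated) surface normal n, then trace it in the same scene.

    def reflect(d, n):
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

    def shade_as_mirror(hit, scene):
        out_dir = reflect(hit.ray_direction, hit.normal)   # hypothetical fields
        return scene.trace(hit.point, out_dir)             # hypothetical method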

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

Well, I haven't had time to redo these..

Post by thorax »

I made some videos earlier, but I think I was half awake when I did them..
I haven't had the time to really summarize what I was trying to
say in these videos, but if you are willing to sit through them
you will get a more complete idea of what I mean..

Maybe I need to make a summary video..

:idea: :idea: :idea: :idea: :idea: :idea: :idea: :idea: :idea:
http://www.bl3nder.com/ThisIsIt2.rm
http://www.bl3nder.com/PixelVsDots.rm
:idea: :idea: :idea: :idea: :idea: :idea: :idea: :idea: :idea:

You should get an idea of how this works from these
videos.. You might also look at a paper I found by
Alvy Ray Smith, the man who gave Pixar its name:

ftp://ftp.alvyray.com/Acrobat/6_Pixel.pdf

The topic is "A Pixel Is Not a Little Square"..

It's somewhat related, but my point is not so much the
shape of the pixel or what it represents, but why
it is used at all as a source of textures. Why not
let the source material be an image or a
Blender scene, and when it is a Blender scene, carry the samples
on through into that scene to sample the
texture itself? There is no reason to render a texture to
pixels, map the pixels onto an object, and then render that object.
It is possible to render the texture's scene onto the object by
passing point samples from the object's scene into the texture's scene.
Why include the pixel middleman at all?

The advantage: infinitely precise textures. As the camera gets
closer to the object being rendered, more samples are used to sample that object, so the texture is rendered with more samples;
it is not of a fixed resolution as it would be with an image map..
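A toy calculation of that advantage (the numbers are made up; 8 stands in for an OSA-like sample count per pixel):

    # As the object fills more of the screen, more eye rays hit it, so the
    # texture scene is sampled more densely "for free". A baked image map
    # tops out at its own resolution no matter how close the camera gets.

    MAP_TEXELS = 512 * 512          # hypothetical baked texture resolution

    def texture_samples(pixels_covered, osa=8):
        return pixels_covered * osa

    for covered in (1000, 100000, 1000000):      # far, mid, close-up
        print(covered, "screen px ->", texture_samples(covered),
              "samples into the texture scene;",
              "an image map caps at", MAP_TEXELS, "distinct texels")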

cmccad
Posts: 0
Joined: Mon Apr 07, 2003 11:58 pm

Post by cmccad »

Hmmm.

I am trying to understand what it is you are saying. I have come up with two different takes on your idea.
One take is that the points/lines/polys/nurbs/what-have-you (2d or 3d) from the texture space are flattened to 2d (retaining the "vectorness" of the original objects, *not* rasterized). This 2d view is then projected onto the (3d or 2d) polys/nurbs/what-have-you of the model, and this is your texture. To take this a step further, the model and the texture are flattened to the camera view, still retaining the "vectorness". This result is then
a) rendered to pixels (rasterized)
b) exported to a vector format
c) passed up to another layer (texture? post processor?)
Interesting questions:
What happens to shading? How would this be represented as a vector graphic? A simple gradient would not be very good.
Similarly, what happens to highlights?
Shadows would actually be easy, and very crisp, because there would be no limiting shadow map size.
Could motion blur be vectorized? This could be a *huge* timesaver.
What about volumetric effects (fog, volume lighting, flame, etc)? It's hard to imagine how this could be vectorized.

The other take is that the eye-ray gets passed from the camera (in the 3d scene) to the texture (usually a 2d surface), which translates this eye-ray according to the mapping on the object, and then sends the eye-ray into the texture space (can be 2d or 3d). The texture space returns a color value (for rendering to pixels - what would happen if the output were vector?), and then this is passed back up to the texture layer, which translates the color onto the object, and then is sent back to the camera.

Hmmm. This would mean that when the eye-ray is passed down to the texture & texture space, it will need a sample size (or shape? I'm thinking foreshortened polys mapped back to (rendered) square pixels, so the shape passed down would be semi-trapezoidal). This is starting to sound complicated. But at least once the vectors are converted to color samples (or irregularly shaped pixels?) we are back on solid ground, and other issues such as shading and highlights have already been solved.
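A sketch of that second take, with the sample footprint passed along (everything here is hypothetical naming, not an actual renderer):

    def sample_through_texture(hit, texture_space, pixel_footprint):
        # camera ray -> hit on the object -> UV mapping -> texture-space ray
        u, v = hit.uv
        ray_t = texture_space.ray_from_uv(u, v)          # assumed
        # A foreshortened surface makes one screen pixel cover a stretched,
        # roughly trapezoidal patch, so the footprint is widened by the
        # viewing angle before it is handed down.
        footprint = pixel_footprint / max(hit.cos_view_angle, 1e-4)
        # The texture space is assumed to filter its result over that
        # footprint and return a single color.
        return texture_space.trace(ray_t, footprint)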

This is really a great topic, and I thank you for such great ideas. But practically speaking, I don't see a massive switch from pixel to vector graphics happening. It is hard for me to imagine making a beat-up wood texture out of vector graphics, much less a convincing skin texture. At least, not without a *tremendous* amount of work (compared to taking snapshots with my digital camera). Still, the option should be there.

Casey

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

You are there..

Post by thorax »

cmccad wrote:Hmmm.

I am trying to understand what it is you are saying. I have come up with two different takes on your idea.
One take is that the points/lines/polys/nurbs/what-have-you (2d or 3d) from the texture space are flattened to 2d (retaining the "vectorness" of the original objects, *not* rasterized). This 2d view is then projected onto the (3d or 2d) polys/nurbs/what-have-you of the model, and this is your texture. To take this a step further, the model and the texture are flattened to the camera view, still retaining the "vectorness". This result is then
a) rendered to pixels (rasterized)
b) exported to a vector format
c) passed up to another layer (texture? post processor?)
Interesting questions:
What happens to shading? How would this be represented as a vector graphic? A simple gradient would not be very good.
Similarly, what happens to highlights?
Shadows would actually be easy, and very crisp, because there would be no limiting shadow map size.
Could motion blur be vectorized? This could be a *huge* timesaver.
What about volumetric effects (fog, volume lighting, flame, etc)? It's hard to imagine how this could be vectorized.

The other take is that the eye-ray gets passed from the camera (in the 3d scene) to the texture (usually a 2d surface), which translates this eye-ray according to the mapping on the object, and then sends the eye-ray into the texture space (can be 2d or 3d). The texture space returns a color value (for rendering to pixels - what would happen if the output were vector?), and then this is passed back up to the texture layer, which translates the color onto the object, and then is sent back to the camera.

Hmmm. This would mean that when the eye-ray is passed down to the texture & texture space, it will need a sample size (or shape? I'm thinking foreshortened polys mapped back to (rendered) square pixels, so the shape passed down would be semi-trapezoidal). This is starting to sound complicated. But at least once the vectors are converted to color samples (or irregularly shaped pixels?) we are back on solid ground, and other issues such as shading and highlights have already been solved.

This is really a great topic, and I thank you for such great ideas. But practically speaking, I don't see a massive switch from pixel to vector graphics happening. It is hard for me to imagine making a beat-up wood texture out of vector graphics, much less a convincing skin texture. At least, not without a *tremendous* amount of work (compared to taking snapshots with my digital camera). Still, the option should be there.

Casey

Don't take the concept of the vector too literally, and don't confuse it with
the concept of a point vector (a point with a magnitude and direction).
One use of the word "vector" is more common in 3D graphics; the other is
used to describe a type of graphics. A normal to a surface, for instance,
describes the direction of a surface with a magnitude (or length) of one (which is called a normalized vector, or a normal). Vector
graphics, on the other hand, is just connected points and lines; how
the lines and points are stored determines how efficiently vector graphics are used. A pixel array could even be represented with vector graphics,
by storing the pixel values as usual but referencing each one as a pixel of a certain shape laid out uniformly across a 2D space.

When a pixel is rendered on the screen, it represents the averaging of
many samples (eye rays).. The more you use, the more detail is perceived in a pixel. If you render a scene in Blender with "OSA"
off, that's like casting one eye ray per pixel, but when you use OSA,
Blender renders many sample points for each pixel, yielding a
more detailed-looking result.. If you draw a line of sub-pixel
width through a pixel array, the line either contributes all of its color to
each pixel it touches or only a fraction of it. If all of its color is contributed, the pixels approximating the line look jagged, whereas if the
pixels inherit only the fraction of color that the line's coverage represents,
there are fewer jaggies in the pixel representation of the line..
This is usually how anti-aliasing is performed for 2D graphics.

But if you store the pixels and not the lines that the pixels are approximating,
you get only a representation of the line, not the line itself.. So just by going to pixels, information is lost.. What I should probably say is: eliminate
the pixels at the surface-texture stage, represent textures
with 2D or 3D graphics rather than pixel based images, and delay
the averaging of eye-ray samples until the final render of the image..
It's even possible for the final image to be a vector format as well, but
as you say this can be cumbersome.
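A small sketch of that coverage argument (a horizontal analytic line tested against random sub-pixel samples, the way OSA-style supersampling estimates coverage; the helper is made up for illustration):

    import random

    def line_coverage(pixel_y, line_y, thickness, samples=64):
        # Fraction of random sub-pixel samples that fall inside the line.
        inside = 0
        for _ in range(samples):
            sy = pixel_y + random.random()           # random offset within the pixel
            if abs(sy - line_y) <= thickness / 2.0:  # analytic test, no pixels yet
                inside += 1
        return inside / samples

    # Anti-aliased: the pixel gets line_color * coverage (soft edge).
    # Aliased:      the pixel gets the full line_color whenever coverage > 0.
    # Either way, only the averaged pixel is stored; the line itself is gone.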

I think you have the idea, except I don't think the eye-ray samples have any dimension; they are samplings of a surface.. But the samplings are
most likely random samplings at a sub-pixel level that are averaged irrespective of their total contribution to the color area of the pixel..
The way 2D lines are rendered in Blender may be different from the
way 3D lines are rendered, in that the 3D line may be randomly sampled
while the 2D lines are computed as lines of a certain thickness and their
total contribution to the coloring of the pixel. However, the color of the
pixel could also be determined by using sets of uniformly distributed
samples across the line. This would be like taking a high-res image of a
line and smooth-scaling it to a quarter of its size in a paint program like GIMP. The question is, why is the average of those details stored and not the
actual details? And why not represent texture details as edges, points and polygons rather than pixels?

I'm sure you are aware of what I'm talking about, and I can see that
you understand from the way you talk about it; I'm just
outlining some other information that may be helpful in understanding what I'm trying to do. Vector graphics is just a subset of
3D graphics, so either can be used, but at best 3D graphics is
used. And my discovery is that Blender can render multiple scenes,
so I thought it would be neat to use another scene as a source of texture
information, without averaging point samples twice in the process;
it seems redundant, and it doesn't make sense to approximate an
infinitely precise shape with a pixel and then use that as a
source of texturing information for another render. The only way
it would make sense is if the texture were rendered ahead of time at a resolution higher than that of the scene in which the textured object
is rendered, and that is inefficient. It would be better just to
carry the eye-rays through to the second scene and use those rays to contribute to the pixel values in the object's scene.

I am sure you know exactly what I'm talking about; I'm
just smoothing out the edges of the idea..

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

New method for streaming realvideo (news to me)

Post by thorax »

To watch the videos I made on RealVideo (if you have a modem connection), use these links.. You will need to enable
"Instant Playback" under
"Tools->Preferences->Connection->Playback Settings" in the
RealOne Player to play these videos.. If you have ZoneAlarm or a
firewall, it is advisable to block the RealPlayer from acting as a server
or having the real-scheduler contact the net.. Real is known to use spyware..

http://www.bl3nder.com/ThisIsIt2.ram
http://www.bl3nder.com/PixelVsDots.ram
