cmccad wrote: Hmmm.
I am trying to understand what it is you are saying. I have come up with two different takes on your idea.
One take is that the points/lines/polys/NURBS/what-have-you (2D or 3D) from the texture space are flattened to 2D, retaining the "vectorness" of the original objects rather than being rasterized. This 2D view is then projected onto the (3D or 2D) polys/NURBS/what-have-you of the model, and that is your texture. Taking it a step further, the model and the texture are flattened to the camera view, still retaining the "vectorness". This result is then
a) rendered to pixels (rasterized)
b) exported to a vector format
c) passed up to another layer (texture? post processor?)
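Roughly, I'm picturing something like this (a made-up Python sketch, nothing Blender-specific): the shapes stay vector data from texture space, onto the model, and through the camera projection, and only the very last step decides whether to rasterize (a) or export vectors (b).

```python
# Hypothetical sketch of the first take: a texture-space triangle is mapped
# onto a model surface and projected to the camera while remaining vector data.
# Names and the stand-in model are invented for illustration only.

def texture_to_surface(u, v):
    """Map a texture-space point onto a flat quad in 3D (a stand-in model)."""
    # quad spanning x, y in [-1, 1] at depth z = 4
    return (2.0 * u - 1.0, 2.0 * v - 1.0, 4.0)

def project_to_camera(p, focal=2.0):
    """Simple pinhole projection; the point stays a point, not a pixel."""
    x, y, z = p
    return (focal * x / z, focal * y / z)

def to_svg_polygon(points_2d, scale=100, color="black"):
    """Option (b): export the projected shape as an SVG polygon string."""
    pts = " ".join(f"{scale * x:.2f},{scale * y:.2f}" for x, y in points_2d)
    return f'<polygon points="{pts}" fill="{color}"/>'

# a triangle living in texture space (vector data, no pixels anywhere)
triangle_uv = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.8)]
projected = [project_to_camera(texture_to_surface(u, v)) for u, v in triangle_uv]
print(to_svg_polygon(projected))
```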
Interesting questions:
What happens to shading? How would this be represented as a vector graphic? A simple gradient would not be very good.
Similarly, what happens to highlights?
Shadows would actually be easy, and very crisp, because there would be no shadow-map size to limit them.
Could motion blur be vectorized? This could be a *huge* timesaver.
What about volumetric effects (fog, volume lighting, flame, etc)? It's hard to imagine how this could be vectorized.
The other take is that the eye-ray gets passed from the camera (in the 3D scene) to the texture (usually a 2D surface), which translates the eye-ray according to the mapping on the object and then sends it into the texture space (which can be 2D or 3D). The texture space returns a color value (for rendering to pixels; what would happen if the output were vector?), which is passed back up to the texture layer, which translates the color onto the object and sends it back to the camera.
Hmmm. This would mean that when the eye-ray is passed down to the texture and texture space, it will need a sample size (or shape? I'm thinking of foreshortened polys mapped back to (rendered) square pixels, so the shape passed down would be semi-trapezoidal). This is starting to sound complicated. But at least once the vectors are converted to color samples (or irregularly shaped pixels?) we are back on solid ground, and other issues such as shading and highlights have already been solved.
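In rough Python (all names invented, not anything Blender actually exposes), that second take might look like this: each eye-ray sample carries its UV coordinate straight into a resolution-independent texture space, and no pixel grid appears until the final image.

```python
# Minimal sketch of the second take, with made-up names (not Blender's API):
# each eye-ray sample is pushed through the object's UV mapping and evaluated
# directly in a resolution-independent texture space, so the only averaging
# happens once, at the final pixel.

from dataclasses import dataclass

@dataclass
class Circle:                     # one primitive of a "vector" texture
    cx: float
    cy: float
    r: float
    color: tuple

    def contains(self, u, v):
        return (u - self.cx) ** 2 + (v - self.cy) ** 2 <= self.r ** 2

def eval_vector_texture(shapes, u, v, background=(0.0, 0.0, 0.0)):
    """Colour of the top-most shape containing (u, v); no pixel grid involved."""
    for shape in reversed(shapes):          # painter's order, last drawn on top
        if shape.contains(u, v):
            return shape.color
    return background

def shade_sample(uv_hit, shapes):
    """One eye-ray sample: UV from the surface hit, straight into texture space."""
    u, v = uv_hit
    return eval_vector_texture(shapes, u, v)

# e.g. a red dot painted on the surface, sampled exactly where the ray lands
dots = [Circle(0.5, 0.5, 0.1, (1.0, 0.2, 0.2))]
print(shade_sample((0.52, 0.48), dots))     # inside the dot -> red
```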
This is really a great topic, and I thank you for such great ideas. But practically speaking, I don't see a massive switch from pixel to vector graphics happening. It is hard for me to imagine making a beat-up wood texture out of vector graphics, much less a convincing skin texture. At least, not without a *tremendous* amount of work (compared to taking snapshots with my digital camera). Still, the option should be there.
Casey
Don't take the concept of the vector too literally, and don't confuse it with the concept of a point vector (a point with a magnitude and direction). One use of the word "vector" is more common in 3D graphics; the other is used to describe a graphics type. A normal to a surface, for instance, is a vector describing the direction of the surface with a magnitude (or length) of one (which is called a normalized vector, or a normal). Vector graphics, on the other hand, is just connecting points and lines; how the lines and points are stored determines how efficient the vector representation is. A pixel array could be represented with vector graphics by storing each pixel value as normal but referencing it as a shape of a certain size oriented uniformly across a 2D space.
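As a toy illustration of that last remark (invented names, just a sketch): a raster image is nothing more than a very regular set of shapes.

```python
# A pixel array expressed in "vector graphics" terms: each colour value is
# kept, but described as a uniformly placed square shape on a 2D plane.
# Names are made up purely for illustration.

from dataclasses import dataclass

@dataclass
class PixelSquare:
    x: int          # grid position
    y: int
    size: float     # the same side length for every "pixel" shape
    color: tuple

def pixels_to_shapes(rows, size=1.0):
    """Turn a 2D array of colours into a list of uniformly oriented squares."""
    return [PixelSquare(x, y, size, c)
            for y, row in enumerate(rows)
            for x, c in enumerate(row)]

# a 2x2 image, now stored as four square primitives instead of a raster buffer
shapes = pixels_to_shapes([[(1, 0, 0), (0, 1, 0)],
                           [(0, 0, 1), (1, 1, 1)]])
print(len(shapes), shapes[0])
```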
When a pixel is rendered on the screen, it represents the averaging of many samples (or eye rays). The more you use, the more detail is perceived in a pixel. If you render a scene in Blender with "OSA" off, that's like rendering one eye ray per pixel, but when you use OSA, Blender renders many sample points for each pixel, yielding a look with more perceived detail. If you draw a line of sub-pixel width through a pixel array, the line either contributes a fraction of its color to the pixel color or it contributes all of its color. If all of its color is contributed, the pixels approximating the line look jagged, whereas if the pixels inherit only the fraction of color that the line covers, there will be fewer jaggies in the pixel representation of the line. This is usually how anti-aliasing is performed for 2D graphics. But if you store the pixels and not the lines that the pixels are approximating, you get only a representation of the line, not the line itself. So just by going to pixels, information is lost.
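Here is a small plain-Python sketch of that difference (not Blender's internals, just an illustration): one eye ray per pixel versus many averaged sub-pixel samples, for a line of sub-pixel thickness crossing a row of pixels.

```python
# One sample per pixel is all-or-nothing; averaging many sub-pixel samples
# keeps the fraction of the pixel the line actually covers.

def line_coverage(x, y, y0=1.3, thickness=0.4):
    """The 'scene': 1.0 if the sample point lies inside a thin line, else 0.0."""
    return 1.0 if y0 <= y <= y0 + thickness else 0.0

def render_row(width, samples_per_axis):
    """Average samples_per_axis**2 sub-pixel samples for each pixel in row y=1."""
    n = samples_per_axis
    row = []
    for px in range(width):
        total = 0.0
        for i in range(n):
            for j in range(n):
                sx = px + (i + 0.5) / n          # sub-pixel sample position
                sy = 1 + (j + 0.5) / n
                total += line_coverage(sx, sy)
        row.append(total / (n * n))
    return row

print(render_row(4, 1))   # "OSA off": each pixel takes all of the line's colour
print(render_row(4, 4))   # "OSA on": pixels keep only the fractional coverage
```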
What I should probably say is this: eliminate the pixels at the surface-texture stage, have textures represented with 2D or 3D graphics rather than pixel-based images, and delay the averaging of eye-ray samples until the final render of the image. It's even possible that the final image could be in a vector format as well, but as you say, that can be cumbersome.
I think you have the idea, except I don't think the eye-ray samples have any dimension; they are point samples of a surface. The samples are most likely random samplings at a sub-pixel level that are averaged irrespective of their individual contribution to the color area of the pixel. The way 2D lines are rendered in Blender may be different from the way 3D lines are rendered, in that the 3D line may be randomly sampled while the 2D lines are computed as lines of a certain thickness and their total contribution to the coloring of the pixel. However, the color of the pixel could also be determined by using sets of uniformly distributed samples across the line. This would be like taking a high-resolution image of a line and smooth-scaling it down to a quarter of its size in a paint program like GIMP. The question is why the average of those details is stored and not the actual details, and why not represent texture details as edges, points, and polygons rather than pixels.
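The GIMP analogy in a few lines of Python (purely illustrative): box-filtering a high-resolution image of a line down to a quarter of its size stores only the block averages, and the exact edge is gone.

```python
# "Smooth-scaling" an 8x8 binary image of a line down to 2x2 keeps only the
# average of each 4x4 block -- the line itself cannot be recovered afterwards.

def smooth_scale_quarter(image):
    """Box-filter an 8x8 image down to 2x2 by averaging 4x4 blocks."""
    size = len(image) // 4
    small = []
    for by in range(size):
        row = []
        for bx in range(size):
            block = [image[4 * by + y][4 * bx + x]
                     for y in range(4) for x in range(4)]
            row.append(sum(block) / len(block))
        small.append(row)
    return small

# an 8x8 image containing a one-pixel-wide diagonal line
hi_res = [[1 if x == y else 0 for x in range(8)] for y in range(8)]
print(smooth_scale_quarter(hi_res))   # [[0.25, 0.0], [0.0, 0.25]] -- averages only
```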
I'm sure you are aware of what I'm talking about, and I can see from the way you talk about it that you understand; I'm just outlining some other information that may be helpful in understanding what I'm trying to do. Vector graphics is just a subset of 3D graphics, so either can be used, but at best the 3D graphics are used. My discovery is that Blender can render multiple scenes, so I thought it would be neat to use another scene as a source of texture information without averaging point samples twice in the process. That seems redundant: it doesn't make sense to approximate an infinitely precise shape within a pixel and then use that as a source of texturing information for another render. The only way it would make sense is if the texture were rendered ahead of time at a resolution higher than that of the scene in which the textured object is rendered, and that is inefficient. It would be better just to carry the eye-rays through to the second scene and use those rays to contribute to the pixel values in the scene of the object.
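In rough Python pseudocode (every name here is hypothetical; Blender exposes nothing like this directly), the two pipelines I'm contrasting look like this:

```python
# Every function argument here (trace, bake, lookup, sample) stands in for a
# hypothetical renderer hook. Pipeline A averages samples twice; pipeline B
# forwards each eye-ray sample into the texture scene and averages only once,
# at the final pixel.

def render_with_baked_texture(object_scene, texture_scene, trace, bake, lookup):
    """Pipeline A: rasterize the texture scene first, then sample that image."""
    baked_image = bake(texture_scene)              # first round of averaging
    def shade(hit):
        # the surface can only see already-averaged pixels of the texture
        return lookup(baked_image, hit.uv)
    return trace(object_scene, shade)              # second round (per-pixel OSA)

def render_with_forwarded_rays(object_scene, texture_scene, trace, sample):
    """Pipeline B: forward each eye-ray sample into the texture scene itself."""
    def shade(hit):
        # sample the texture scene exactly where this eye-ray lands, unaveraged
        return sample(texture_scene, hit.uv)
    return trace(object_scene, shade)              # the only averaging step
```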
I am sure you know exactly what I'm talking about; I'm just smoothing out the edges of the idea.