Isn't multipass programmable shading the future of CG?

Blender's renderer and external renderer export

Moderators: jesterKing, stiv

Posts: 1
Joined: Wed Oct 16, 2002 5:26 pm

Isn't multipass programmable shading the future of CG?

Postby dubois » Sat Oct 19, 2002 12:24 am

I'm not an expert on this, but I read an article a while back which gave me the impression that the future of rendering lies in multipass shading compilers rather than "brute force" techniques like ray tracing.

Below is an excerpt from the article, which is at

(A few years ago) Mark S. Peercy and his colleagues from SGI presented a paper entitled Interactive Multi-Pass Programmable Shading.
Peercy's paper demonstrated precisely how to translate complex shading code, in this case programs written in the RenderMan Shading Language used at places like Pixar, into OpenGL rendering passes using a compiler. The compiler would accept RenderMan shading programs and output OpenGL instructions.
One key observation allows shaders to be translated into multi-pass OpenGL: a single rendering pass is also a general SIMD instruction—the same operations are performed simultaneously for all pixels in an object.
A shader computation is broken into pieces, each of which can be evaluated by an OpenGL rendering pass. In this way, we build up a final result for all pixels in an object.
Peercy's demonstration compiler was able to produce output nearly identical to RenderMan's built-in renderer.
The implications were simple but powerful: this method would enable consumer graphics chips to accelerate the rendering of just about "anything." Even if the graphics chips couldn't handle all the necessary passes in real time, they could generate the same output far faster than even the speediest general-purpose microprocessor.
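To make that concrete, here's a rough sketch of my own (not from the article) of the kind of pass decomposition such a compiler would emit, written as plain fixed-function OpenGL in C. draw_object() is a hypothetical helper standing in for whatever submits the geometry:

[code]
/* Accumulate a two-term shading expression (diffuse + specular) over two
 * rendering passes.  The framebuffer acts as the accumulator: blending with
 * GL_ONE, GL_ONE adds the second pass to the first for every covered pixel,
 * which is exactly the per-pixel SIMD behaviour described above. */
#include <GL/gl.h>

void draw_object(void);   /* hypothetical helper: submits the object's geometry */

void render_diffuse_plus_specular(void)
{
    /* Pass 1: diffuse term only. */
    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);
    /* ... bind the diffuse texture / material state here ... */
    draw_object();

    /* Pass 2: specular term, added on top of pass 1.  GL_EQUAL keeps only
     * the fragments that already won the depth test in pass 1. */
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    /* ... bind the specular / environment map state here ... */
    draw_object();

    /* Restore default state. */
    glDisable(GL_BLEND);
    glDepthFunc(GL_LESS);
}
[/code]

A real compiler would of course generate many more passes and use the texture units and framebuffer copies as temporaries, but the principle is the same.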

My impression is that graphics processor development (driven by the gaming industry) will soon allow these multipass shading methods to match anything that can be accomplished via ray tracing, in far less time.

For those of you in the know, does this seem like an accurate view of the future of CG? Doesn't it mean that focusing on a tightly integrated raytracer for Blender would be something of a waste of time?


Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Sat Oct 19, 2002 9:33 pm

Until OpenGL hardware can accelerate a ray tracing function, I doubt this will be useful.

Reason: OpenGL and all framebuffer/scanline-based systems CANNOT do ray tracing. There are still MANY things (GI, caustics, refraction, radiosity (well, you can, but it looks awful)) that simply cannot be done without ray tracing.

Still, it's DEFINITELY something to consider. People who want to see a scanliner remain Blender's default would be very interested in such a method. This is similar to NVIDIA's Cg initiative and ATI's RenderMonkey, correct?

Posts: 13
Joined: Sun Oct 20, 2002 12:16 am
Location: Georgia Tech, Atlanta

Postby markluffel » Sun Oct 20, 2002 12:45 am

A group called VM Labs supposedly created a ray tracing graphics chip four years ago. Read:

After seeing all of the lighting tricks that Bungie put into Halo, I'm wondering about the need for ray tracing too. I think that running a radiosity thread alongside a real-time renderer is the next hybrid approach. Most of that secondary illumination is fairly static; when something in the scene changed, the radiosity calculations would just take more time slices.
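Just to sketch what I mean (assuming a pthreads setup; scene_changed, solve_radiosity_into() and swap_lightmap() are made-up placeholders, not anything that exists in Blender):

[code]
/* Background radiosity thread: the real-time renderer keeps drawing with the
 * current lightmap while this thread re-solves the (mostly static) indirect
 * lighting only when something in the scene has changed. */
#include <pthread.h>
#include <unistd.h>

extern volatile int scene_changed;           /* set by the editor / game logic */
void solve_radiosity_into(void *lightmap);   /* assumed: slow GI solve */
void swap_lightmap(void *lightmap);          /* assumed: hands the result to the renderer */

static void *radiosity_thread(void *lightmap)
{
    for (;;) {
        if (scene_changed) {
            scene_changed = 0;
            solve_radiosity_into(lightmap);  /* takes as many time slices as it needs */
            swap_lightmap(lightmap);
        } else {
            usleep(10000);                   /* lighting is static, so stay idle */
        }
    }
    return NULL;
}
[/code]

You'd launch it once with pthread_create(&tid, NULL, radiosity_thread, lightmap) and let it run alongside the display loop.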

Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Postby dreamerv3 » Sun Oct 20, 2002 4:01 am

MrMunkily: Um, could you please provide some proof of the awful looks of alternate methods of simulating caustics using pixel and vertex programs running on 3D hardware?

Odds are you cannot, because hardware-accelerated caustics haven't yet been implemented, even though they're very possible with languages like Cg from NVIDIA, which also works with DirectX. A RenderMan-derived shading language is in development at 3Dlabs and will be used in OpenGL 2.0.

Maybe there's a better way than ray tracing, and until you explore the possibilities, don't shrug off progress unless you have definitive proof that it doesn't work at all, or works so horribly that it wouldn't be suitable even for games.

I think that Blender should, as a matter of course, use OpenGL 2.0 for ALL of its rendering.

At the next SIGGRAPH, 3Dlabs will be showing us things which can be achieved in OpenGL 2.0 that will make all these pleas for raytracers seem, well, foolish.

When you've got 3D chips with 100 million transistors to throw at rendering, don't waste time developing a software-based dinosaur that is orders of magnitude slower.

C'mon wake up!

Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Sun Oct 20, 2002 9:56 am

Understood. I cannot implement it, since I am not a programmer. If you can show me an example of an OpenGL scene (or any scene, for that matter) rendered on a standard-issue GPU that has the effects you speak of, then I will jump on the bandwagon immediately. However, I haven't seen the kind of staggering quality associated with ray- and photon-based effects EVER rendered on any graphics card. HOWEVER, I can think of at least one way that it (skydome / regular radiosity) could be done, theoretically (I can elaborate later if needed). As for OpenGL 2 rendering as a primary renderer, I'm all for it, provided that a tried and true software renderer is kept in reserve. I know that it will take a while at least before this will provide production-quality images :)

Let's wait for 3Dlabs to show us how it's done, then, shall we? Or perhaps there are already examples? I look from a pure artist's/user's perspective, and I judge primarily by what I see. I prefer Blender's renderer in visual quality over, say, BMRT, which gives things a different and generally more putrid tone, yet I believe the overall feel of the Lightflow renderer is more polished, glossy when necessary, and artist-y (its interface leaves something to be desired, to say the least). I'm just trying to illustrate my point: if by implementing this we don't actually gain any features (aside from lightning-fast speed), we're not going to get Blender into the big leagues (unless a secondary raytracer that can fill in the missing functions is also supplied). And if GPUs can suddenly perform photon/ray GI calculations, then hallelujah!

Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Postby Jamesk » Tue Oct 22, 2002 8:17 am

There's no doubt that the original scanline renderer of Blender is very, very good: fast, reliable and flexible. A multipass capability would make things even better. Lots of things can be accomplished without ray tracing, especially with clever use of standard texturing. This is particularly true in a standard animation production, where raytrace methods simply don't cut it because of their high processing demands. Multipassing would make the standard scanliner unstoppable!

Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Wed Oct 23, 2002 12:58 am

[quote]This is particularly true in a standard animation production, where raytrace methods simply don't cut it because of their high processing demands.[/quote]

I take it you've never used mental ray.

Raytracing is ready for production.

Posts: 107
Joined: Wed Oct 16, 2002 5:31 am

Postby jeotero » Wed Oct 23, 2002 3:22 am


Posts: 10
Joined: Mon Oct 14, 2002 8:43 am

Postby Noiprox » Thu Oct 24, 2002 2:30 am

I think OpenGL 2.0 and multipass, hardware-accelerated rendering are features that will almost certainly be in Blender 3. They will probably not make it into the current Blender architecture, though, so be patient... I do think they will be... rather... nice, though. :)

[quote]This is particularly true in a standard animation production, where raytrace methods simply don't cut it because of their high processing demands.[/quote]

That is ludicrous.

Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Thu Oct 24, 2002 2:34 am

By definition, photon/ray-based GI requires a raytracer. However, I just had an idea about how we could do hemicube radiosity/GI faster (and therefore better, since the quality could be upped)...

Using the CPU to render the hemicube is not ideal. If we could use the GPU to render the hemicube (per sample in the image), grab the image from the framebuffer, convert it into an illumination value based on the amount of 'lit' faces in the sampled hemicube, and then bake it into a texture to put on the geometry, that would be a way to use the hardware accelerator in the graphics card to make the calculations go a lot faster.
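Roughly, the readback step I'm imagining would look something like this (just a sketch; render_hemicube_face() is a made-up helper that would place the camera at the sample point and draw the scene for one face):

[code]
/* Render the five faces of a hemicube on the GPU, read them back with
 * glReadPixels, and collapse them into one illumination value for the sample.
 * A proper version would weight each pixel by its hemicube form factor
 * instead of taking a plain average. */
#include <GL/gl.h>
#include <stdlib.h>

void render_hemicube_face(int face);   /* hypothetical: draws the scene for one face */

float sample_illumination(int width, int height)
{
    unsigned char *pixels = malloc(width * height * 3);
    double sum = 0.0;
    int face, i;

    for (face = 0; face < 5; face++) {
        render_hemicube_face(face);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        for (i = 0; i < width * height * 3; i++)
            sum += pixels[i] / 255.0;           /* how "lit" is this face? */
    }

    free(pixels);
    return (float)(sum / (5.0 * width * height * 3.0));
}
[/code]

The glReadPixels readback would probably be the bottleneck on current cards, but it should still beat rendering the whole hemicube on the CPU.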

Am I nuts? Would this work? If it would, would it be better/faster than before? (I'm talking about the accelerator thing, not the texture thing; I know there are issues/advantages to that.)

Posts: 75
Joined: Thu Oct 17, 2002 9:39 pm

Postby Zsolt » Sun Oct 27, 2002 6:34 pm

This is what I first thought of when reading about the ATI Radeon 9700. It has this feature where the rendering output of e.g. RenderMan or 3DMax is converted on the fly into instructions the GPU (VPU = Visual Processing Unit, as ATI likes to call it) of the Radeon can process, basically meaning that the card can do scanline or ray tracing rendering in hardware. This means dozens of times faster rendering. Not real-time ray tracing, but much closer to it than even the best "brute force" rendering PC. Of course it has limits to what it can do, but from what I've seen of it, it looks great.

This must be the way of the future. More and more such cards will appear (like the NV30), so sooner or later it should be natural for a modern 3D program to provide output directly to the video card. (Hint hint: Blender... too bad I'm not a programmer :)

