
High speed (low quality) hardware accelerated rendering

Posted: Sat Nov 06, 2004 3:43 pm
by johnb
Has anyone thought of (or even better, tried implementing) using the power of consumer graphics hardware to perform high-speed renders? Not realtime, but much faster than software-only rendering. I guess it would only be practical on hardware that supports pixel shaders (to provide per-pixel lighting and so on), and of course many effects available in Blender would be impossible with current hardware. Even with those difficulties, though, I think a system that produces draft-quality renders at relatively high speed would be very useful: animations and complex scenes could be rendered quickly during development, giving feedback much faster, and the normal Blender renderer, Yafray, or some other high-quality software renderer could be used for final output.

Comments, anyone?

John B

Posted: Sat Nov 06, 2004 4:54 pm
by z3r0_d
GPU-accelerated rendering has been discussed recently on the bf-funboard mailing list.

see
http://projects.blender.org/pipermail/b ... .html#2250

I think the conclusion was that it sounds nice, but given that graphics cards aren't guaranteed to produce similar output it wouldn't be useful on a render farm.

GPU-accelerated rendering for previews was not discussed, iirc.

Posted: Sat Nov 06, 2004 5:09 pm
by johnb
Ok, thanks. I'll read through that then.

John B

Posted: Mon Nov 08, 2004 12:04 am
by joeri
z3r0_d wrote: I think the conclusion was that it sounds nice, but given that graphics cards aren't guaranteed to produce similar output it wouldn't be useful on a render farm.
Why use a render farm for (almost) realtime rendering?
Are you making a movie of 36 hours that needs to be done in 24 ???

Posted: Mon Nov 08, 2004 12:22 am
by z3r0_d
Even using hardware acceleration, the rendering isn't almost-realtime.

if you looked at the pictures on one of the links mentioned, iirc the render times were about 12 seconds to 1.5 min.

blender can render stuff in 12 seconds, but not _that_ stuff

Wouldn't you want a render farm for rendering any kind of animation in general? A 3-minute animation would take over a day [37hr?] to render at just 30 seconds per frame, and it is difficult to make a sufficiently detailed render take less than 30 seconds in Blender, particularly if you turn on motion blur or render at a large [1024x768] resolution.
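
For reference, here is the arithmetic behind that estimate as a quick sketch (the 25 fps frame rate, the flat 30 seconds per frame and the 4-machine split are assumed example numbers, not measurements from any real scene):

# Back-of-the-envelope render-time estimate; frame rate and per-frame
# render time are assumptions, not measurements.
def total_render_seconds(length_minutes, fps=25, seconds_per_frame=30):
    frames = length_minutes * 60 * fps
    return frames * seconds_per_frame

seconds = total_render_seconds(3)                     # a 3-minute animation
print(seconds / 3600, "hours on one machine")         # -> 37.5 hours
print(seconds / 3600 / 4, "hours on four machines")   # -> ~9.4 hours

Splitting those frames across several machines is exactly where a render farm pays off, even at "only" 30 seconds per frame.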

Posted: Mon Nov 08, 2004 1:41 pm
by joeri
Hmmm, got a point there.
On the other hand, if I were to render at film resolution and could make 3 minutes of animation a day, I probably could afford 4 (or more) of the same graphics boards. That would make the render farm a reality, and the reason not to do it a bit silly.

Posted: Mon Nov 08, 2004 2:09 pm
by alt
I think the conclusion was that it would be nice, but it took six very skilled engineers working for two years to make something like that inside NVidia. And it is not realtime, not even near. The speed boost they got with their first version (2 years of development) was twice the speed of plain CPU rendering. Some scenes rendered faster and others were slower with GPU acceleration.

Making something like that in Blender would therefore take a lot of time to reverse-engineer a specific card, and after that some years to code acceleration for just that card. And this would need to be done for every chipset, because they are not the same beast on the inside.

But cool it is, yes.

Outputting bad-looking OpenGL previews to scare the children is a lot easier. But maybe not wanted.

Posted: Mon Nov 08, 2004 3:44 pm
by joeri
alt wrote:Outputting bad-looking OpenGL previews to scare the children is a lot easier. But maybe not wanted.
It's very handy for getting the timing right on large scenes. I use it all the time.

Posted: Mon Nov 08, 2004 4:15 pm
by johnb
z3r0_d wrote:if you looked at the pictures on one of the links mentioned, iirc the render times were about 12 seconds to 1.5 min.
If you're referring to Parthenon, you have to take into account that those times are for hardware-accelerated global illumination rendering (I believe they're performing hardware-assisted photon mapping). Normal Blender renders don't do global illumination - for that you need radiosity or raytracing (which Blender can do, of course, but it isn't the default).
If you were to use hardware acceleration to produce images of similar quality to Blender's normal renderer (somewhat lower quality, since some features would be difficult or impossible to recreate in hardware, and hardware doesn't follow detailed enough standards to match the pixel-perfect output of a software renderer), then the rendering speed would, I believe (although I don't have stats to back this up), be much closer to realtime. Given the complexity of the scenes, and the extensive use of complex pixel shaders needed to approach the quality of Blender's standard output, the renders would not be realtime - but they would be much closer than the current software renderer. I think they would be of passable (though not pixel-perfect) quality, and fast enough to be very useful as feedback while a scene or animation is being worked on - more useful than the current previews, which can only really be used to check that things are positioned correctly; to check materials and lighting you have to render.

John B

Posted: Mon Nov 08, 2004 5:06 pm
by hxa7241
I am working on an OpenGL GPU-accelerated global illumination renderer right now, and fitting in with Blender is a principal consideration.

The architecture (a not particularly original rearrangement of things) is just about complete, and back-of-the-envelope estimates are very encouraging. ...but only running code will prove it for sure, of course.

A lot of the bf-funboard mailing list comments seemed overly concerned with standards of graphics card capability -- but my first version targets just basic OpenGL, which should provide plenty and run on practically anything.


Btw, I recently read a neat tech paper (unpublished) that describes a technique for real-time -- yes, 40 fps -- global illumination, although it's somewhat limited and only really a demo. I am not sure how or whether its ideas can be useful yet.


...I am beginning to wonder if Monte Carlo algorithms will be obsolete in a year or two, and photon mapping sidelined to a feature technique.


hxa7241

Posted: Tue Nov 09, 2004 11:12 pm
by konrad_ha
The idea of GPU rendering for previews was discussed on the mailing list, but no conclusion was reached.

For my part, I am absolutely in favour of enhancing the preview renderings through more sophisticated use of standard OpenGL. The current ones are so butt-ugly I can't really show them to my clients. Every animator will surely understand the importance of a) quick and b) watchable previews. I am still convinced this simple goal isn't far out of reach, if only the current OpenGL output were enhanced.

Posted: Wed Nov 10, 2004 12:39 am
by johnb
konrad_ha wrote:The idea of GPU rendering for previews was discussed on the mailing list, but no conclusion was reached.

For my part, I am absolutely in favour of enhancing the preview renderings through more sophisticated use of standard OpenGL. The current ones are so butt-ugly I can't really show them to my clients. Every animator will surely understand the importance of a) quick and b) watchable previews. I am still convinced this simple goal isn't far out of reach, if only the current OpenGL output were enhanced.
I would agree with that completely; I think it's a reasonable target to aim for. Your definition of quick may vary, though: it may not be possible to make it fast enough to allow for the kind of interactive view during editing that the current system provides, but I believe it should be possible to make it fast enough to be very useful.

John B

Posted: Mon Nov 22, 2004 5:32 pm
by konrad_ha
johnb wrote:it may not be possible to make it fast enough to allow for the kind of interactive view during editing that the current system provides
When my scenes reach a certain complexity they are normally far out of reach of realtime display. But with rendering times of about 5 sec per frame I can easily let a scene render to files, get a cappuccino in the meantime and then watch it. The current OpenGL preview renderer does a fast job, but the quality is just too low. If at least all lights and light colours were taken into account it might just get good enough.

Posted: Wed Jan 05, 2005 12:16 am
by mpan3
A fully GPU-based renderer would be REALLY nice, but I know it's going to take A LOT of work, so maybe we can improve the current "shaded view" mode so it includes textures and per-pixel lights, maybe even some sort of simple shadow map / stencil shadows?
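
To make "per-pixel lights" a bit more concrete, here is a minimal CPU-side sketch of the idea using NumPy, purely for illustration -- a real version would run in an OpenGL fragment shader, and the light position, colour and geometry here are made-up example values:

# Per-pixel diffuse (Lambert) lighting for a flat quad, done with NumPy
# only to illustrate what a per-pixel lighting pass computes.
import numpy as np

H, W = 64, 64
xs, ys = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
positions = np.dstack([xs, ys, np.zeros_like(xs)])   # quad facing +Z
normal = np.array([0.0, 0.0, 1.0])                   # same normal everywhere

light_pos = np.array([0.5, 0.5, 1.0])                # assumed point light
light_col = np.array([1.0, 0.9, 0.8])                # assumed light colour

to_light = light_pos - positions                     # per-pixel light vector
dist = np.linalg.norm(to_light, axis=2, keepdims=True)
to_light /= dist

# The Lambert term N.L, evaluated per pixel instead of per vertex.
n_dot_l = np.clip((to_light * normal).sum(axis=2, keepdims=True), 0.0, 1.0)
image = np.clip(n_dot_l * light_col / dist**2, 0.0, 1.0)
print(image.shape)   # (64, 64, 3): one shaded RGB value per pixel

Evaluating that same term in a fragment shader for every lamp in the scene, plus a simple shadow-map test, is roughly what an upgraded shaded view would amount to.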

Posted: Thu Jan 20, 2005 1:42 am
by Money_YaY!
hxa7241 wrote:I am working on an OpenGL GPU-accelerated global illumination renderer right now, and fitting in with Blender is a principal consideration.

The architecture (a not particularly original rearrangement of things) is just about complete, and back-of-the-envelope estimates are very encouraging. ...but only running code will prove it for sure, of course.

A lot of the bf-funboard mailing list comments seemed overly concerned with standards of graphics card capability -- but my first version targets just basic OpenGL, which should provide plenty and run on practically anything.


Btw, I recently read a neat tech paper (unpublished) that describes a technique for real-time -- yes, 40 fps -- global illumination, although it's somewhat limited and only really a demo. I am not sure how or whether its ideas can be useful yet.


...I am beginning to wonder if Monte Carlo algorithms will be obsolete in a year or two, and photon mapping sidelined to a feature technique.


hxa7241
Are you still working on this??? It would be sweet if you are.