What's going to become of the renderer?

Blender's renderer and external renderer export


kflich
Posts: 31
Joined: Wed Oct 16, 2002 11:53 am
Location: israel

Postby kflich » Fri Oct 18, 2002 5:07 pm

Check out http://www.3dengines.de/ - it's a list of thousands of render engines (realtime and offline), most of them free (although somewhat old). Maybe one of them (or more) could be integrated into Blender.

Jan_Jordan
Posts: 4
Joined: Tue Oct 22, 2002 10:54 am

Postby Jan_Jordan » Wed Oct 23, 2002 3:18 pm

Some thoughts:

As an artist I must say that ray tracing really isn't the "holy cow" for doing renderings. In fact it is only useful for a few limited things like refractions. A lot of high-class renderers don't use raytracing at all, or only on a per-material basis. Shadows are usually done equally well or better with shadow maps. I would like to see improved global illumination support, not on a mesh basis. This could also be done with a scanline renderer.

Rendering with an export script and an external renderer has some drawbacks, especially if you render an animation. The exporting, parsing and building of a rendering data structure take a lot of time, and because you are using two separate programs you need double the amount of resources to represent the scene in memory.

Using a shading language does not mean that the Material dialog has to be more complicated. You would have a standard material shader which exactly mimics the current material, and advanced users would be able to script additional shaders. IMHO this has to be done with a text editor; a visual approach would be either too limited or a complete product by itself.
In addition, you would either describe the UI of the shader dialog in a separate file, or Blender would parse the shader and create a UI based on the parameters of the shading function. Either way, the end user of the shader would be able to use it without knowing how to program one.
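Just to make the idea concrete, here is a rough C sketch of what such parameter-driven UI generation could look like (all types and names are invented for illustration; nothing like this exists in Blender):

Code:

/* Hypothetical sketch: auto-building a UI from a shader's parameter list. */
#include <stdio.h>

typedef enum { PARAM_FLOAT, PARAM_COLOR } ParamType;

typedef struct {
    const char *name;  /* parameter name parsed from the shader source */
    ParamType   type;  /* widget type deduced from the declaration */
    float       value; /* default from the shader, editable by the user */
} ShaderParam;

/* Pretend these came from parsing "surface plastic(float Ks = 0.5; ...)". */
static ShaderParam params[] = {
    { "Ks",        PARAM_FLOAT, 0.5f },
    { "roughness", PARAM_FLOAT, 0.1f },
};

/* A real implementation would create sliders or colour pickers here;
   printing stands in for widget creation. */
static void build_shader_ui(const ShaderParam *p, int n)
{
    for (int i = 0; i < n; i++)
        printf("slider: %s = %.2f\n", p[i].name, p[i].value);
}

int main(void)
{
    build_shader_ui(params, 2);
    return 0;
}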

I really would like to see the standard renderer improved. I like BMRT, POV and the like, but for animations they are simply too slow. Please don't forget to think about animation support while improving the renderer.

rwenzlaff
Posts: 31
Joined: Wed Oct 23, 2002 2:26 pm

Re: plug-in renderer

Postby rwenzlaff » Wed Oct 23, 2002 3:27 pm

While this is a good general idea, remember that plugin renderers need to expose the same interface to the program calling them. RenderMan has already specified such an interface. I haven't looked deeply enough to see whether Blender could neatly provide its scene data to such an interface.

Otherwise, what you're really talking about is "on-the-fly" export. Rather than build 20 different render exporters into Blender, why don't we design a plug-in interface that always takes the data in the form Blender stores it in (linked lists) and exports it to whatever format an arbitrary renderer needs? So we'd have a BlenPOV.so and a BlenLightFlow.so (or .dll), and others could be added at will.

The trick is to guess _all_ the info that Blender might ever have to export to support _any_ renderer, and make sure it is available to the plug-in.
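To make that concrete, here is a rough sketch in C (with entirely made-up names; no such interface exists in Blender) of what each BlenPOV.so / BlenLightFlow.dll might export:

Code:

/* Hypothetical plug-in interface; every type and name here is invented. */
typedef struct BlendScene BlendScene;  /* Blender's internal scene data (linked lists) */

typedef struct {
    const char *name;                          /* e.g. "POV-Ray" */
    int (*export_scene)(const BlendScene *s,
                        const char *path);     /* translate scene to the renderer's format */
    int (*spawn_renderer)(const char *path,
                          const char *image);  /* run the external renderer on the export */
} RenderExportPlugin;

/* Each plug-in library would expose one well-known entry point
   that Blender looks up with dlsym()/GetProcAddress(): */
RenderExportPlugin *plugin_info(void);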

In addition to this, each object in Blender should have something added to its structure: a USERINFO block. This can be used to pass additional parameters to the render plug-ins (such as IOR and caustic settings).

Backwards compatibility may be an issue here. If we add to a struct, then previous revisions of Blender won't be able to parse the data. Maybe a completely different object could be set up for user data that references objects in a scene, kind of like parenting a text file to an object.
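Something like this, as a very rough sketch (hypothetical names; the fields are just the example settings mentioned above):

Code:

/* Hypothetical: renderer-specific settings kept in a separate block that
   references an object by name, so older Blender versions can simply
   ignore it instead of failing to parse a changed Object struct. */
typedef struct UserInfo {
    struct UserInfo *next;  /* linked list, like other Blender data */
    char  object_name[24];  /* which scene object these settings belong to */
    float ior;              /* e.g. index of refraction for the plug-in */
    int   caustics;         /* e.g. caustics on/off flag */
} UserInfo;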

Would it be possible for the plug-in to export additional controls to make editing the user data easier? I don't know; it might be as easy as including a function that returns a *ScrArea. I'll have to check this out.

Bob

MrMunkily
Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Wed Oct 23, 2002 10:47 pm

Jan_Jordan wrote: I would like to see improved global illumination support, not on a mesh basis. This could also be done with a scanline renderer.


By definition, photon/ray-based GI requires a raytracer. However, I just had an idea about how we could do hemicube radiosity/GI faster (and therefore better, since the quality could be upped)...

Using the CPU to render the hemicube is not ideal. If we could use the GPU to render the hemicube (per sample in the image), grab the image from the framebuffer, convert it to an illumination value based on the amount of 'lit' faces in the sampled hemicube, and then bake that into a texture to put on the geometry, that would be a way to use the hardware accelerator on the graphics card to make the calculations go a lot faster.
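Roughly, the readback step could look like this in C with plain OpenGL (glReadPixels is a real call; the scene-drawing function and the crude "count lit pixels" conversion are placeholders for the idea):

Code:

/* Sketch of grabbing one hemicube face from the framebuffer.
   Assumes a current GL context and a draw_scene_flat() that renders
   light sources in flat "lit" colours; both are placeholders. */
#include <GL/gl.h>

#define FACE 64  /* hemicube faces can be quite low-res */

extern void draw_scene_flat(void);  /* placeholder scene render */

static unsigned char buf[FACE * FACE * 3];

float sample_face(void)
{
    long lit = 0;

    draw_scene_flat();
    glReadPixels(0, 0, FACE, FACE, GL_RGB, GL_UNSIGNED_BYTE, buf);

    /* crude stand-in for "amount of lit pixels in the face" */
    for (long i = 0; i < FACE * FACE; i++)
        if (buf[i * 3] > 0)  /* red channel used as the "lit" flag */
            lit++;

    return (float)lit / (FACE * FACE);
}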

Am I nuts? Would this work? If it would, would it be better/faster than before? (I'm talking about the accelerator thing, not the texture thing; I know there are issues/advantages to that.)

Jan_Jordan
Posts: 4
Joined: Tue Oct 22, 2002 10:54 am

Postby Jan_Jordan » Thu Oct 24, 2002 1:38 pm

Yes, you are right, a lot of GI methods are raytracing-based. But even then I would like to see such things on a per-material basis.
With "not mesh based" I want to express that I want it to be transparent to me: I don't want to see the subdivision, I just want the object lit without (visible) modifications.

MrMunkily
Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Thu Oct 24, 2002 2:48 pm

Yes, it would be a good thing not to have to do the silly shooting step. It confused me quite a bit when I started with Blender. Ugh.

Anyway, I DID find some evidence that an OpenGL-accelerated radiosity solver does indeed work! No one has made it look nice yet, though; they need to take more samples.

A good card can do maybe 30 fps on a non-textured scene, or even more if it's simple. Converting that to lighting data would take a bit more time, and then putting it on a texture and mapping it to the surface... more time. Still, faster than most algorithms out there, I'd think. Now, what about quality?

Dani
Posts: 251
Joined: Fri Oct 18, 2002 8:35 pm

Postby Dani » Fri Oct 25, 2002 10:31 pm

Hello all of you.

Here are some ideas, or points:

1) As I said previously, I seriously believe the internal renderer could use some lifting, such as:
- true AA (I mean, not oversampling)
- speed-ups (it's already rather fast, but slows down quickly when you try motion blur...)
- quick and not-so-dirty vector-based motion blur (which has already been proposed)
- and, why not, selective raytracing

2) The plugin interface, whether it's C stuff or Python, should allow these sorts of things:
- Full access to all data types in Blender.
- The ability for the plug-in to add functionality seamlessly into Blender, such as area lights or stuff like that, if a VRay-like renderer for Blender were to be developed. I mean, a button called "AREA" would appear next to "LAMP-SPOT-SUN-HEMI" (see the sketch below).
See what I mean? There should be complete interaction between the needs of a plugin and what Blender offers.
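Here is a very rough C sketch of what I mean by a plug-in adding its own lamp button (everything here is hypothetical; Blender has no such hook today):

Code:

/* Hypothetical registration hook for a plug-in that adds a lamp type. */
typedef struct {
    const char *button_label;                 /* e.g. "AREA", next to LAMP/SPOT/SUN/HEMI */
    void (*draw_extra_buttons)(void *panel);  /* plug-in draws its own settings panel */
} LampTypePlugin;

/* Assumed core hook the plug-in would call at load time: */
extern int blender_register_lamp_type(const LampTypePlugin *lt);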


3) Hum, final point... what was it? Ermm... yeah, I remember!
Blender should try to detect all known renderers existing on a system (scanning the hard drives or the network for the executables, for example, or seeing whether they are declared in the system files, etc., and finally, if all of that fails, leaving the user the possibility to declare a renderer later on...). The database of renderers known to Blender could be implemented in Blender's core or added by plugins.


Let's move on to other stuff... Wavk is working on a great new interface for texturing. This could be the way to the shader talk there was earlier in this thread. MrMunkily: your idea about using the GPU might not be as crazy as it would have been four years ago : ). Could you give some explanation (a hard one or an easy one :) )?

Be Happy
Dani

MrMunkily
Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Sun Oct 27, 2002 2:58 am

OK, here we go. Very simplified:

Conventional hemicube radiosity determines the illumination at a point by rendering a fish-eye cubic view of the world around it. It basically finds out how much illumination the point receives by seeing which light sources are visible in the 'hemicube' rendering (and how much of each, and how bright each is). Now, the hemicube rendering need not be very detailed; it only has to be accurate enough to show which light sources are visible, and how much of each (how bright each is can be deduced from other information, I would assume). From this, an illumination value for that spot (whether it be a pixel, as in irradiance methods, or some finite patch, as in ugly classic radiosity) can be computed and applied.

Run over the whole image in this manner, the hemicube computation can take a hell of a long time. If we do it with OpenGL (and the video card) we can get the same framerate you might get in a game, or more, since for consistency's sake things like textures would probably be left out of the rendering. Different cards have different implementations of the more advanced stuff, but most of the basic rendering should be pretty consistent, I would think. Using OpenGL we might get, say, 100 samples per second, as opposed to, um, a lot less. Yeah. Of course, I have no idea whether this would actually work.
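For the "deduced from other information" part: classic hemicube radiosity weights each pixel of the rendered face by a precomputed "delta form factor". Here is a sketch of building that weight table for the top face of a unit hemicube, using the standard formula from the radiosity literature (resolution picked arbitrarily):

Code:

/* Delta form factors for the top face of a unit hemicube:
   dF = dA / (pi * (x^2 + y^2 + 1)^2), computed once and reused
   for every sample point. */
#include <math.h>

#define RES 64

static float weight[RES][RES];

void build_top_face_weights(void)
{
    const float da = (2.0f / RES) * (2.0f / RES);  /* pixel area on the face */

    for (int j = 0; j < RES; j++) {
        for (int i = 0; i < RES; i++) {
            float x = -1.0f + (i + 0.5f) * 2.0f / RES;
            float y = -1.0f + (j + 0.5f) * 2.0f / RES;
            float d = x * x + y * y + 1.0f;
            weight[j][i] = da / (float)(M_PI * d * d);
        }
    }
}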

Dani
Posts: 251
Joined: Fri Oct 18, 2002 8:35 pm

Postby Dani » Sun Oct 27, 2002 1:43 pm

Thanks for the explanation!

Hey! That reminds me of some tests I was doing to simulate radiosity-like color bleeding. I simply rendered envmaps and pushed the filter value up to 25 so they'd be blurred. It worked for some simple objects (spheres), but for more complex ones (humans) the boundary of the envmap appeared.
The biggest problem, though, was when trying to render multiple envmapped objects: they don't "see" the other objects' envmaps until those have been rendered.

Seems to be quite the same technique, no?
I hope you won't have the same problems I had. Current GeForces and Radeons, the Parhelia and others (Sabre) might allow such treatment through pixel shaders, and that would certainly speed up rendering times by offering, in a way, two processors for the rendering (GPU/VPU + CPU). But you must make sure that people who don't have such video cards can still use hemi...diosity, in software rather than hardware (with all the losses in terms of optimisation...).

Good Luck!
Dani


dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Postby dreamerv3 » Wed Oct 30, 2002 5:24 pm

It seems to me that one could use the vertex program processor on the GPU to accelerate a particle system composed of vertices. This particle system would be written to mimic the behaviour of photons: diffusion, bounce, colour inheritance and so forth.

The card would perform these simulations in real time for literally thousands, perhaps millions, of vertices. The data would then be stored in a buffer, either in main memory or on the GPU itself, and a pixel program or shader residing in the pixel/shading processor would read in this information and rasterize the scene based on the values the vertices/photons returned for that frame.
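As a CPU-side reference for what each "photon vertex" would do per step (a minimal sketch; the intersection, bounce and colour-inheritance logic a real vertex program would need are stubbed out):

Code:

/* One photon per vertex; a vertex program would run this body in
   parallel on the GPU, writing results to a buffer for the pixel
   stage to read back. */
typedef struct {
    float pos[3], dir[3];  /* photon position and travel direction */
    float rgb[3];          /* carried energy, tinted at each bounce */
} Photon;

void step_photons(Photon *p, int n, float dt)
{
    for (int i = 0; i < n; i++) {
        p[i].pos[0] += p[i].dir[0] * dt;  /* advance along the ray */
        p[i].pos[1] += p[i].dir[1] * dt;
        p[i].pos[2] += p[i].dir[2] * dt;
        /* intersection test, diffuse bounce and colour inheritance
           would go here before the result is stored for shading */
    }
}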

Like MrMunkily said, there are polygon-based radiosity solutions, but this would probably be more of a hybrid, or photon mapping with a new twist. I think that with the proper sampling settings and interpolation smoothing, some very interesting things could be done at that level.

It's a way to totally offload GI calculations from the main CPU and let the muscle of the GPU, which is optimised for this task, perform them.

This might take a few more fractions of a second to render, perhaps stretching to 3 seconds per frame, but just think of the possibilities.

Think GPU.

kniffo
Posts: 38
Joined: Tue Nov 12, 2002 8:08 am
Location: Denmark

which renderer?

Postby kniffo » Tue Nov 12, 2002 1:57 pm

Hello folks, this is my first posting :)

Well, you are asking which renderer to take. I am currently reading the book "Advanced RenderMan", so I like the idea of having a RenderMan-compatible renderer like BMRT. BMRT is a bit slow, not as fast as PRMan, but good nonetheless. The latest version can do HDRI, for instance.

Other renderers that are good are only available with high-end software, such as VRay or MentalRay. They come with those packages, but you cannot get an interface description for them.

Is Lightflow that good? I didn't see any pictures rendered with it except the ones on lightflowtech.com.


Regards,
Kniffo

MrMunkily
Posts: 79
Joined: Mon Oct 14, 2002 5:24 am

Postby MrMunkily » Tue Nov 12, 2002 8:27 pm

Lightflow is dead; development has ceased. BMRT is dead; development, support, and distribution have ceased.

Pablosbrain
Posts: 356
Joined: Wed Jan 28, 2004 7:39 pm

Postby Pablosbrain » Tue Nov 12, 2002 8:37 pm

I just want an easy way to export mesh/deform info as well as ALL texturing information to a RenderMan-compliant renderer... this would totally beat some of the high-end solutions that require extra software.

kniffo
Posts: 38
Joined: Tue Nov 12, 2002 8:08 am
Location: Denmark

Postby kniffo » Wed Nov 13, 2002 7:51 pm

MrMunkily wrote: Lightflow is dead; development has ceased. BMRT is dead; development, support, and distribution have ceased.


Yep, you're right. I've been to the Lightflow page, and BMRT has been down for a long time now...

Well, in my mind we could make a RenderMan-compatible interface. That would be a good choice.

All the other good renderers are closed to open development, and the open source projects are not as good as them. :(

So, what shall we do?

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Postby green » Thu Nov 14, 2002 9:40 am

kniffo wrote:
MrMunkily wrote: Lightflow is dead; development has ceased. BMRT is dead; development, support, and distribution have ceased.

Yep, you're right. I've been to the Lightflow page, and BMRT has been down for a long time now...

Well, in my mind we could make a RenderMan-compatible interface. That would be a good choice.

All the other good renderers are closed to open development, and the open source projects are not as good as them. :(

So, what shall we do?


www.3delight.com

