Jamesk wrote:It's important to realize that the rendering engine can only do so much to improve the final result... How the scene is built, how the lights are rigged and how the textures look are far more important than what renderer is responsible for producing the output.
I agree completely. An analogy would be to say that more advanced paintbrushes and paint can only improve the painting so much. Granted, they can improve the painting, because they give the artist more flexibility, but a poor artist will not be able to make a good painting even with the most advanced brushes and paint, and a good artist will still be able to make okay paintings with a children's watercolor set.
Jamesk wrote:In my very humble opinion, the shortcomings of the current renderer can be fixed - without a total change of technology:
Some of the shortcomings are not in the renderer, though... more advanced modeling, texturing, and animation systems would be nice.
Jamesk wrote:A) AA filters: Currently we've got a boxfilter. That is (almost) the worst possible algorithm. Hack in support for Lanczos, Hamming, Catmul-Rom and Gaussian. Let the user choose which one to run. And increase the upper limit for OSA to 32 or maybe 64.
More AA filters would be nice, yes. But because of the way that Blender does rendering (and anti-aliasing), implementing them effectively could be extremely difficult and roundabout.
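Just to illustrate what we're talking about: the filters Jamesk lists are really just weighting functions applied to the samples around each pixel. Here's a rough sketch in Python of a few 1-D kernels (the names and parameter defaults are my own choices, not anything from Blender's code):

```python
import math

# Rough 1-D reconstruction kernels.  A renderer weights each sample by
# kernel(distance_from_pixel_center) and normalizes by the total weight.

def box(x, width=0.5):
    """Box filter: equal weight inside the support, zero outside."""
    return 1.0 if abs(x) <= width else 0.0

def gaussian(x, sigma=0.5):
    """Gaussian filter: smooth falloff, tends to give slightly soft images."""
    return math.exp(-(x * x) / (2.0 * sigma * sigma))

def lanczos(x, a=3):
    """Lanczos filter: windowed sinc, generally sharper than Gaussian."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

The math is the easy part; the hard part is that Blender's pipeline would have to keep (or at least jitter) subpixel samples in a way that lets an arbitrary kernel see them.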
Jamesk wrote:B) Lamp toggles: Enable the user to select, for all lamptypes, if the lamp in question should or should not emit specularity.
Already in the latest tuhopuu.
Jamesk wrote:C) Depth of field: The Z-buffer is already there whenever an image is rendered. Use that for a Z-based gaussian blur, hardcoded into the rendering pipeline. The Z-blur sequence plugin can already do this, but it would be very nifty to have something similar in the pipeline by default.
Gaussian blur is not what you would want. As with AA filters, limiting yourself to one type of blur would not be good. Different camera lenses and irises give different types of blur. It would be nice to have the option of switching between them and tweaking their settings.
And, as a side note, I don't know of any lens type that gives Gaussian-blur DOF. It can still look okay (so I'm not saying that it shouldn't be an option), but it's not physically accurate.
Also, image-based DOF is extremely difficult to implement well. Quite frankly, the Z-blur sequence plugin isn't all that good (have you ever noticed the annoying artifacts that it causes?). It seems, at first thought, like it's simple. But, in fact, it is *extremely* complex.
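To show why: here's a minimal sketch of naive Z-based blur on a 1-D "scanline", assuming a crude circle-of-confusion proportional to |z - focal_z| (my own toy formula, not the plugin's). This is roughly the kind of post-process a Z-blur does, and it demonstrates where the artifacts come from: once you're working on a flat image, occlusion information is gone, so blurred pixels happily average in colors from surfaces at completely different depths.

```python
# Toy 1-D Z-based blur.  colors and depths are parallel lists.

def zblur_scanline(colors, depths, focal_z, scale=2.0):
    out = []
    for i, z in enumerate(depths):
        radius = int(scale * abs(z - focal_z))   # crude circle of confusion
        lo, hi = max(0, i - radius), min(len(colors), i + radius + 1)
        window = colors[lo:hi]
        out.append(sum(window) / len(window))    # box average, for brevity
    return out

# A bright in-focus object in front of a dark out-of-focus background:
# the background blur bleeds the foreground color across the edge.
blurred = zblur_scanline([1.0] * 4 + [0.0] * 4,
                         [0.0] * 4 + [5.0] * 4, focal_z=0.0)
```

Getting rid of that edge bleeding (and handling partially occluded geometry at all) is what makes a good image-based DOF so hard.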
Jamesk wrote:D) Selective raytracing: Whenever you need real reflection or refraction, it should be possible to raytrace those. I'm sure it could be done. Personally I think that environmentmaps are far more flexible when it comes to reflections, but there are times when they get you in trouble.
Yes, that would be nice. But I'm certainly not going to tackle that feature.
Also, there are a lot of little things that have to be taken into account when creating a hybrid rendering system (at least, if you want it to be efficient), and I'm not sure if anyone here (even myself) has enough experience and know-how to take on a project like that.
JamesK wrote:E) Texture preprocessor: Currently we can change the filterwidth for texture interpolation and mipmapping. This should be improved to support a wider range of filters, somewhat similar to the AA-filters mentioned above. It could also be useful to have access to other 2D-processors here, like gaussian blur for instance.
A very interesting and worthy idea. Generally speaking, though, texture pre-processing should be done in a 2D paint program before the texture is even brought into Blender. However, this could be very useful for procedural textures and environment maps (since they never go through a 2D paint program). And as far as texture filtering goes, it turns out that having a range of different filters for textures isn't really very useful in practice.
JamesK wrote:F) Output postprocessor: When an image/frame is rendered, there should be some way to pass it through a final set of 2D-processors. This could include, but not be limited to, level adjustment, hue, brightness, contrast, colorize, unsharp mask, saturation and so on. In short - ordinary 2D-post filters. All of these things are already available in several open source libraries, so the only real effort would be to code the "hook" that would grab the buffer and send it through these filters.
That's what the sequence plugins are for, Jamesk.
Jamesk wrote:G) Deep shadowmaps: Shadowmap calculations should take opacity and optionally also color of geometry into account when creating the final shadowmap.
Implementing deep shadow maps would be a really major programming project. It is certainly possible to implement them in Blender; it's just that it would be a lot of work.
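For anyone curious, the core idea is that each shadow-map texel stores a visibility *function* of depth instead of a single depth value, so semi-transparent occluders dim the light gradually rather than cutting it off. A toy sketch of the lookup, assuming each texel holds (depth, visibility) pairs sorted by depth (a simplified stand-in for the real compressed representation):

```python
# Toy deep-shadow-map texel lookup: step function over stored samples.

def visibility(samples, z):
    """samples: list of (depth, visibility) pairs sorted by depth.
    Returns the fraction of light reaching depth z."""
    vis = 1.0
    for depth, v in samples:
        if depth > z:
            break
        vis = v
    return vis

# One texel behind two 50%-opaque surfaces at depths 1.0 and 2.0:
texel = [(1.0, 0.5), (2.0, 0.25)]
```

The lookup itself is trivial; the major work is generating, compressing, and filtering those per-texel functions efficiently during the shadow pass, which is why it's such a big project.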
Thanks for all the ideas, Jamesk!