The Yafray Look

Blender's renderer and external renderer export


_florian_
Posts: 36
Joined: Wed Oct 16, 2002 10:17 am

Postby _florian_ » Thu May 22, 2003 3:32 am

Jamesk wrote:<edit>
I would help out in the actual implementation, but seriously - when I read stuff like that, particularly the math-bits, my eyes start to behave strangely. I think it's some sort of allergy... I go temporarily blind =)
</edit>

yeah!
try to understand women!
Blender source or math is easy by comparison. :twisted:
GIT d+ s:- a- C++ UL+++ P--- L+ E--- W+ N+ o-- K- w++ O-- M V--
PS+ PE Y+ PGP++ t+++ 5 X+++ R- tv++ b++ DI- D- G e+ h-- r- y++

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Postby dreamerv3 » Thu May 22, 2003 3:50 am

<sidenote>
You can't ever truly understand women; you can only adapt to them.

Blendersource and 3D math on the other hand are luxuriously constant, for the most part...

</sidenote>

I think the OpenGL 2 path is the best way to go, because in the time it would take yafray to render one "40-minute pretty frame" you could have 2,400 frames of OpenGL 2 output.

This is based on loading up an OpenGL 2 compliant card with so much geometry and lighting data that it actually slows down to about 1 fps. My take on this is: if you're slowing a 60+ megatransistor chip to 1 fps, then you'd better have a good lighting rig, shaders/high-res textures and enough polygons to make it sing "broadcast quality!".

Compare that to any other software renderer...
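
A quick sanity check on that arithmetic (my numbers are illustrative only - one hypothetical 40-minute software frame versus a card grinding along at 1 fps):

Code:
# Back-of-the-envelope: frames of 1 fps hardware output produced in
# the time one 40-minute software-rendered frame takes. Toy numbers.
software_frame_seconds = 40 * 60   # one "pretty" yafray frame
hardware_fps = 1.0                 # heavily loaded OpenGL 2 card

print(software_frame_seconds * hardware_fps)   # 2400.0 frames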

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Postby Jamesk » Thu May 22, 2003 1:23 pm

_florian_ wrote:try to understand women!


You're kidding, 'aight? It's not possible - everybody knows that... sheesh... :roll:

JA-forreal
Posts: 187
Joined: Sat Mar 22, 2003 10:45 pm

Postby JA-forreal » Thu May 22, 2003 11:27 pm

harkyman wrote:I've always been able to achieve more than satisfactory results from Blender's renderer. Is it prman? Of course not. Can it get better than most of the stuff you see in the WIP forum on elysiun? Absolutely. I think that 80% of Blender users just don't spend the time learning to push the renderer around. And maybe that's why I never really invested the time in tweaking things for Yafray - my work is quite happy where it is now.


Same here, harkyman. I don't know if you've seen some of the sample test renders that I posted here. Blender has a great renderer. But I still think something can be done to improve the way Blender's shadow maps interact with an object's surface. Softer and less even shadows would be a great improvement. I use halo settings to help soften Blender's hard shadows in some cases. But I'm looking forward to seeing an overall improvement in the shadow map area.

harkyman
Posts: 278
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Postby harkyman » Fri May 23, 2003 5:45 am

Yeah. I think that deep shadow maps (DSMs) are a must. I started reading up on them, and like someone on these forums said earlier, my eyes started going all buggy about two pages into the maths.

But give us DSMs, some good AA, and a plugin shader system like Cessen is working on, and I think we could see it start to kick some ass.

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Fri May 23, 2003 8:23 pm

Jamesk wrote:It's important to realize that the rendering engine can only do so much to improve the final result... How the scene is built, how the lights are rigged and how the textures look are far more important than what renderer is responsible for producing the output.


I agree completely. An analogy would be to say that more advanced paintbrushes and paint can only improve a painting so much. Granted, they can improve it, because they give the artist more flexibility, but a poor artist will not be able to make a good painting even with the most advanced brushes and paint, and a good artist will still be able to make decent paintings with a children's watercolor set.

Jamesk wrote:In my very humble opinion, the shortcomings of the current renderer can be fixed - without a total change of technology:


Some of the shortcomings are not in the renderer, though... more advanced modeling, texturing, and animation systems would be nice. :)

Jamesk wrote:A) AA filters: Currently we've got a box filter. That is (almost) the worst possible algorithm. Hack in support for Lanczos, Hamming, Catmull-Rom and Gaussian. Let the user choose which one to run. And increase the upper limit for OSA to 32 or maybe 64.


More AA filters would be nice, yes. But because of the way that Blender does rendering (and anti-aliasing), implementing them effectively could be extremely difficult and roundabout.
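
For anyone curious what those filters actually are, here's a minimal sketch of the 1-D kernels (my own Python sketch, not Blender's code); a renderer would weight each subsample by kernel(dx) * kernel(dy) and normalise, instead of the box filter's equal weights:

Code:
import math

# Minimal 1-D reconstruction kernels (sketch only, not Blender's code).
# x is the distance from the pixel centre, in pixels.

def box(x):
    return 1.0 if abs(x) <= 0.5 else 0.0

def gaussian(x, sigma=0.5):
    return math.exp(-(x * x) / (2.0 * sigma * sigma))

def lanczos(x, a=2):
    # sinc(x) * sinc(x/a), windowed to |x| < a
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def catmull_rom(x):
    # Cubic spline with support |x| < 2.
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

# Each subsample at offset (dx, dy) from the pixel centre gets weight
# kernel(dx) * kernel(dy), normalised over the pixel's subsamples.
for k in (box, gaussian, lanczos, catmull_rom):
    print(k.__name__, [round(k(d), 3) for d in (0.0, 0.5, 1.0, 1.5)])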

Jamesk wrote:B) Lamp toggles: Enable the user to select, for all lamptypes, if the lamp in question should or should not emit specularity.


Already in the latest tuhopuu.

Jamesk wrote:C) Depth of field: The Z-buffer is already there whenever an image is rendered. Use that for a Z-based gaussian blur, hardcoded into the rendering pipeline. The Z-blur sequence plugin can already do this, but it would be very nifty to have something similar in the pipeline by default.


Gaussian blur is not what you would want. As with AA filters, limiting yourself to one type of blur would not be good. Different camera lenses and irises give different types of blur. It would be nice to have the option of switching between them and tweaking their settings.
And, as a side note, I don't know of any lens type that gives gaussian-blur DOF. It can still look OK (so I'm not saying it shouldn't be an option), but it's not physically accurate.
Also, image-based DOF is extremely difficult to implement well. Quite frankly, the Z-blur sequence plugin isn't all that good (have you ever noticed the annoying artifacts it causes?). It seems, at first thought, like it's simple. But, in fact, it is *extremely* complex.
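
To illustrate the two halves of the problem: computing a per-pixel blur radius from the z-buffer is the easy part; blurring with a radius that changes per pixel is where the artifacts come from. A toy sketch using an assumed thin-lens model (not the Z-blur plugin's actual code):

Code:
import math

# Toy thin-lens circle of confusion from a z-buffer. All parameters
# are assumed for illustration, not Blender's camera settings.

def circle_of_confusion(z, focus_dist, focal_len, f_stop):
    """Blur radius (in focal_len units) for a point at depth z."""
    aperture = focal_len / f_stop
    return abs(aperture * focal_len * (z - focus_dist) /
               (z * (focus_dist - focal_len)))

# The easy half: one blur radius per pixel, straight from the z-buffer.
for z in (2.0, 4.0, 8.0, 16.0):                    # toy depth values
    print(z, round(circle_of_confusion(z, 4.0, 0.05, 2.8), 5))

# The hard half (not shown): blurred foreground objects must bleed
# over the sharp background behind them, but a naive per-pixel gather
# pulls sharp background pixels into the blur instead -- hence the
# halo artifacts at depth edges that the Z-blur plugin shows.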

Jamesk wrote:D) Selective raytracing: Whenever you need real reflection or refraction, it should be possible to raytrace those. I'm sure it could be done. Personally I think that environmentmaps are far more flexible when it comes to reflections, but there are times when they get you in trouble.


Yes, that would be nice. But I'm certainly not going to tackle that feature. ;-)
Also, there are a lot of little things that have to be taken into account when creating a hybrid rendering system (at least, if you want it to be efficient), and I'm not sure if anyone here (even myself) has enough experience and know-how to take on a project like that.

JamesK wrote:E) Texture preprocessor: Currently we can change the filterwidth for texture interpolation and mipmapping. This should be improved to support a wider range of filters, somewhat similar to the AA-filters mentioned above. It could also be useful to have access to other 2D-processors here, like gaussian blur for instance.


A very interesting and worthy idea. Generally speaking, though, texture pre-processing should be done in a 2D paint program before the texture is even brought into Blender. However, this could be very useful for procedural textures and environment maps (since they never go through a 2D paint program). And as far as texture filtering goes, it turns out that it isn't really very useful to have different filters for textures.
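
As a sketch of what such a pre-processing hook might do, here is a separable Gaussian pre-blur run over a procedural texture's pixels before sampling. The hook itself is entirely hypothetical - this is Jamesk's proposed feature, not anything in Blender:

Code:
import math

# Hypothetical texture pre-processing pass: a separable Gaussian blur
# applied to a (procedural) texture before the renderer samples it.

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    total = sum(k)
    return [w / total for w in k]

def blur_1d(row, kernel):
    r = len(kernel) // 2
    return [sum(w * row[min(max(i + j - r, 0), len(row) - 1)]
                for j, w in enumerate(kernel))       # clamped edges
            for i in range(len(row))]

def preprocess(texture, sigma=1.0):
    """Blur rows, then columns (texture: 2-D list of floats)."""
    kernel = gaussian_kernel(sigma)
    rows = [blur_1d(row, kernel) for row in texture]
    cols = [blur_1d(list(col), kernel) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

checker = [[float((x ^ y) & 1) for x in range(8)] for y in range(8)]
print([round(v, 2) for v in preprocess(checker)[0]])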

JamesK wrote:F) Output postprocessor: When an image/frame is rendered, there should be some way to pass it through a final set of 2D-processors. This could include, but not be limited to, level adjustment, hue, brightness, contrast, colorize, unsharp mask, saturation and so on. In short - ordinary 2D-post filters. All of these things are already available in several open source libraries, so the only real effort would be to code the "hook" that would grab the buffer and send it through these filters.


That's what the sequence plugins are for, Jamesk. :-)

Jamesk wrote:G) Deep shadowmaps: Shadowmap calculations should take opacity and optionally also color of geometry into account when creating the final shadowmap.


Implementing deep shadow maps would be a really major programming project. It is certainly possible to implement them in Blender; it's just that it would be a really major project.
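
For readers wondering what makes deep shadow maps different: each shadow-map texel stores a whole visibility function over depth instead of a single depth value. A toy sketch of the idea (assumed data layout, nowhere near a real implementation):

Code:
import bisect

# Toy deep shadow map texel: a visibility function over depth, stored
# as (depth, transmittance) samples. A classic depth map stores one
# depth per texel and can only answer "lit" or "shadowed"; this one
# can answer "30% shadowed by that hair strand". Surfaces must be
# added front to back in this toy version.

class DeepTexel:
    def __init__(self):
        self.samples = [(0.0, 1.0)]          # fully lit at the light

    def add_surface(self, depth, opacity):
        _, t = self.samples[-1]
        self.samples.append((depth, t * (1.0 - opacity)))

    def visibility(self, depth):
        """Fraction of light reaching this depth (0.0 = full shadow)."""
        depths = [d for d, _ in self.samples]
        return self.samples[bisect.bisect_right(depths, depth) - 1][1]

texel = DeepTexel()
texel.add_surface(2.0, opacity=0.3)   # semi-transparent (hair, smoke)
texel.add_surface(5.0, opacity=1.0)   # an opaque wall
print(texel.visibility(1.0))          # 1.0 - in front of everything
print(texel.visibility(3.0))          # 0.7 - behind the hair
print(texel.visibility(9.0))          # 0.0 - behind the wall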

Thanks for all the ideas, Jamesk! :-D

LethalSideParting
Posts: 83
Joined: Mon Oct 21, 2002 12:53 am
Location: Bucks, England

Postby LethalSideParting » Fri May 23, 2003 8:43 pm

Hiya guys. Sorry if my understanding of render engine terminology is a little dodgy - I'm still getting to grips with this stuff....

Whilst we're talking about ripping the renderer apart and putting it back together in interesting new ways, I think we might need to have a look at the sampler, to see if maybe there's a way that any algorithms used there can be improved. I don't know if any of you guys have noticed this, but if you render a starfield with small stars (e.g. you're going for a realistic look, instead of a 1960s sci-fi sort of look ;) ) and the camera moves at all, the stars tend to blink in a very unrealistic way, as though they're disappearing and then reappearing.

I know it's not the range that I've set for the stars, I've checked that, so I'm guessing that the problem could be to do with the stars slipping between samples, maybe? If that's the case, then I think we've found an algorithm that definitely needs changing ;) .
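
That guess is easy to demonstrate: a star narrower than the sample spacing is only picked up when a sample happens to land on it, so it pops in and out as it drifts. A toy 1-D illustration with made-up numbers:

Code:
# Toy 1-D illustration: a "star" 0.1 pixels wide, point-sampled once
# per pixel (at the pixel centre) while it drifts across the screen.

STAR_WIDTH = 0.1                       # sub-pixel star size, in pixels

def sampled_row(star_pos, num_pixels=8):
    return [1 if abs((p + 0.5) - star_pos) < STAR_WIDTH / 2 else 0
            for p in range(num_pixels)]

for frame in range(6):
    star_pos = 2.3 + frame * 0.08      # slow camera drift
    print("frame", frame, sampled_row(star_pos))
# The star is only "hit" on frames 2 and 3 -- it blinks in and out
# exactly as described above.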

Anyway, just trying to contribute something.
LethalSideParting

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Postby Jamesk » Fri May 23, 2003 11:07 pm

That's quite a truckload of quotes, cessen. I must perform a counterstrike... =)

cessen wrote:a poor artist will not be able to make a good painting even with the most advanced brushes and paint, and a good artist will still be able to make okay paintings with a childrens water-color set.

Aye. This is an area that leaves a lot of people in the dark, for some reason I cannot grasp. It has been said many times before, but it doesn't hurt to repeat it.

cessen wrote:Some of the shortcomings are not in the renderer, though... more advanced modeling, texturing, and animation systems would be nice. :)

Most certainly. Even though it may seem like a small fart in deep space, I have to say that the IPO overshoot problem you were about to solve would be one of those really great additions. Have you committed some code for that stuff in Tuhopuu?

cessen wrote:More AA filters would be nice, yes. But because of the way that blender does rendering (and anti-aliasing), implimenting them effectively could be extremely difficult and round-and-about.

Well, I'm sure you know what you're talking about here (I certainly don't)... It's too bad, though. I've only got a hunch about this, but I wouldn't be surprised if the antialiasing routines are more or less "inlined" in the pipeline, making it difficult to hook new stuff into it.

cessen (on lamptoggles for specularity) wrote:Already in the latest tuhopuu.

I just saw those yesterday, in the May 22-build. I'm so happy! Don't tell me you made those too?

cessen (on depth of field) wrote:Gaussian blur is not what you would want. As with AA filters, limiting yourself to one type of blur would not be good. Different camera lenses and irises give different types of blur. It would be nice to have the option of switching between them and tweaking their settings.

Very true. I found myself looking at the backgrounds of an entire movie the other night (I don't remember which one, since I was concentrating on the look of the blurred backgrounds instead of the plot) - and depth of field doesn't look one bit like gaussian blur. Depending on the lens one will see various artifacts; most often, bright lights bleed into a sort of pentagonal shape...

cessen wrote:And, as a side note, I don't know of any lense type that gives gaussian blur DOF. It can still look ok (so I'm not saying that it shouldn't be an option), but it's not physically accurate.

I'm really not too hot on "physical accuracy". When I was playing with VirtuaLight I realized that accurate DOF is something to avoid (unless there's a really big renderfarm hiding in the basement, of course)...
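
For context on why "accurate" DOF costs so much: the physically based approach averages many rays per pixel, each shot through a different point on the lens aperture, so render time scales with the sample count. A sketch of that sampling loop (illustrative only, not VirtuaLight's code):

Code:
import math, random

# Sketch of physically based DOF: each pixel is averaged over many
# rays, each traced through a different point on the lens aperture.
# The whole frame therefore costs 'samples' times a pinhole render.

def sample_disc(radius):
    """Uniform random point on a circular aperture."""
    r = radius * math.sqrt(random.random())
    theta = 2.0 * math.pi * random.random()
    return r * math.cos(theta), r * math.sin(theta)

def render_pixel(trace, lens_radius, samples=64):
    # 'trace' stands in for a full ray cast from a lens point.
    return sum(trace(*sample_disc(lens_radius))
               for _ in range(samples)) / samples

# Toy stand-in scene: brightness varies across the aperture, like an
# out-of-focus highlight smeared over the lens.
print(render_pixel(lambda lx, ly: 0.5 + 0.5 * math.cos(40.0 * lx), 0.1))

Swapping the disc in sample_disc for a five-sided polygon is what would give the pentagonal highlight bleed mentioned above - the shape of the blur is literally the shape of the iris.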

cessen wrote: - - - the Z-blur sequence plugin isn't all that good (have you ever noticed the annoying artifacts that it causes?). It seems, at first thought, like it's simple. But, infact, it is *extremely* complex.

For some strange reason I've never been exposed to those artifacts. I'm sure they'll hit me in the face some day... However, I don't use it much - I use other tricks for the short I'm making now (just plain old compositing manoeuvres). And as far as complexity goes - I can only assume it's very tricky stuff. At a glance it might appear easy, using the z-channel as a mask, but as I keep thinking about it I see nothing but problems =)

cessen (on selective raytracing) wrote:Yes, that would be nice. But I'm certainly not going to tackle that feature. ;-)

I hardly would have expected it... That was just a wish from la-la-land. Implementing that would probably mean total rewrite or total change of underwear...

cessen (about texture preprocessing) wrote: A very interesting and worthy idea. Generally speaking, though, texture pre-processing should be done in a 2D paint-program before the texture is even brought into Blender. However, this could be very useful for procedural textures and environment maps (since they never go through a 2d paint-program).

I thought it would be an interesting thing - and maybe not totally impossible to implement. My thoughts were to apply this to environment maps (as you said) but also ordinary textures. It could be VERY useful, particularly if the filter parameters could be animated.

cessen (on output postprocessing) wrote: That's what the sequence plugins are for, Jamesk. :-)

Of course. Silly me! =) How about a sequence plugin that is an interface to ImageMagick then? I seem to remember someone mentioning some sort of subproject concerning IM, but that it was discontinued due to some problems concerning the cross-platformness of it... Could an old lame Java-coder like me write such a plugin?

cessen (on deep shadowmaps) wrote:Implimenting deep shadow maps would be a really major programming project. It is certainly possible to impliment it in Blender, it's just that it would be a really major project.

Hmm... I guess the current shadowmaps are grayscale-based? So the color would be a major problem, meaning a total rewrite of all procedures dealing with shadowmap calculations. Color aside - is just getting opacity bias into the maps equally tricky? Because the maps are built at an entirely different, geometry-only level?

cessen wrote:Thanks for all the ideas, Jamesk!

You're welcome. I only wish I was more confident with the math and c-programming, then I could actually contribute more than just ideas...

cekuhnen
Posts: 303
Joined: Mon Jan 13, 2003 11:04 pm

...

Postby cekuhnen » Sat May 24, 2003 2:58 am

I can understand people asking for better render results from Blender's internal system, but to me this would be a waste of resources.
yafray and 3delight have render qualities Blender's system could never reach.

Good results take longer - period.

A fast preview with Blender is fine, but for real photorealistic results I would use a REYES-based, industry-standard system.

Or did anybody see Pixar movies rendered with a low-end app?

I would be happy if, inside Blender, normal lamps could also cast shadows, and not only spotlights.

I would put more work into the modeling tools, since they need some facelifting as well.

eicke

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Re: ...

Postby Jamesk » Sat May 24, 2003 10:35 am

cekuhnen wrote:Good results take longer - period. - - - Or did anybody see Pixar movies rendered with a low-end app?


Nope. Nor did I see them being rendered on a single workstation, which is what I and the majority of 3D users have access to.

YafRay and RenderMan-compliant renderers are excellent choices for users mainly interested in making stills. Animators, however, will benefit greatly from any improvements in the internal renderer.

Green is making some very impressive progress with the renderman integration. And perhaps someone else will soon do something similar for YafRay.
This is very good.
But it doesn't mean that the internal renderer should be left behind to die, because it will still be used for animation by anyone short of a decent renderfarm. Period.

cekuhnen
Posts: 303
Joined: Mon Jan 13, 2003 11:04 pm

...

Postby cekuhnen » Sun May 25, 2003 12:16 am

Well, I do not mean to let it die. But your point is not 100% correct. Even for animations you need a good engine to produce the results. Stills and animations both need a good base for good results.

I somehow like Blender's particle system, but I do not know how it would be included into yafray? For RenderMan that's pretty simple.

If you include raytracing in Blender, then this will slow down the internal system as well. So where is the point in putting in features which will slow it down, when you could use an external system which gives you good results and is fast with raytrace technology?

3delight has an ambient occlusion system which gives pretty good global illumination, and fast.
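
For readers unfamiliar with the technique, ambient occlusion shades a point by the fraction of hemisphere directions that reach the sky unblocked, instead of tracing full multi-bounce global illumination. A toy sketch of the idea (not 3delight's implementation):

Code:
import math, random

# Sketch of the ambient occlusion idea: shade a point by how "open"
# its hemisphere is, instead of tracing full global illumination.

def random_hemisphere_dir():
    """Uniform direction on the hemisphere around surface normal +Z."""
    z = random.random()                  # cos(theta) uniform in [0, 1)
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def ambient_occlusion(is_blocked, samples=256):
    open_dirs = sum(0 if is_blocked(random_hemisphere_dir()) else 1
                    for _ in range(samples))
    return open_dirs / samples     # 1.0 = fully open, 0.0 = buried

# Toy occluder: everything below 30 degrees above the horizon is
# blocked, as if the point sat in a shallow pit. Expect about 0.5.
print(ambient_occlusion(lambda d: d[2] < math.sin(math.radians(30))))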

Well, if you use the internal system for cartoon animations it will be useful, but Blender's material system is not one of the best, and nothing can beat RenderMan shaders. Self-written cartoon shaders are better.

I know this is not the easy way, but even without a renderfarm you can make good animations fast.
Also, 3delight renders very fast, with true displacements and 100% smooth surfaces that Blender could only produce with a high polycount, which would then slow down rendertime. And true displacements for skin textures beat all fake systems.

As I say, I do not want to see it die, but I would not spend too many resources on it at the moment. I use it often for my product renderings, since the output is quite fast, but the final rendering is done in 3delight.

I would rather spend time on the tools, since there are good render engines out there, and there will be time to enhance the internal system later as well!

eicke

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Postby Jamesk » Sun May 25, 2003 1:23 am

Well, it's a good thing that Renderman communications will be well integrated soon then. I would guess Green is more than 50% done by now.

Nevertheless, the neat thing about open source is that anyone can work on any area they feel like improving, even if that means improving the internal renderer. It cannot possibly be a waste of resources, as you put it, since there really are no resources allocated for anything at all - just a bunch of voluntary efforts that can be many things, but absolutely not a waste. Calling them that is an insult to those who research and implement these features.

If there are particular tools you would like to see in Blender soon, either add them yourself or start "lobbying" like I do... =)

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Sun May 25, 2003 8:40 am

The internal Blender renderer is based on an old rendering architecture, and is thus limited in a lot of ways. However, there are still ways in which it can be improved without changing the rendering architecture itself.

In my case, I don't really feel like bothering with external renderers at the moment, and thus I am trying to improve Blender's internal renderer by means of adding a shading system and other such features.

RenderMan shaders are very nifty, but they are horribly organized in terms of the types of shaders and how they work together. For instance, there is no distinction between texture maps and BRDFs ("materials") in RenderMan shaders. And I have no idea why they consider geometric displacement to be a shading concept. Bah... that's my rant for the day.

I've always had this sort of love/hate relationship with the RenderMan Interface Spec'. On the one hand, it has a lot of neat--and important--concepts in it. On the other hand, it's old enough that it's getting very patchwork-like... and if there's one thing I hate, it's patchy standards/programs. I have a whole theory about patchwork programs and standards, which I won't go into in detail right now. But the basic concept is that in order to avoid self-inconsistencies and general disorder, a program/standard has to be rewritten from scratch every once in a while.

In the end, what I'm really looking forward to is Blender 3.0. Everything can be thought through and re-done, including rendering.

cessen
Posts: 156
Joined: Tue Oct 15, 2002 11:43 pm

Postby cessen » Sun May 25, 2003 8:45 am

cekuhnen wrote:Self-written cartoon shaders are better.


The material plugin system I am working on for Blender will allow people to write their own arbitrary BRDFs, including custom toon shaders. The two new material types (Blinn and Toon) were actually just things that I added to get familiar with Blender's material code. The bigger project is to have a full-fledged material ("surface shader") plugin system.
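
To give a feel for what such a material plugin might compute, here is a minimal toon BRDF sketch; the function name and interface are made up, since the actual plugin API is still being designed:

Code:
# Minimal toon-BRDF sketch: quantise the Lambert term into flat bands.
# The function name and interface are hypothetical, not the plugin API.

def toon_brdf(n_dot_l, bands=3):
    """Map cos(theta) = N.L onto a few flat colour bands."""
    if n_dot_l <= 0.0:
        return 0.0                      # facing away from the light
    band = min(int(n_dot_l * bands), bands - 1)
    return (band + 1) / bands           # snap to the top of the band

for ndl in (0.05, 0.35, 0.65, 0.95):
    print(ndl, round(toon_brdf(ndl), 3))   # 0.333, 0.667, 0.667, 1.0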

green
Posts: 81
Joined: Sun Oct 13, 2002 8:04 pm

Re: ...

Postby green » Sun May 25, 2003 9:39 am

Jamesk wrote: - - - Animators, however, will benefit greatly from any improvements in the internal renderer. - - - Green is making some very impressive progress with the renderman integration.



Actually... the RenderMan integration is mainly aimed at animation. I don't think the added quality for still renderings will be all that impressive.

