Feature Request: SuperSampling AA

Blender's renderer and external renderer export

Moderators: jesterKing, stiv

Caronte
Posts: 76
Joined: Wed Oct 16, 2002 12:53 am
Location: Valencia-Spain-Europe

Post by Caronte » Fri Jul 14, 2006 7:19 pm

M@dcow wrote:I'm tired of people telling me that it's because I'm doing something wrong, because I'm not. I know this software like the back of my hand, and I know that there isn't a magic combination of buttons I can press to get it working right. It's simply faulty. Period.
I second every word of this post :(
Caronte.
"Some Day, All Will Be Digital"
http://www.nicodigital.com

mpan3
Posts: 0
Joined: Wed Mar 24, 2004 7:16 pm

Post by mpan3 » Fri Jul 14, 2006 8:24 pm

Are we still talking about texture sampling/filtering issues, and more specifically the "disappearance-of-detail-at-distance-and-small-incident-angles" problem? Well, Toon_Scheur said exactly what I was going to say, only better. Then M@dcow comes along and says that turning off mip mapping isn't the solution... well, I am no expert in Blender, but *I* thought I got the whole problem under control with the way I set up my textures:

A simple scene with a tiled normal map texture. Everything else at default values (5x AA, Gauss filter):
Image

mipmap turned off
Image

Texture filter size set to 0.25 rather than the default 1:
Image

Texture filter size set to .1
Image

While I am not claiming that this is the solution to all the texture artifacts, it does solve the majority of the problems. But then again, this is already an extreme case; not many things in real life are this repetitive and high in frequency... except maybe for the silly zebra. :lol:
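The filter-size experiment above can be mimicked outside Blender. Here is a toy 1-D sketch in plain NumPy (the helper name is invented, and a box filter stands in for Blender's actual texture filter): a wide filter averages an alternating black/white tile toward mid-grey, which is exactly the "disappearing detail" effect, while a narrow filter keeps the contrast at the cost of aliasing.

```python
import numpy as np

def sample_with_filter(texture, x, filter_size):
    """Sample `texture` at texel x, averaging over a window roughly
    `filter_size` texels wide (a crude box-filter stand-in for the
    renderer's texture filter setting)."""
    half = filter_size // 2
    lo, hi = max(x - half, 0), min(x + half + 1, len(texture))
    return texture[lo:hi].mean()

# A high-frequency tile: alternating black/white texels.
stripes = np.tile([0.0, 1.0], 32)

sharp = sample_with_filter(stripes, 10, 1)  # narrow filter: keeps full contrast
soft = sample_with_filter(stripes, 10, 8)   # wide filter: averages toward grey
```

With a narrow filter the sample returns the texel's true value (0.0 here); with the wide filter it lands near 0.5, the average of the tile, and the pattern washes out just like in the mipmapped render.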

Caronte
Posts: 76
Joined: Wed Oct 16, 2002 12:53 am
Location: Valencia-Spain-Europe

Post by Caronte » Fri Jul 14, 2006 8:45 pm

I'm not talking about textures, but about fine details: far-away things that must be represented (simulated) at less than a pixel in size.

You want an example?
OK, try to render hair without using transparency.
You can't get fine hairs.

The same applies to everything; i.e. you can't render a pencil far away from the camera without getting aliasing artifacts (or extreme blur).

Take a look at this thread:
http://www.blender.org/forum/viewtopic.php?t=4285
Caronte.
"Some Day, All Will Be Digital"
http://www.nicodigital.com

Toon_Scheur
Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Fri Jul 14, 2006 10:08 pm

Caronte, is it possible to show us an example where you have the same scene rendered in Blender, Maya, Max, Vray and Mental Ray, to see how all these apps handle AA?

OK, let me recapitulate: for example, you need a hair strand that is perhaps 1% of the width of a pixel to render nicely.

I think if you render such a fine detail, it won't show up. So why do we encounter these effects a lot in CG and not that often in real-life footage/photographs? Maybe the CG scene is too clean, exposing those jaggies. Maybe the contrast is too high; maybe the resolution is not high enough.

Hair rendering in Blender is still in its infancy, yet I haven't noticed anything strange with Elephant Dream.
I don't think that it is the fault of the software. If you want to get rid of jaggies, render at a higher resolution. If you still think it is possible to render crisp detail just because Vray is able to do so, then the following experiment should prove you right:
Render some hair strands in Vray with maximum AA and in a resolution of ... say.... 40 x 40?

Anyway, like I said, if you use the best possible viewing medium (a TFT screen, for example), you'll see those artifacts, or maybe even the gradient pixels that filter those jaggies. But I bet that if you show the picture you referred to in that other thread on a television (CRT), you'll see a straight line.

Caronte
Posts: 76
Joined: Wed Oct 16, 2002 12:53 am
Location: Valencia-Spain-Europe

Post by Caronte » Fri Jul 14, 2006 10:54 pm

Toon_Scheur wrote:Caronte, is it possible to show us an example where you have the same scene rendered in Blender, Maya, Max, Vray and Mental Ray, to see how all these apps handle AA?
Sorry, I don't have those programs.
Toon_Scheur wrote:OK, let me recapitulate: for example, you need a hair strand that is perhaps 1% of the width of a pixel to render nicely.

I think if you render such a fine detail, it won't show up.
OK, then show a representation of that, but not a lot of stair-stepping.
Toon_Scheur wrote:So why do we encounter these effects a lot on CG and not that often on real life footage/ photographs? Maybe the CG scene is too clean, exposing those jaggies.
I don't think so.
Toon_Scheur wrote:maybe the resolution is not high enough.
At the same resolution I see fine details in images from other software.
Toon_Scheur wrote:Hair rendering in Blender is still in its infancy, yet I haven't noticed anything strange with Elephant Dream.
The hairs I'm talking about were only an example, but anyway you need to use transparency (even in ED) to get fine hairs.

Maybe the trick is to teach the render engine to apply transparency automatically when things get smaller than one pixel :roll:
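That trick can be sketched as coverage-to-alpha: instead of a binary hit, a sub-pixel feature contributes opacity proportional to the fraction of the pixel it covers. A toy illustration (the function is invented for this post, not anything in Blender):

```python
def subpixel_alpha(feature_width, pixel_width=1.0):
    """Opacity for a feature narrower than a pixel: proportional to the
    fraction of the pixel it covers, clamped to fully opaque.

    A quarter-pixel hair strand then blends at 25% opacity instead of
    producing an all-or-nothing jaggy pixel."""
    return min(feature_width / pixel_width, 1.0)

quarter_hair = subpixel_alpha(0.25)  # -> 0.25
thick_hair = subpixel_alpha(3.0)     # -> 1.0, fully opaque
```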
Toon_Scheur wrote:I don't think that it is the fault of the software. If you want to get rid of jaggies, render at a higher resolution.
That's not a solution.
Toon_Scheur wrote:If you still think it is possible to render crisp detail just because Vray is able to do so, then the following experiment should prove you right:
Render some hair strands in Vray with maximum AA and at a resolution of... say... 40 x 40?
Even better: try taking a (real) photo at a decent resolution (1024x768) and tell me that you don't see fine details :roll:

The way an image is represented with fewer pixels is the difference between a good and a bad renderer/antialiasing method.
Toon_Scheur wrote:Anyway, like I said, if you use the best possible viewing medium (a TFT screen, for example), you'll see those artifacts, or maybe even the gradient pixels that filter those jaggies. But I bet that if you show the picture you referred to in that other thread on a television (CRT), you'll see a straight line.

More excuses, like M@dcow said.


Sorry for my poor English; I hope you understand me :?
Caronte.
"Some Day, All Will Be Digital"
http://www.nicodigital.com

Toon_Scheur
Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Sat Jul 15, 2006 12:59 am

Caronte wrote:Even better: try taking a (real) photo at a decent resolution (1024x768) and tell me that you don't see fine details :roll:

The way an image is represented with fewer pixels is the difference between a good and a bad renderer/antialiasing method.

A real photo (on a photographic medium) is analog, not digital. There is blending going on in the film or photographic paper, and scanning those photos won't produce jaggies because the scanner also picks up that analog blending.
But a 1 to 3 megapixel digital camera does produce jaggies. So resolution and medium are important factors here. OSA is just a digital simulation of what happens on an analog medium.

I do agree that Blender has issues, but I think that is the situation for every other package. We're dealing with digital media, and digital media produces digital error, contributing to artifacts. That is true for every package!

Caronte
Posts: 76
Joined: Wed Oct 16, 2002 12:53 am
Location: Valencia-Spain-Europe

Post by Caronte » Sun Jul 16, 2006 11:18 am

OK, perhaps this is a dead-end conversation; you have your own reasons, and I have mine.
Caronte.
"Some Day, All Will Be Digital"
http://www.nicodigital.com

M@dcow
Posts: 0
Joined: Sun Apr 20, 2003 12:50 pm

Post by M@dcow » Sun Jul 16, 2006 3:08 pm

...apologies for that little, uh, 'Diva' moment back there; it must be that time of the month.

There is a real reason that I get so passionate about this though, and it can be encapsulated in a single image:

Image

This isn't blender, it's yafray. There is nothing complex in this scene: no GI, just a ball, a couple of buffered omni lamps (equivalent to buffered spotlights), a bump map, and a bit of reflectivity. It took 4 minutes to apply the bump map, set up the scene and render at 800x400, with 100 samples. It just rendered, looked pretty good, job done.

I could have gone for higher settings, but didn't need to, that's the beauty of yafray...

Now try to replicate that in blender. The bump map I used is here:

http://img.photobucket.com/albums/v27/M@dcow/geob04.jpg

Use whatever settings you can think up: turn off mip-mapping, lower the filter settings, enable full OSA in the material buttons, raise the filter settings again, enable mip-mapping and try Gauss, discover what could be a bug, raise the bump factor, lower the bump factor, realise that you are going round in circles... and on and on and on it goes. You will discover that whatever you try, you will run into a big fat wall.


If you actually tried all that, tell me:

1) Did you end up with a render that came even close? In the same ballpark? The same universe? Possibly not.
2) How many hours of your life did you waste trying to get it to look good? I know I've wasted a lot. Can I have them back please? :P

All this messing around makes blender hugely inefficient. Is it an unfair assumption that an artist who has just applied a bump map, turned on anti-aliasing and rendered is going to want to actually see some bumps somewhere? Or is that just unreasonable? Is it better that he has to render out at quadruple size in a sometimes futile effort to eliminate aliasing problems? To suffer that extra 4-hour-per-frame penalty for doing so? Or to have to try a million different combinations of buttons, filtration settings and yada yada ya, just to get the kind of quality he has seen in those images over at CGtalk?

Most importantly, what is this telling those 3D Studio Max and Maya users who might be considering blender, the potential migrators? It's telling them that if blender isn't even doing the very basic basics properly, then why use it for the complex stuff?

Poor Caronte has been talking about this for years, all the way back to the blender.nl forums in fact. You know the soft-focus, vaseline effect you used to get in cheesy erotic films in the seventies? Blender has always been the 3D equivalent. You've just become accustomed to it, found ways around it, 'succeeded in spite of it' and so on... It's a true testament to some of the blender artists out there, for sure, but don't for one second mistake that for a system that actually works properly.

Current development and Project Orange are taking blender into the realm of HD video authoring. Fantastic direction. Now give it what it needs to be a contender: the basics. Or you could just ignore us for another five years. :P
Last edited by M@dcow on Wed Jun 20, 2007 7:08 am, edited 3 times in total.

z3r0_d
Posts: 289
Joined: Wed Oct 16, 2002 2:38 am
Contact:

Post by z3r0_d » Sun Jul 16, 2006 6:08 pm

M@dcow wrote:Current development and Project Orange are taking blender into the realm of HD video authoring. Fantastic direction, fantastic choice. Now give it what it needs to be a contender: the basics. Or you could just ignore us for another five years. :P
texture filtering is hardly the basics, but you're making it apparent that mipmapping no longer makes sense... at least in the current form

downsampling textures is WRONG for bump maps [nvidia had a presentation at gdc which showed workarounds... but still], it makes the result simply disappear [or at the very least become much less apparent]...
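[that claim is easy to demonstrate with a toy 1-D bump map in numpy -- nothing to do with blender's actual mipmap code, names made up: averaging texels flattens exactly the height gradients that the bump shading is computed from]

```python
import numpy as np

# a 1-D "bump map": a sharp ridge one texel wide
bump = np.zeros(16)
bump[8] = 1.0

# naive 2:1 downsample [one mipmap level]: average neighbouring texels
mip = bump.reshape(-1, 2).mean(axis=1)

# bump shading is driven by the height *gradient*; compare the peaks
peak_slope_full = np.abs(np.diff(bump)).max()  # 1.0 on the full-res map
peak_slope_mip = np.abs(np.diff(mip)).max()    # 0.5 after one mip level
```

[each mip level halves the peak slope again, so a few levels down the bump has effectively vanished -- which is exactly what the renders show]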

without mipmapping I got something I think looks reasonable [to me] in 15 minutes or so [mostly so long because my compy sucks; it takes 4 minutes for an 800x400 render [yours is 800x535, btw]]

[and then I proceeded to waste the rest of an hour trying not to spend thousands of words on a rant about the meaningless second half of your post]

mpan3
Posts: 0
Joined: Wed Mar 24, 2004 7:16 pm

Post by mpan3 » Sun Jul 16, 2006 10:08 pm

Mipmapping is a very efficient way to reduce rendering time and texture aliasing for distant textures. However, like z3r0_d mentioned, it simply shouldn't be used for bump and/or normal maps. That said, what M@dcow and Caronte were saying actually only further proves that we need a supersampling system in Blender. Here is the chrome ball with the aforementioned bump map applied, rendered in Blender with everything at default and mip-map off (rendering time:

Image


See? It doesn't look that bad at all... to me, at least.

The scene was rendered with AO and 8x AA. I also applied 'manual supersampling' by rendering the image at twice the desired resolution and then sampling it down in Photoshop, to simulate what supersampling can accomplish.
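For the curious, the downsampling half of that trick is just a box filter over 2x2 blocks. A minimal NumPy sketch (the function name is invented, and Photoshop's own resampling uses a better filter than a plain box):

```python
import numpy as np

def downsample_2x(img):
    """Average each 2x2 block of a double-resolution render down to the
    target size -- the 'manual supersampling' trick in its simplest form."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

# A hard diagonal edge "rendered" at twice the target resolution...
big = np.fromfunction(lambda y, x: (x > y).astype(float), (8, 8))
small = downsample_2x(big)  # ...comes out with grey transition pixels
```

Every output pixel is the mean of four rendered samples, so a jagged 0/1 edge becomes a ramp of intermediate greys: the same smoothing you get from rendering double-sized and scaling down.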


Toon_Scheur
Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Mon Jul 17, 2006 1:46 am

Regarding the anti-aliasing stuff, I want to start off with these illustrations first (just a very thin cylinder, dupliverted):

No OSA:
Image

Minimum OSA (Gauss):
Image

Maximum OSA (Gauss):
Image

OK, here is my layman's analysis. The difference between the first image and the other images is HUGE! The increase in quality is huge. So I don't think there is something wrong with Blender's OSA.

BUT there is something wrong here.
1) The cylinders are extremely thin! I can imagine them being visible right in front of the camera, but far away they are still visible!
2) I notice that the cylinders that are almost vertical show the most breakage, while the diagonal ones seem more complete. That is strange, because doesn't a vertically projected cylinder have a constant pixel coverage along its length?

I think what Caronte is talking about is not bad anti-aliasing; rather, the problem is due to the method of sampling and the lack of LOD management.

It is obvious that there should be some sort of adaptive sampling and level-of-detail management to render coherent images. I think it is safe to say that it isn't necessary to draw details that are much, much smaller than a pixel... but then again, whether and where to place that threshold should depend on material properties (tiny air particles can individually reflect a lot of light in the distance) or on material and/or object density (a person shouldn't go bald in the distance; rather, the hair rendering should be 'smudged' instead of rendering each hair strand with brute force).

But this is just my amateur analysis. Correct me, please.
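A minimal sketch of that adaptive-sampling idea (pure toy Python, invented names, nothing like the real render pipeline): keep adding jittered samples to a pixel while the contrast among the samples taken so far exceeds a threshold, so flat areas stop early and edge pixels get the extra work.

```python
import random

def adaptive_sample(shade, min_samples=4, max_samples=64, threshold=0.01):
    """Toy adaptive pixel sampler. `shade` maps a jittered (x, y) in
    [0, 1)^2 to a brightness; sampling continues while the contrast
    between the samples exceeds `threshold`."""
    samples = [shade(random.random(), random.random())
               for _ in range(min_samples)]
    while (max(samples) - min(samples) > threshold
           and len(samples) < max_samples):
        samples.append(shade(random.random(), random.random()))
    return sum(samples) / len(samples)

flat = adaptive_sample(lambda x, y: 0.5)             # stops after 4 samples
edge = adaptive_sample(lambda x, y: float(x > 0.5))  # keeps sampling the edge
```

A real implementation would place the threshold per material, as suggested above (hair, tiny bright particles), rather than using one global contrast value.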
Last edited by Toon_Scheur on Mon Jul 17, 2006 12:33 pm, edited 1 time in total.

z3r0_d
Posts: 289
Joined: Wed Oct 16, 2002 2:38 am
Contact:

Post by z3r0_d » Mon Jul 17, 2006 8:02 am

mpan3 wrote:I also applied a 'manual super-sampling' by rendering the image at twice the desired resolution and then sampled it down in photoshop to simulate what supersampling can accomplish.
that makes quite a large difference.

check these out [the ones I mentioned before], both of these have mipmapping off for that texture...

4 minute render at 800x535:
Image

20 minute render at 1600x1070 [twice in each dimension], scaled down to same size as previous:
Image

the difference is quite apparent...

Caronte
Posts: 76
Joined: Wed Oct 16, 2002 12:53 am
Location: Valencia-Spain-Europe

Post by Caronte » Mon Jul 17, 2006 9:28 am

Toon_Scheur has explained very well what I'm talking about.

I think coders must see this post in order to find a solution.
Caronte.
"Some Day, All Will Be Digital"
http://www.nicodigital.com

M@dcow
Posts: 0
Joined: Sun Apr 20, 2003 12:50 pm

Post by M@dcow » Mon Jul 17, 2006 12:06 pm

Mpan,

Not a bad image at all; you've come pretty much as close as you can get in blender, but it still suffers from a few things, such as:

Lack of apparent bump depth
That soft-focus, blurry effect blender gives to its renders
Some aliasing artifacts, despite the fact that you double-sized it

'Clarity' sums it up better than any other word in the English language. Blender renders lack clarity. It's all over the blender renders on this page... if you use your eyes critically and compare the two, it'll be clear as day. I don't know what down the line is causing it; I always assumed it was a combination of the way blender handles bumps and the way its AA works. I could be wrong. But there is something there, and it's making my renders a lot more blurred than I want them to be. And Caronte's. And Cekuhnen's.

I'd say getting blender up to scratch in this department is easily as important as any new feature the coders are planning to add for 2.43. Whatever this department may turn out to be...

Toon_Scheur
Posts: 0
Joined: Sat Nov 06, 2004 6:20 pm

Post by Toon_Scheur » Mon Jul 17, 2006 2:24 pm

Caronte wrote:Toon_Scheur has explained very well what I'm talking about.

I think coders must see this post in order to find a solution.
Yeah, and the biggest problem I foresee is when Ton implements subpixel displacement. How do you displace pixels when the object isn't even correctly detected by the render engine?
