Feature Request: SuperSampling AA
Have you had instances where 16x multisampling just isn't enough? I'm not sure whether the traditional motion-blur method will still be implemented, but 16x AA is often not enough when I'm working with high-frequency textures or shaders*. The obvious solution is to render the image at, say, two or four times the desired resolution and then scale it down. But currently there is no way to do this within Blender, and it's a pain to scale it down in an external video compositor because of disk-usage limitations.
*For example when you have a noise texture to achieve blurred reflection.
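A minimal sketch of the render-large-then-scale-down idea, assuming numpy is available; the high-resolution frame is faked with random data here, standing in for whatever the renderer produces:

Code:
import numpy as np

# Supersampling: render at n times the target resolution, then average
# each n x n block down to one pixel (a simple box filter).
def downsample(img, n):
    h, w, c = img.shape[0] // n, img.shape[1] // n, img.shape[2]
    return img.reshape(h, n, w, n, c).mean(axis=(1, 3))

hi = np.random.rand(1080 * 4, 1920 * 4, 3)  # stand-in for a 4x-size render
lo = downsample(hi, 4)                      # final 1080p frame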
And when 16x AA or 32x AA is not enough, then what? If you really want to show detail that is 1/100th of the width of a pixel, why not render at 5000x6000 or something?
According to the Shannon sampling law, the sampling frequency should be twice the highest frequency in your dataset. The highest frequency I can imagine in a digital picture is alternating black and white pixels, so theoretically 2x AA should suffice... but such an alternating block pattern has infinite frequencies (Fourier transform), so practically you'd need an infinite filter in those worst-case scenarios. In conclusion it boils down to this: how can the ARTIST manipulate the image to make it look good anyway?
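For illustration: a sine sampled at exactly the Nyquist rate is either captured or lost entirely depending on sample phase, which is one way to see why "2x should suffice" is only the theoretical limit. A quick numpy check:

Code:
import numpy as np

f = 1.0                                # signal frequency, cycles per unit
xs = np.arange(0, 4, 1.0 / (2 * f))    # exactly 2 samples per cycle
print(np.sin(2 * np.pi * f * xs))               # ~all zeros: signal lost
print(np.sin(2 * np.pi * f * xs + np.pi / 2))   # alternating +1/-1: captured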
I don't know VRay, but I'll assume VRay uses raytracing. OK, let's assume you can control the raytracing process... say, let's shoot a few thousand rays per pixel. You'll get very high-quality images... in time. Maybe VRay renders fast due to clever render algorithms.
There was this Siggraph paper about factored BRDF sampling, which means higher-quality sampling in less time. I wrote to the author of this paper once. He was interested in Blender, but he doesn't have the time to plunge into the code. On Elysiun one of his students picked this up and said he would discuss with the Siggraph author how he could help out. That was almost a year ago... but I still have hope. He said he was willing to donate the code, though...
To come back to the AA stuff, maybe the question could also be: to scanline or not to scanline. It seems (I've read it somewhere... maybe something about the production of the movie Robots) that a scanline renderer is fast for simple scenes, but if you have very complicated scenery, you're better off with raytracing. Why? Let's assume you want to render a forest. All those Z-buffer sorting and visibility algorithms will bog down the scanline renderer. You'll get a huge speed increase with similar quality from raytracing because the rendering process depends less on the complexity of the scene: the ray bounces off the first leaf and gets recorded, then a ray bounces off the 12th leaf and gets recorded. No sorting algorithms. I can imagine you'll get Moiré effects when rendering with scanline. Maybe with pure raytracing (optimize the octree for this) the effect is less noticeable?
And please correct me if I'm telling fairytales here, I'm not THE expert on these topics.

Toon_Scheur wrote: I can imagine you'll get Moiré effects when rendering with scanline.
I get moiré on high-contrast illumination over very thin geometry when rendering with the internal renderer.
Toon_Scheur wrote: Maybe with pure raytracing (optimize the octree for this) this effect is less noticeable?
If you mean using the RAY button in F10, I use it all the time because I need AO, but it doesn't fix the moiré in any way.
Toon_Scheur wrote: And please correct me if I'm telling fairytales here, I'm not THE expert on these topics

Sorry for my poor English

Uhm... AA quality has nothing to do with whether you use raytracing or a scanline algorithm; both either hit the geometry or not. What determines quality is:
- number of samples
- placement of the samples
- choice of reconstruction filter
In case you haven't noticed, Blender already has a nice choice of filters... but using fixed sampling positions is prone to Moiré-like effects.
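A toy sketch of those three knobs for a single pixel; shade() is a hypothetical stand-in for evaluating the scene, here just a vertical edge through the pixel:

Code:
import random

def shade(x, y):
    return 1.0 if x > 0.5 else 0.0     # toy scene: edge at mid-pixel

def pixel(n, jitter=True, filt=lambda dx, dy: 1.0):
    acc = wsum = 0.0
    for i in range(n):                 # n x n stratified samples
        for j in range(n):
            dx = (i + (random.random() if jitter else 0.5)) / n
            dy = (j + (random.random() if jitter else 0.5)) / n
            w = filt(dx - 0.5, dy - 0.5)   # reconstruction filter weight
            acc += w * shade(dx, dy)
            wsum += w
    return acc / wsum

tent = lambda dx, dy: (1 - abs(2 * dx)) * (1 - abs(2 * dy))
print(pixel(4))                        # jittered, box-weighted
print(pixel(4, filt=tent))             # jittered, tent-weighted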

However, the human eye forgives some sampling artifacts better than others. For example, high-frequency noise is usually less obvious than repeating patterns, slight blurring more acceptable than ringing, etc. That's where the filters come in too, each making a different trade-off between complexity and the various unavoidable aliasing phenomena.
Rendering at a higher resolution and then downsampling also just increases the sampling frequency, but by instead taking more samples per pixel you can:
- try to find smarter sampling positions
- save a lot of RAM (see the back-of-the-envelope sketch below)
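To put a rough number on the RAM point (my arithmetic, assuming a float RGBA buffer): 16 samples per pixel at 1080p, held as a full 4x4 supersampled image versus accumulated in place:

Code:
full  = (1920 * 4) * (1080 * 4) * 4 * 4   # full 4x supersampled buffer, bytes
accum = 1920 * 1080 * 4 * 4               # accumulate 16 samples per pixel
print(full // 2**20, accum // 2**20)      # ~506 MiB vs ~31 MiB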
So I agree that at least 32x supersampling still makes sense, sometimes maybe even more. But some oversampling like video cards do could help a lot on geometric detail without such extreme extra cost... basically re-weighting samples by additional hit-tests.
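A sketch of that hit-test re-weighting idea as GPUs do it (all names here are hypothetical): shade once per pixel, run several cheap coverage tests, and weight the one shaded colour by the covered fraction:

Code:
# inside() is a hypothetical point-in-geometry test, shade() the
# expensive shader; only coverage is evaluated many times.
def coverage_pixel(px, py, inside, shade, n=4):
    hits = sum(inside(px + (i + 0.5) / n, py + (j + 0.5) / n)
               for i in range(n) for j in range(n))
    return (hits / (n * n)) * shade(px + 0.5, py + 0.5)

# toy usage: half-plane x > 10.3 as "geometry", flat white shading
print(coverage_pixel(10, 0, lambda x, y: x > 10.3, lambda x, y: 1.0))  # 0.75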
The scanline vs. raytrace debate doesn't really belong here...
Toon_Scheur wrote: According to the Shannon sampling law, the sampling frequency should be twice the highest frequency in your dataset. The highest frequency I can imagine in a digital picture is alternating black and white pixels, so theoretically 2x AA should suffice...
You didn't really understand the problem: your dataset is the scene, not your rendered image. A single polygon edge already has an infinite spectrum; you can never render it without removing information. All you're trying to do is prevent visible aliasing by taking enough samples to cut off as much as possible of the bandwidth that exceeds the maximum frequency your image can represent (the maximum image frequency being alternating black-and-white pixels). To do this task perfectly, you'd need a sampling rate of 2 x infinity, which is obviously not possible.
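To make the infinite-spectrum claim concrete, the textbook result for an ideal step edge u(x) is the distributional Fourier transform

\mathcal{F}\{u\}(\omega) = \pi\,\delta(\omega) + \frac{1}{i\omega},

whose magnitude decays only like 1/|\omega| and never reaches zero. So f_{\max} = \infty, and no finite sampling rate f_s \ge 2 f_{\max} exists.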

I couldn't agree with you guys more; as it stands now Blender just doesn't cut it... here's a post I made regarding this subject:
http://blenderartists.org/forum/showpos ... ostcount=6
In a sentence: Blender has the worst bump mapping and anti-aliasing of any 3D program I have ever used, and I've used a lot.
You gave up Blender because of sampling issues?
Well, while in effect any sampling issue can be resolved by rendering an insanely large image and then sizing it down with a high-quality kernel such as Lanczos, the time and memory consumption of doing this in a third-party program makes it unrealistic. For example:
One uncompressed 1080p HD-quality frame is 6 MB.
To solve certain sampling issues this frame might have to be rendered at AT LEAST twice the dimensions (four times the pixels), so the image becomes 24 MB. That's fine for a still image, but for motion-picture productions not many people have the leisure of allocating 24 MB for A FRAME, reading it into memory, scaling it down, and writing out a 6 MB version. If Blender could do this scaling down internally, the final frame would still be 6 MB and wouldn't require the additional processing time to scale the image down.
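What that external step looks like in practice, sketched with Pillow (file names hypothetical); it's this per-frame read/scale/write round-trip that an internal downsample would eliminate:

Code:
from PIL import Image

big = Image.open("frame_3840x2160.png")           # ~24 MB uncompressed
small = big.resize((1920, 1080), Image.LANCZOS)   # Lanczos kernel
small.save("frame_1920x1080.png")                 # ~6 MB uncompressed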
Isn't the whole point of Blender to make the artist's job more streamlined and less tedious?

The bump mapping vanishing at small incident angles doesn't come from AA though, it comes from texture filtering... simply disable MIP on bump maps and voilà... this happens in most other 3D software too, btw...
The only reason it doesn't happen in YafRay is that YafRay doesn't feature texture filtering yet...
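For what it's worth, in later Blender versions that toggle is scriptable; a sketch assuming the 2.7x-era Python API, where ImageTexture exposes use_mipmap (in the 2.4x UI of this thread it's the MipMap button in the texture panel):

Code:
import bpy

# Disable MIP-mapping on image textures whose name suggests a bump map,
# so they are sampled at full resolution.
for tex in bpy.data.textures:
    if tex.type == 'IMAGE' and 'bump' in tex.name.lower():
        tex.use_mipmap = False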

LOL:
What's the point of doing that when in fact it can be done faster with OSA? Of course, if you want a 64x64 or a 128x128 filter, then indeed you should render at a higher image size and downsample it. And furthermore, there are already better filters with better performance (like Catmull-Rom, tent and such).
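The 1-D kernels being compared, sketched with the standard formulas (Catmull-Rom is the a = -0.5 cubic):

Code:
def tent(x):
    x = abs(x)
    return max(0.0, 1.0 - x)            # linear falloff, 1-pixel radius

def catmull_rom(x):
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0                          # 2-pixel radius, slight negative lobe

print(tent(0.5), catmull_rom(0.5))      # 0.5 0.5625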
While watching Robots, I often saw those Moiré effects. It depends on the medium too, you know. On a television the effect is less noticeable because the CRT does its own AA (Gaussian filtering): each incident electron beam spreads out a little.
So in conclusion, the target medium matters a lot when rendering. You'll get fewer jaggies and less moiré watching the same animation on a 20-year-old TV than on a 42" HDTV LCD screen or something.
"You gave up Blender because of sampling issues?" Cekhunen is one of the strongest Blender advocates around. I think he gave up on trying to get this bump-mapping thing working.
"Well, while in effect any sampling issue can be resolved by rendering an insanely large image and then sizing it down" Isn't that what a standard box filter does anyway?
Lynx3d wrote: The bump mapping vanishing at small incident angles doesn't come from AA though, it comes from texture filtering... simply disable MIP on bump maps and voilà... happens in most other 3D software too, btw... the only reason it doesn't happen in YafRay is that it doesn't feature texture filtering yet...
Rather unsurprisingly, I thought of that... years ago. Doing this creates a slew of other nasty problems, so no, this is not the solution.
I'm tired of people telling me that it's because I'm doing something wrong, because I'm not. I know this software like the back of my hand, and I know there isn't a magic combination of buttons I can press to get it working right. It's simply faulty. Period.