64-bit rendering - the quick and dirty way (works right now)
Hi, I have been looking at how to get 64-bit images out of Blender.
I can't code in the rendering area yet, so I looked at some other ways and discovered a really simple method (though dirty, as mentioned):
Render larger - 3x, with no/less AA.
Convert the image to 64bpp in an image app (ImageMagick/CinePaint).
Scale it back down by 3.
That's it.
ImageMagick is simple and can be automated easily. The two lines below convert one image:
convert test.png -depth 16 test2.png
convert test2.png -resize 300x300 test3.png
I'm not sure if combining these into one command makes it compute the scale in 16bpp; two separate commands definitely will. (See the sketch below.)
And voilà - 64bpp.
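For what it's worth, a combined one-liner might look like this (an untested sketch - the output file name is made up, and whether the resize maths really happens at 16 bits depends on your ImageMagick build's quantum depth; a Q16 build holds pixels in 16 bits internally):

convert test.png -depth 16 -resize 33.3333% test_16bpc.png

identify -verbose test_16bpc.png | grep -i depth    # check the depth of the result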
Here's the scale ratio. Averaging n 8-bit samples gives roughly 8 + log2(n) bits, so the sample count has to double for each extra bit:
1 px = 8 bits
2 px = 9 bits
4 px = 10 bits
8 px = 11 bits
16 px = 12 bits
...
256 px (a 16x16 block) averaged down to 1 px = 16 bits
Therefore rendering at 3x width/height (9 samples per output pixel) gives a 64-bit file with roughly 11 bits per channel of real precision when scaled back down; a genuine 16 bits per channel would need a 16x render. (See the quick check below.)
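A quick sanity check of that arithmetic with bc (in bc, l() is the natural log, so l(n)/l(2) is log2(n), the extra bits gained by averaging n samples):

echo '8 + l(3*3)/l(2)' | bc -l     # 3x render: about 11.17 bits
echo '8 + l(16*16)/l(2)' | bc -l   # 16x render: 16 bits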
- Cam
@usagi: Let me say it in other words:
64-bit output is mandatory for using Blender in a professional environment.
@ideasman: Your approach to 64-bit output sounds interesting, although you won't be able to reverse the quantisation in areas of an image where colours blend very softly. And exactly these areas may cause the most problematic situations when doing a colour correction.
If Blender did 24-bit dithering out of the (assumed) internal 64-bit calculation, your method should work perfectly. (The sketch below shows the difference dithering makes.)
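Something like this ImageMagick experiment should show it (untested sketch, file names made up):

convert -size 64x1024 gradient: -depth 16 grad16.png
convert grad16.png -depth 8 grad8_banded.png
convert grad16.png -ordered-dither o8x8,256 -depth 8 grad8_dithered.png

The first reduction simply truncates to 256 levels and bands; the dithered one trades the banding for faint noise.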
Bertram
That's a good point about the dithering.
This is not really a perfect solution, but it's not that bad either.
We really need 16-bit colour channels saved to a PNG from within Blender.
It's interesting that Blender can load 64-bit PNG images. I assume they are downsampled back to 32, but even so, it means that adding 64-bit support wouldn't be odd (Blender rendering images that it couldn't read). You can check below what depth a file actually carries.
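For example, with ImageMagick (%z prints the image depth in bits):

identify -format '%z' test.png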
- Cam
You got it, usagi!
Actually HDRI is just a fancier term for "higher colour depth than the usual 24-bit", isn't it?
When talking about the output of your visual information, 16.7 million colours are good enough, because the human eye doesn't have the perception for any more colours.
But when talking about processing the information, it would be a big problem to use the dynamic range of the output as the dynamic range for working.
In the digital age we have the ability to losslessly reproduce any digital media. But when digitising analogue sources and processing them, we will always have to deal with a big loss of information - in the worst cases even more than with processing the analogue media/signal itself. Therefore the working dynamic range has to be a multiple of the target (output) range.
This is why I maintain that nowadays the ability to output 16 bits per channel is no longer a nice-to-have feature. It's an urgently needed basic feature for Blender.
Look ahead: more and more digital cameras and scanners, even in the consumer range, provide colour depths of 12 or even up to 16 bits per channel.
By the way: not only do displays support just 16.7 million colours; if your image is used for press, the dot screen also allows only 256 shades of each colour (Cyan, Magenta, Yellow, [Key/Black]). This is because most of the (aged) typesetters like Agfa, Linotronic, etc. are configured to work at resolutions of 2540 dpi. With the very common screen ruling of 150 lpi (lines per inch), each halftone cell is ~16 x 16 device points (2540 / 150 = 16.9333), which allows up to 256 sizes of a dot. (The quick check below runs the numbers.)
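Running those numbers with bc:

echo '2540 / 150' | bc -l   # ~16.93 device points per halftone cell side
echo '16 * 16' | bc         # 256 possible dot sizes, i.e. 256 shades per ink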
In the meantime, techniques like FM screening (frequency-modulated screening, as your inkjet printer does it [instead of the amplitude-modulated screening of a laser printer or newspaper]) or Hexachrome (6 instead of the 4 primary colours) push the demand for a higher dynamic range even in press application areas.
And: sorry if I bored anyone with my essay.
bertram wrote: Actually HDRI is just a fancier term for "higher colour depth than the usual 24-bit", isn't it?
If I'm not totally mistaken, it is not exactly the same. As I understand ideasman's method, it produces more shades of colour, but within the previous range from black to "display white". True HDRI, however, ranges from black to sunlight, which is possibly 1000 times brighter than the brightest white of a monitor screen.
Here is a good explanation: http://www.debevec.org/HDRShop/main-pages/intro.html
bertram wrote: You got it, usagi! Actually HDRI is just a fancier term for "higher colour depth than the usual 24-bit", isn't it?
No - as mentioned, HDRI contains information above and below the values that you see on screen. You can bump up the exposure to see details that were under-exposed before, or reduce the exposure to see details that were over-exposed. So for example, instead of a range from 0 (black) to 1 (white), you can have a range from -5 (under-exposed) through 0 (black) and 1 (white) up to 5 (over-exposed). (See the sketch at the end of this post.)
More bits per channel just means that you're representing the image that you already see with more definition - there are more steps between the 0 and the 1. Which is of course very useful in its own way, but serves a different purpose.
bertram wrote: When talking about the output of your visual information, 16.7 million colours are good enough, because the human eye doesn't have the perception for any more colours.
Not necessarily - for example, if you're doing something in black and white (greyscale), you only have 256 levels of brightness. This can be clearly visible in the form of banding on gradients, even when the image hasn't been post-processed.
Most film is digitised at 16 bits per channel, mainly because film itself has a higher dynamic range than 24-bit RGB, and also because losses in the dynamic range of your image are much more noticeable when viewed on a powerful projector (which has a much higher dynamic range than your average PC display). If you look at an 8bpc image beside a 16bpc image on a film projector, the difference should be discernible. Incidentally, this is the story behind CinePaint (aka Film Gimp): the studios needed to work in 16bpc, the standard Gimp maintainers didn't want it, or wouldn't allow the patches, or something, so they forked The Gimp to make their own 16-bit version.
Bertram - I absolutely agree with you on the importance of higher-precision output!
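To make the exposure idea concrete, here is a rough ImageMagick sketch (assumes a reasonably recent, HDRI-capable build; probe.hdr is a made-up Radiance file name; one stop of exposure is a factor of 2 in linear light):

convert probe.hdr -evaluate multiply 4 -depth 8 exposed_up.png      # +2 stops: pull up under-exposed detail
convert probe.hdr -evaluate multiply 0.25 -depth 8 exposed_down.png # -2 stops: recover over-exposed highlights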
Quote: No - as mentioned, HDRI contains information above and below the values that you see on screen.
But at least HDRI is plain and simple 16bit/channel RGB, isn't it?
The name "HDRI" only indicates the special way in which the gradient of the source image(s) is mapped to the gradient of the final target "HDRI" image.
Quote: Not necessarily - for example, if you're doing something in black and white (greyscale), you only have 256 levels of brightness. This can be clearly visible in the form of banding on gradients, even when the image hasn't been post-processed.
Well, surely this is true for 8-bit, although you may achieve a multiple of 256 "greyscales" in 24-bit RGB by incrementing only one channel at a time instead of all three RGB values. This gives slight variations in hue, but that is almost unnoticeable.
Another reason for heavy banding is the gamma correction that is applied 1. by Blender, 2. by your graphics card, 3. by your imaging software, ... This can make the best input image look awful on screen! (The round-trip sketch below shows how an 8-bit gamma adjustment destroys levels.)
But back to 24-bit: 24-bit would also be enough for film if it were dithered. The dithering pattern would - if at all - be noticeable as a very slight extra noise.
Look at HDCAM, which "only" works in 10bpc or 12bpc and was used for feature films like Episode 2...
In my opinion, processing and storing information at maximum resolution is a legitimate interest as long as it is economically arguable. But exposing 16bpc to film is simply overdone, though granted, it is the standard.
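A sketch of that round trip (file names made up; writing the 8-bit file in between forces the quantisation):

convert grad8_banded.png -gamma 2.2 -depth 8 tmp8.png
convert tmp8.png -gamma 0.454545 -depth 8 grad8_roundtrip.png
compare -metric AE grad8_banded.png grad8_roundtrip.png null:   # prints how many pixels no longer match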
bertram wrote: But at least HDRI is plain and simple 16bit/channel RGB, isn't it? The name "HDRI" only indicates the special way in which the gradient of the source image(s) is mapped to the gradient of the final target "HDRI" image.
I think it depends on the specific format. I found this SIGGRAPH presentation with a bit of googling that describes it well:
http://www.debevec.org/IBL2003/GWcourseTalk-IBL2003.pdf
I had a look at the presentation. This is the point where I've got to quit the discussion about colour space, because it becomes too theoretical and high-level for me.
The only conclusion I can draw is that my information about the perception of the human eye was obviously outdated and therefore wrong.
I assume that if Blender could generate 16bpc output, it should also be able to allow HDRI output - at least with the help of a little tweaking in S&L done by the artist himself.
Using HDRI vs Outputting HDRI
Don't most 3D programs use HDRI rather than outputting it? I remember Dr. Debevec taking a bunch of pictures at different f-stops and then combining all that information into an HDRI image. But most modelling programs seem to use that information in reflections.
For example, if you have a black pool ball on a table right beside a window on a very sunny day, you can't see anything out of the window because it's too bright, but in the reflection of the outside on the pool ball you can see that there is a tree out there, because the light has lost some of its intensity. I could be way off, but that's the way I've seen it used.
So outputting HDRI doesn't seem like something we'd need Blender to do (maybe talk to the GIMP people?). It'd be really nice if Blender could use HDRI images, though. Maybe it's already in Yafray?