Skin shader - REAL translucency

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

Post Reply
JoOngle
Posts: 0
Joined: Sat Jan 24, 2004 4:12 pm
Contact:

Skin shader - REAL translucency

Post by JoOngle » Thu May 13, 2004 11:21 pm

Can it be done? Will it be done? It's a dream
of mine that it'll appear in Blender one day.

Sure - there are a dozen ways to fake real
translucency in skin, but just to make clear
what I mean - a short summary:

Have you ever seen your girlfriend in the
sunlight? Or a baby's face when the sun or
other bright light hits it? Or your own hand
covering the lightbulb of a lamp?

Notice the light-translucent edges of your
fingers/the-face/the skin...and notice
that the light doesn't just pass through the
skin like simple opacity...it spreads and scatters
through the flesh of the body, giving it this
wonderful, special...almost impossible to
emulate by faking it...color / light.

Now...Hollywood these days is using a
special technique supposedly invented by
Dr. Henrik Wann Jensen. The BBC recently
wrote about him and the "Skin Shader"

here: http://news.bbc.co.uk/1/hi/technology/3683853.stm

Unfortunately I'm not a coder myself, just
a mere 3D artist who has been doing 3D
for many, many years and has recently
switched permanently (so far) to Blender
from 3dstudio max (many years)...because
I love the Blender workflow, not to mention
the ENORMOUS progress you Blender coders
have made with it. So I am daring
to wish for such a thing, especially since
you were innovative enough to give us
Ambient Occlusion (which I use like crazy today).

Will it happen....?!

/Tommy

Koba
Posts: 0
Joined: Thu Apr 15, 2004 9:48 am

Post by Koba » Thu May 13, 2004 11:36 pm

It's already happening if you consider YafRay an extension of Blender's rendering system: http://www.coala.uniovi.es/~jandro/nona ... .php?t=750. If YafRay gets this feature, it may be ported over to Blender.

Koba

theeth
Posts: 500
Joined: Wed Oct 16, 2002 5:47 am
Location: Montreal
Contact:

Post by theeth » Thu May 13, 2004 11:37 pm

What you are looking for specifically is called SubSurface Scattering (SSS for short). It's based on the principle of translucency but is a lot more complex.

One of the YafRay coders has working code for SSS, so it's bound to pop up in YafRay at some point or another.

Martin
Life is what happens to you when you're busy making other plans.
- John Lennon

JoOngle
Posts: 0
Joined: Sat Jan 24, 2004 4:12 pm
Contact:

Post by JoOngle » Fri May 14, 2004 12:33 am

Thanks Koba & Theeth......

But...

Yafray is nightmarishly slow (yeah, I've heard that people
say it's fast, and it is fast compared to other GI renderers,
but not really fast in absolute terms)...

I have 3dstudio max with Vray & other GI renderers...they're
lightning fast compared to Yafray.

I rendered an "animation-table-scene"...full indoor scenario
with furniture, lots of items and much more with yafray...
it rendered for 5 DAYS in a row...without ever finishing on ANY
computer. I rendered it on AMD, INTEL + 10 different computers
and with Yafray it simply never finished....froze somehow.

Ambient occlusion might be a "cheaty" way of achieving the same
effects in shadows as with GI...but that one is a LOT faster (in Blender)
than in any other 3d software I've seen or tried....impressive! and
USEFUL.

To be honest...Yafray has been useless for professional use
simply because it can't handle things fast enough when we
come up with a lot of detail - but the Blender internal renderer can.
And it's lightning fast too....which is essentially a dream come true
for many of us needing to do e.g. animation shorts.

I do hope one of you manages to get SSS + a skin shader
into the Blender native renderer. The renderer in Blender
is fast and a GEM.

Thanks for your attention.

/T

Dani
Posts: 143
Joined: Fri Oct 18, 2002 8:35 pm

Post by Dani » Fri May 14, 2004 7:30 pm

Yes... I must agree.

Blender's renderer development shouldn't completely rely on external renderers. Its own internal renderer is extremely powerful, enables infinite creativity, and is getting better with each release. It is important to continue this development... improving, polishing, upgrading...

oh well, just another opinion

Dani

gabio
Posts: 0
Joined: Thu Jan 15, 2004 6:41 am
Location: Canada - Québec - Sherbrooke
Contact:

Post by gabio » Sat May 15, 2004 4:54 am

*AND* some internal tricks of Blender are still not portable. Say, particles...

dcuny
Posts: 0
Joined: Mon Jan 27, 2003 11:22 pm

Post by dcuny » Sat May 15, 2004 8:33 am

The following might not be especially helpful, but hopefully it'll give some idea why people can't just "slap together" a subsurface scattering shader into Blender. Hopefully I won't get too many details wrong... :?

There are two components to the effect: translucency and subsurface scattering.

Translucency is a measure of the light that enters one side of the object and exits the other. You can see this effect on candles, jade, and ear lobes.

To fake translucency, you measure the distance from the front of an object to the back, and use that distance to determine the illumination of the object. It's fairly effective for simple things, but it is a cheat, because it assumes that the lightsource is directly behind the object. If you spin the object, it gives the effect away.

A more correct way to do this is to shoot a ray from the surface of the object to the lightsource, and measure the distance to the exit point. It's not much more difficult to do than the version above, and it looks better.

The correct way to do it is a bit more complex, sort of a combination of the above methods. You "march" the ray through the object starting from the surface, moving in the direction of the line of sight (i.e. the direction vector goes from the eye start point to the surface point). By "marching", I mean you move the ray n units along (where n is a small amount), and then shoot a ray from where the "marching" ray is to the lightsource, measuring how far it is from that point inside the object to the outside surface. This is used to determine how much contributing illumination is coming from the lightsource. March the ray forward n more units, and measure and add in the light contribution at that point, until the marching ray comes out the other end. Sum the contributing amounts, and you have the illumination.
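The marching loop above can be sketched roughly like this (a toy Python sketch, not Blender code; the signed-distance function `object_sdf`, the step size, and the extinction coefficient `sigma_t` are all made-up placeholders for whatever a real renderer would provide):

```python
import math

def depth_toward_light(p, light_dir, object_sdf, step):
    """March from an interior point toward the light until we exit the
    object; returns the thickness of material the light must cross."""
    d = 0.0
    q = list(p)
    while object_sdf(q) < 0.0:          # negative SDF = inside the object
        q = [q[i] + step * light_dir[i] for i in range(3)]
        d += step
    return d

def march_translucency(entry, view_dir, light_dir, object_sdf,
                       step=0.05, sigma_t=2.0):
    """Sum exponentially attenuated light contributions along the eye
    ray as it marches through the object (sigma_t is a made-up
    extinction coefficient)."""
    total = 0.0
    p = [entry[i] + step * view_dir[i] for i in range(3)]
    while object_sdf(p) < 0.0:          # still inside the object
        depth = depth_toward_light(p, light_dir, object_sdf, step)
        total += math.exp(-sigma_t * depth) * step
        p = [p[i] + step * view_dir[i] for i in range(3)]
    return total
```

A thin object (short light paths) comes out brighter than a thick or dense one, which is exactly the glowing-edge effect described earlier in the thread.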

Here is a link to a site that describes how this can be done in RenderMan with shadow buffers. Lots of pretty pictures, much clearer than my confused babbling above:

trans.htm
depthbasedTranslucence.htm

The translucency is only a portion of the effect, and often not even a very major one. Subsurface scattering is the effect of light that comes into the surface, bounces around a bit, and comes out somewhere else. So the subsurface illumination contribution is measured by determining how much light is contributed by neighboring points. That's why you get a soft "glowing" effect, and hard-edged shadows are softened - light from lit points near the shadow comes bouncing under the skin and illuminates the points in shadow. A very cool effect, and virtually impossible to fake cheaply.

The easiest way to do this is a two-pass method. First, determine the illumination of all the points on the mesh. Then, when you look at a point, you determine how much each neighbor contributes. This is not terribly difficult if you have lots of small triangles in your mesh, all roughly the same size. Don't bet on that happening by chance. So the determination of how much each nearby point contributes becomes a bit more complex, because it's a function of the area of the poly that it's attached to.
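The gathering pass can be sketched like this in Python (a toy sketch: the exponential falloff is a made-up stand-in for a real diffusion profile, and `points`, `areas`, and `illum` are per-vertex arrays from the hypothetical first pass):

```python
import math

def gather_sss(points, areas, illum, target_idx, mean_free_path=0.5):
    """Second pass of a two-pass SSS approximation: sum the stored
    illumination of neighboring surface points, weighted by each
    neighbor's area and by a made-up exponential falloff with distance
    (a real diffusion profile, e.g. Jensen's dipole, is more involved).
    """
    tx, ty, tz = points[target_idx]
    total = 0.0
    for i, (x, y, z) in enumerate(points):
        if i == target_idx:
            continue
        r = math.sqrt((x - tx)**2 + (y - ty)**2 + (z - tz)**2)
        weight = areas[i] * math.exp(-r / mean_free_path)
        total += illum[i] * weight
    return total
```

Note how the area weighting handles unevenly tessellated meshes: a big bright polygon contributes more than a tiny one at the same distance.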

Apparently it can be done relatively efficiently using this method, but that doesn't mean that it's fast. And while this is essentially a raytracing operation, it apparently can be simulated with zBuffers - I've seen a description of how to do it in Stupid RenderMan Tricks, but it's too clever for me to puzzle out exactly how they do it:

Siggraph2002-WLW.ppt

(It's in PowerPoint format, but OpenOffice can read it just fine).

harkyman
Posts: 98
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Sat May 15, 2004 6:29 pm

Isn't translucency for candles, earlobes, etc., just a specific case of SSS, where the front and back surfaces are near enough that the SSS actually hits the opposing side? And in that case, wouldn't a decent SSS algorithm also generate translucency?

dcuny
Posts: 0
Joined: Mon Jan 27, 2003 11:22 pm

Post by dcuny » Sat May 15, 2004 11:32 pm

Well, from a physical point of view, it's all a trick of the light. :) And they are similar in that they are both phenomena of light moving through a material.

Simply put, translucency measures how much light doesn't get scattered while SSS measures how much scattered light contributes to a neighbor.

With translucency, you're measuring the light that starts out from behind and doesn't get scattered. The question is "how much light gets through this thickness?" The answer is a function of the light coming in, and the thickness of the material.
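That answer is essentially Beer-Lambert attenuation, which is a one-liner (a sketch; `sigma_t` is a made-up extinction coefficient you would tune per material):

```python
import math

def transmitted(incoming, thickness, sigma_t=1.0):
    """Unscattered light surviving a slab of material: exponential
    falloff with thickness (Beer-Lambert law)."""
    return incoming * math.exp(-sigma_t * thickness)
```

Zero thickness passes all the light through, and each extra unit of thickness cuts the survivors by the same fraction.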

With subsurface scattering, you are measuring light that starts out in front, bounces around, and comes back out somewhere else. You visit nearby neighbors and ask "of the illumination you are receiving, how much bounces around the thin dermal layer and comes out at the point I'm interested in?" The answer is a function of the light coming in to that point, the percent that point's poly contributes to the target, and the distance of that point to the target point.

So no, you have to run two different algorithms to determine the contribution of each phenomenon.

macke
Posts: 24
Joined: Tue Oct 15, 2002 11:57 pm

Post by macke » Tue May 18, 2004 12:22 am

What dcuny described above in the raymarching notes is referred to as the "single scattering term" in Jensen's paper. The other is the "diffuse scattering term". Jensen's BSSRDF then consists of these two added together (BSSRDF = single + diffuse).

Skin exhibits substantially less single scattering than diffuse scattering (I believe) which explains why the light scatters very softly inside a skin volume.

I'm not sure I'm ready to agree with you on the speed issue though, dcuny. If you can define illumination on the mesh (perhaps with photon mapping, which is proposed for the diffuse term) you can easily store it in a data structure which lets you find the data you need pretty quickly, and you can interpolate and extrapolate from this data as you wish. Doing it without any type of cache, however, can make the computation come to a grinding halt, especially with a lot of samples. On the same issue, Jensen proposes a Russian roulette technique for deciding whether or not to stop the ray marching in the single scattering term, and also whether or not a photon is too low in energy to add to the effect (I think this is referred to as the scattering albedo). Additionally, Jensen also talks about volume photons in his book.
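The Russian roulette idea can be sketched like so (a generic sketch of the technique, not Jensen's actual parameters; the threshold and survival probability are made-up tuning values):

```python
import random

def russian_roulette(weight, threshold=0.05, survival_p=0.5):
    """Probabilistically terminate a low-energy path. Survivors are
    boosted by 1/survival_p so the estimate stays unbiased on average:
    expected value = survival_p * (weight / survival_p) = weight."""
    if weight >= threshold:
        return weight                    # energetic enough: keep going
    if random.random() < survival_p:
        return weight / survival_p       # survives, with compensated weight
    return 0.0                           # terminated
```

The payoff is that most dim paths stop early, while the occasional boosted survivor keeps the average contribution correct.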

But anyway, this would mean including photon mapping in Blender, and I have no idea if anyone has even considered that.

It's worth noting too that Jensen's BSSRDF is by no means THE way of calculating subsurface scattering. You have a lot of artistic freedom when it comes to creating a BSSRDF. Jensen's method just happens to be very well suited for the job ;o)

Post Reply