N.G.B or Future 3D technology DISCUSSION (We need one)

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Fri May 30, 2003 8:09 pm

First of all I would also suggest to keep the posts as small as possible - perhaps a short abstract and a link to access the thoughts in depth (which could be formatted better)...

dreamerv3:
I share the opinion that you are taking the human factors (like creativity, inspiration and others) out of the process of 3D content _creation_ (and not only movie _making_). You even assume that all actions people take are based on algorithms, and I don't see it like that...

The idea of having preset libraries (and what you suggest is no more than a library with a (very) fuzzy search) is OK, but basing Blender on it can't be the way.

Regarding modularity: I agree that the future of Blender lies in a slim kernel that provides basic functionality, with modular components as tools. But I strongly dislike the idea of making it network-based software. I would prefer self-registering plugins and scripts with clear interfaces to the kernel (and perhaps an installer plugin for making the search-and-install procedure for new or newer plugins easier). What you suggested sounds like the Application Service Provider and web services stuff which has been promoted in the last years but never had a breakthrough. In my opinion the reason is that people just don't want it (besides that there are lots of other issues like security, etc.).
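The self-registering plugin idea can be sketched in a few lines of C++. All names here are invented for illustration - this is not a real Blender interface, just the pattern: a static object registers each tool with the kernel at startup, so linking (or dlopen-ing) a module is all it takes to make the tool available.

```cpp
#include <functional>
#include <map>
#include <string>

// The "clear interface to the kernel": every tool plugin exposes a name
// and an entry point the kernel can call. (Toy signature for illustration.)
struct Plugin {
    std::string name;
    std::function<int(int)> run;
};

// Central registry owned by the kernel.
class PluginRegistry {
public:
    static PluginRegistry& instance() {
        static PluginRegistry reg;
        return reg;
    }
    void add(const Plugin& p) { plugins_[p.name] = p; }
    const Plugin* find(const std::string& name) const {
        auto it = plugins_.find(name);
        return it == plugins_.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, Plugin> plugins_;
};

// Self-registration: a static object whose constructor registers the plugin.
struct AutoRegister {
    explicit AutoRegister(const Plugin& p) {
        PluginRegistry::instance().add(p);
    }
};

// A hypothetical plugin module registers itself just by existing.
static AutoRegister doubler_plugin{{"doubler", [](int x) { return 2 * x; }}};
```

The kernel never needs a hard-coded list of tools; it just asks the registry, which is exactly the loose coupling the slim-kernel argument calls for.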

So... Those were my 2 eurocents for now...

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Fri May 30, 2003 11:30 pm

kAinStein:

You're entitled to your opinion, but when I put code into the text editor, I'm gunning for the abstraction of basic repeatable concepts which can be reused. If you think that the colors red, blue, green etc. being defined in one context of design - one which is documented and taught in university art classes - is totalitarian, then maybe we should start sending letters to art departments criticizing their design principles and rules. You're free to break the rules, but it's good to start in a context as opposed to ground zero, from scratch.

I want to build buildings with design principles, not atoms; even though I can invent new metals using the atomic construction method, I don't have 20 years to spend on a building.

Sheesh, there are such things as editors. If what you envision isn't contained loosely or tightly within a certain metaphor, then go ahead and add content and assign it to a design philosophy/context. You do the same thing every day when you talk to people about what inspired your work; now tell the app what inspired your template, and it will keep it while recognizing it as unique and analyzable.

If you don't like what I'm saying then fine, I'll keep on my way and see where it leads; maybe it will yield some cool results. But before it does, it will need the same basic 3D and other tools that any good app for DCC needs, and I'll contribute to those too.

I think I like thorax's idea of refactoring the Blender code, so I'm going to look through it and see where parts can be extracted into modules and then halved or otherwise generalized, and what needs to be rewritten. I'm going to use C++, which is very compatible with C, albeit I won't use C.

As far as network-based stuff goes, I've been arguing that the main tools and the app need to stay together, both language-wise and architecturally (at least internally; you can have scripting plugins and an interpreter for Java, Python, Perl, etc.). While some recognize the merits of this argument (everyone can read from the same codebase, as opposed to divide and conquer), some see other merits as more important (the ability to pick a character's teeth in Pascal, for some unknown reason). Fine, to each his own; I'm just happy Blender is written in C and C++ and not something like Lisp.

As far as someone saying Erlang works fine on their Windows box: it worked on my Windows box too, but I never did get it to run in Linux, while Java runs like a dream in Linux, most likely due to Sun's installation support. Sprint doesn't seem to care much for Erlang support on Linux, and I don't blame them. C++ is good enough for both me and most of the rest of the developers out there.

I'll be setting up a web site soon with a design roadmap and start getting to work.

This is going to be fun; I've never put together a 3D app before, although I've programmed some logic for games. I guess the kernel is the toughest part. By the time the Blender descendants come about, I'll know as much as I should about 3D app design and implementation.

I'll still support the BF as I always have; I hope we can get a position in startup park.

Later!

Thorax: I feel your post shouldn't have been deleted like that; it's censorship in the worst way. Send me an email via dreamerv3@programmer.net - this kernel refactoring sounds interesting.

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

Post by thorax » Sat May 31, 2003 4:30 am

Money_YaY! wrote:aaaa

Esssiee was doing some of that stuf.
And breve AI is another ...


bblah blas
^v^
It would have been easier just to wish for a trillion dollars..

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Sat May 31, 2003 11:07 am

I'm having some trouble grasping this talk about OpenGL 2... I read up a bit on it, and admittedly it sounds very powerful.

But this is still just intended for realtime display, right? And the features depend a lot on what kind of graphics hardware is installed on the system used.

I'm simply a bit worried that the intention is to try to use OGL2 as a replacement for the internal renderer (not the realtime rasterizer, but the 'real' renderer). Just because OGL2 supports 'amazing feature X' doesn't necessarily mean that the hardware supports that feature. So NGB would only be cross-platform, but not cross-hardware. So in a future manual, one could read "Blender supports reflections over bumpmaps, but only if you have one of the following graphics adapters - - -"

If this is what this is about - then stop it! :D No, seriously, but it would be just as stupid as delegating animation frame rendering to DirectX or something.

And furthermore - any shader development would be restricted to whatever features are available in the OGL Shading Language. That's so stupid that I'm convinced I've misunderstood this discussion.

If I have in fact misunderstood, then there is still one more issue. If moving to OGL 2 means that we get better and more accurate previews of the final render - then that's a very good thing indeed. It's very important to be able to make quick preview animations before cooking the real thing. BUT if it means that we will get a UI jam-packed with useless "cool" effects cluttering up the screen - then that's pure evil and must be avoided.

Just look at the industry-preferred animation package #1 - Maya. Does it have a cool-looking interface? It does not. It's outright boring. But it's highly functional (at least for those who use it professionally on a day-by-day basis and have not been previously exposed to years of Blender usage).

What I'm trying to say is that the look of the application is NOT important. The functionality and ease of use IS important. The application is supposed to be the tool the artist uses to make the final product look cool/elegant/hot/whatever. The final product is the image or animation that the artist makes.

A coder-only person will of course feel that the application is his/her final product, but the focus should still be stability, clarity, usability and functionality - not appearance.

Summary: At the end of the day, the look of the application UI does not make anyone happy. But if the application was helpful in the making of an image or animation - that is what makes users happy.

Basically it would be very sad if we, the future NGB-users, end up saying this every day: "Yeah, well, my animation is just crap, but the Blender interface sure looks cool!"

xype
Posts: 127
Joined: Tue Oct 15, 2002 10:36 pm

Post by xype » Sat May 31, 2003 11:35 am

From what I know, OpenGL features are not dependent on the hardware; if something is not implemented in hardware, it has a software fallback, which means it's a bit slower, but it still works (just like OpenGL 1).

So basically OpenGL 2 is much like a shader language for graphics cards, and if your card does not handle it, it's CPU-calculated. It's likely that some older graphics cards won't come with an OpenGL 2.0 driver, but I guess that's not really important if one is using Verse, since one could simply re-write the UI in OpenGL 1.0 for editing.

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Sat May 31, 2003 2:50 pm

dreamerv3 wrote: I want to build buildings with design principles, not atoms; even though I can invent new metals using the atomic construction method, I don't have 20 years to spend on a building.

Sheesh, there are such things as editors. If what you envision isn't contained loosely or tightly within a certain metaphor, then go ahead and add content and assign it to a design philosophy/context. You do the same thing every day when you talk to people about what inspired your work; now tell the app what inspired your template, and it will keep it while recognizing it as unique and analyzable.

If you don't like what I'm saying then fine, I'll keep on my way and see where it leads; maybe it will yield some cool results. But before it does, it will need the same basic 3D and other tools that any good app for DCC needs, and I'll contribute to those too.
It now sounds more reasonable without the emphasis on that template/preset thing. If you see it as a kind of network-transparent template library manager and don't mean it as a main tool, then it does sound good. By network-transparent I mean that it can manage objects (all kinds of objects: meshes, textures, ...) that are stored locally or on the LAN, and collections which are stored somewhere on the internet. That would be a nice feature - no doubt.

But don't put too much hope into it: search engines are only as good as the descriptions are. A piece of work might be unique and analyzable, but the description is not. There is the human factor again - such things depend on personal experience, mood, language, etc.
I think I like thorax's idea of refactoring the Blender code, so I'm going to look through it and see where parts can be extracted into modules and then halved or otherwise generalized, and what needs to be rewritten. I'm going to use C++, which is very compatible with C, albeit I won't use C.
That also makes sense to me.
As far as network-based stuff goes, I've been arguing that the main tools and the app need to stay together, both language-wise and architecturally (at least internally; you can have scripting plugins and an interpreter for Java, Python, Perl, etc.)
Well, yes. I would also suggest keeping most of the application in one language, and because C++ might now be the most common language for coding applications, I think that would be only logical.

On the other hand, it should also be possible to create tools even with other languages (I don't mean plain scripting). But that shouldn't be a problem at all. People could still create the language bindings they want if the design is good enough.
While some recognize the merits of this argument (everyone can read from the same codebase, as opposed to divide and conquer), some see other merits as more important (the ability to pick a character's teeth in Pascal, for some unknown reason). Fine, to each his own; I'm just happy Blender is written in C and C++ and not something like Lisp.
Hehe! How true! :D

As far as someone saying Erlang works fine on their Windows box: it worked on my Windows box too, but I never did get it to run in Linux, while Java runs like a dream in Linux, most likely due to Sun's installation support. Sprint doesn't seem to care much for Erlang support on Linux, and I don't blame them. C++ is good enough for both me and most of the rest of the developers out there.
I haven't tried Erlang under Linux yet. Perhaps I should give it a try. And full ACK for the C++ thing - I see it the same way.
I'll be setting up a web site soon with a design roadmap and start getting to work.

This is going to be fun; I've never put together a 3D app before, although I've programmed some logic for games. I guess the kernel is the toughest part. By the time the Blender descendants come about, I'll know as much as I should about 3D app design and implementation.
Yep.

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Sat May 31, 2003 3:28 pm

xype wrote: - - - basically OpenGL 2 is much like a shader language for graphics cards. And if your card does not handle it, it's CPU-calculated. - - -
Fair enough. What about the rendering thing, then? Are there actual plans to scrap the idea of an internal renderer and simply use only what OGL2 can provide?

Upside: One could render out animations at a blazingly high speed.

Downside: Just about everything else except for the speed-issue.

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Sat May 31, 2003 5:55 pm

Well James, if you're following the modularity school of thought, then there will be no internal renderer. The good part is that there will probably be a default renderer, which you might be referring to in the same context. It's the one that comes with Blender.

However, as far as OpenGL 2 is concerned, I'm afraid you've misunderstood the breadth and depth of GL 2. GL 2 is not just a rendering API; it will also have vertex program functionality and memory management. This means that a significant portion of the deformations and other vertex-based effects - such as the animation of water surfaces, IK solvers and skin deformation systems - will all be accelerated in hardware. Whose hardware?

Whoever supports GL 2 by writing a GL 2 compliant driver for their card(s) will be able to accelerate applications which make calls to GL 2 functionality.

Nvidia, ATI and 3Dlabs will all fully support GL 2. If a manufacturer does not support GL 2, you can always install and use Mesa, which is an OpenGL clone in software. I do not doubt that there will be a Mesa mirror of OpenGL 2.

You won't be "previewing" the image in order to better "cook up" the final render - get those thoughts out of your head for now, because GL 2 will BE the final render; it's that good-looking. You move a light, and the final render reflects the change in realtime.

It's not about feature "x", it's about killing the "please wait" message.

kAinStein: I'm not talking about "make pretty" buttons, but I am talking about the next best thing: "make ready for the user to make pretty" buttons.

We waste so much time doing things that shouldn't have to be repeated over and over. The computer should know how the sun looks at 5:30am in December on a clear to partly cloudy day. Don't like the cloud pattern? Right-click, and a radial menu pops up where the mouse cursor was located, giving you the shortest possible distance to travel to any item inside of it. Select Scene > Weather > Sky, then change the options. If you want, you can save your scheme of {cloud and sky and sun pattern} as "yourname_sky_effect#32". Now that unique confluence of factors is searchable, because you assign emotional values to the effect; you do this because the program asks you for the emotional tags when you alter the values for the features in the tool.

See where I'm going with this? The computer isn't deciding what IT thinks is good; it is giving you what you want to see.

If a director tells a gaffer to cool down the lighting, the gaffer knows what a cooler lighting setup is; the gaffer then adjusts the filters and lights as necessary. The gaffer doesn't go and heat up the lighting, or just run all the lights blue with filters. I'm proposing the same.

There should be a standard for models of 3D data - certain tags, connect points, things of this nature. I'm not talking about formats; I'm talking about a standard of detail, a standard of quality or type.

It's a subtle idea; you have to think about it for a second, because it has to avoid the pitfalls of automation in bygone days. The good news is that we have all the bad examples to show us what we shouldn't do. The Poser people learned from their mistakes (as far as people go, anyway; the other static models should be more lively, like the people are) and now have something very cool.

You should look into it if only for model creation.

Now the problem running through my mind is this:

A huge library of data to be arranged in all sorts of ways? Or algorithms that take base types of data and modify them to create variation? I have to tell you I'm in favor of both, but there has to be a balance. If the algorithms don't know how to modify the base types enough, or in the right ways, to present a different look - in short, if they cannot abstract far enough away from the template to create original concepts - then better algorithms are needed. Or a way to take different base objects and blend them into a hybrid on which the modification algorithms can operate to create variation. I like this a whole lot better than a 50 terabyte 3D object library; that would be cookie-cutter, because the data would be static and the objects would be chosen from a library and used verbatim. That's bad, but if the actual structure of the objects is changed, then you can have something.

For example: in Poser 2, if you could have clicked on an EYE on the face of a character - the whole eye indentation in the skull and all the parts of it, like the lashes and lids and whatever - and moved it around on the face to a new location, maybe even down the length of the body to the knees where the kneecaps would be, then you could have variation. An algorithm could modify the location of the facial features by all sorts of factors of influence, and voilà, faces become easy to make. This has already been done and is in use in 3D Studio and other apps as plugins.

It needs to be open source and integrated with this emotional/stylistic/paradigm-oriented typing system. Do you know what neoclassical American houses from the 1980s era looked like? I just transmitted a rather detailed template for houses to you in a single line with very little effort. Now the mutations must begin. It's a different way of dealing with content creation; it's higher level. The trick is keeping control where it must be, and not where it wastes time and effort.

Here: http://www.cmhpf.org/kids/Guideboox/OldHouseGuide.html

This page in a nutshell (nutshells are fast to process mentally) tells me what different styles of houses LOOK LIKE. Now I can take the components of the houses - the pillars, the walls, the texture and shading templates (vines on an outside wall, see the Italianate house, are a particle system and a bunch of shaders, or actual geometry in the form of a model, but the particle system can be more variable) - and store them in a base type library. What would an Italianate/Greek revival hybrid look like? (You can specify blend presets, such as the amount of trim from each base type to include, the amount of support structures, physical cohesion, dimensioning.) It's like controlling evolution.

Find a specimen you like? Here comes the program to ask you why you like it. Colors? Amount of "x"? It collects your emotional responses to it in order to tag it and store the preset as directions on how to make user "X"'s emotional-value (aggressive) or (minimalist) object. One man's trash is another's treasure, so it would be cool if the program could compare its emotional collection with those of others, thereby finding similarities and differences.

If the link goes down, I can keep on creating and just lose my "focus group" functionality for the moment. I can tolerate that. (Lead-in to why I think important subsystems should sit in the same box.)

What groups of people think about "x scheme" - that's possibly a facet of the networking. As I said, as far as what I want to do goes, maybe Verse could do some interesting things; we'll have to see how it pans out as far as integration.

Starting to understand?

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Sat May 31, 2003 8:21 pm

Sure, well, OK... You seem to have envisioned all of this in great detail, and that's a good thing of course.

And I'm with you about modularity breaking the idea of an INTERNAL renderer. That's only natural. I'll change that to a default, standard renderer that the user invokes in a totally transparent fashion.

But still, if one wants to have OGL 2 perform the rendering - can it do anything? For instance, if I like the way Mental Ray, Brazil or Messiah calculate images, can I make my graphics adapter perform like that using the appropriate system calls?

I think not. So my point is that it would not be possible to integrate anything new into the rendering pipeline - it would have to wait until OpenGL 3.0 is released, hopefully with support for whatever we wanted it to do.

[think think think...] Or does this simply mean that some things that OGL 2 can handle, we let it handle - and anything else is precalculated in the "not-internal rendering preprocessor", translated into something that OGL 2 can in fact do (even if that means setting each individual pixel, just like an ordinary renderer), and taken from there?

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Sat May 31, 2003 9:10 pm

AAAAaaannnd: I must add (I cannot drop this, can I? :)

About OGL 2 again - does it support any of the following? :::

Subsurface scattering
Absorption
Ambient occlusion
Chromatic dispersion
Deep shadowmaps
Global illumination using Monte Carlo Irradiance
IBL using HDR-images
Caustics
Caustics with media interaction
Anisotropic BRDFs
True displacement
Volumetric shaders
Depth of field with different lens models

.... and so on. Basically, any cutting-edge rendering technology that we could add support for in a software renderer.

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Sun Jun 01, 2003 6:40 am

Half the stuff you're talking about doesn't even concern OpenGL 2 specifically; about 80% of those items on your list can be handled via algorithms which hand off the specific computational tasks to OpenGL. For example: GL 2 can render custom lighting models, which means you can write lighting models that mimic global illumination. I didn't say they perform each computational step, but they can mimic the result. The M$ drones tackled that one a year ago via DirectX 9's programmable features.
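The "mimic, don't compute" point can be made concrete. A classic cheap fake for bounced sky light is a hemisphere ambient term added to Lambert diffuse - the kind of per-pixel function a programmable shader would evaluate. This is a plain C++ stand-in for illustration, with made-up constants, not real GLSL or any specific lighting model from the thread:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambert diffuse plus a hemisphere term that fakes sky bounce light:
// surfaces facing up (normal.y near 1) receive more "ambient sky".
// 'sky' is an invented tuning constant. Inputs assumed normalized.
float shade(Vec3 normal, Vec3 light_dir, float sky = 0.25f) {
    float diffuse = std::max(0.0f, dot(normal, light_dir));
    float hemi = sky * (0.5f + 0.5f * normal.y);
    return std::min(1.0f, diffuse + hemi);
}
```

No light transport is simulated anywhere; the function just produces a result that reads as "softly lit from the sky", which is exactly the mimicry being described.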

Subsurface scattering - again, that's yet another shader. It's up to the shader monkeys to write these; it's all about the look of SSS, and that can be approximated.

Real deformation can quickly and easily be handled via a vertex program, along with skin deformation and liquid surface simulation. (I said surface simulation, not fluid dynamics computation - although if you rack your brain long enough, I'm sure there's a way to piggyback the process into a series of vertex shaders, if not one or two.)
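What a vertex program for an animated water surface boils down to is a per-vertex displacement as a function of position and time. Here is a CPU stand-in in C++ for illustration (the wave amplitude and frequencies are arbitrary made-up values); on hardware, the same function would run per vertex inside the shader:

```cpp
#include <cmath>

struct Vertex { float x, y, z; };

// Travelling sine wave along x: the kind of closed-form displacement a
// GL 2 vertex program could evaluate per vertex, no CPU mesh edit needed.
// Constants (amplitude 0.1, frequencies 4 and 2) are purely illustrative.
Vertex displace(Vertex v, float t) {
    v.z += 0.1f * std::sin(4.0f * v.x + 2.0f * t);
    return v;
}
```

Because the displacement depends only on the input vertex and a time uniform, every vertex is independent - which is precisely why this class of effect maps so well onto vertex hardware.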

This is not like the fixed-function pipeline of the past. Programmable graphics means you have to INVENT a way to perform the tricks in the hardware; fortunately, GL 2 will let you INVENT ways to perform the tricks in GL 2, as opposed to hardware coding "to the metal", as it's called.

Caustics can be a shader which works via various objects' relationships to one another in space, according to their material properties - although I'd think the CPU should do some of the work here, as far as the spatial calculations go, and then hand that off to the shader.

Shadow mapping is supported in hardware now, and so are volumetrics (albeit they're still clicking along); by the next iteration of 3D cards this should be a bit faster.

Nvidia had an OpenGL 1.3 demo with volumetric lights and shadows running on a GeForce 3 at SIGGRAPH 2000; when I saw it in the booth, the people walking down the side stopped to see who yelled "THAT'S AWESOME!"

Lens aberrations can simply be 2D filters routed to the output of a particular camera; there are other ways to handle that. The PS2 has some nice hacks in Jak and Daxter which involve the torches and the "heated wavy air" effect throughout the game, and the PS2 is the equivalent of a TNT2 with vertex programs - it's THAT old.

Motion blur is a walk in the accumulation buffer; depth of field, same deal.

Stop thinking "Does GL 2 support X feature?" - that's like saying "Do the C and C++ languages support 3D graphics programs?" Well, yes, and you have to do the work to implement them too. Same thing with GL 2: it does many things, but now it lets you INVENT new ways to do even cooler stuff. Will someone shoehorn a raytracer into a vertex program one day? That would be a hell of a feat, wouldn't it?

I had an idea about that a while back; it involved some hardware-accelerated particle systems and a shader that loved them. Don't know how well it would work, but it would be fun to try.

If you don't like the "look" of the GL 2 rendering system, then you should surely be able to re-route your data to another rendering system. Simple as that; that's the beauty of modularity. Me? I'm happy with realtime or sub-realtime GL 2; the possibilities are just too exciting. Imagine 5 million polygon scenes flying by at 12-15 fps; add another 5 million and get super-detailed scenes rendering at a paltry 0.5 to 1 fps - that beats the socks off a software renderer.

It's not just "preview" anymore...

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Sun Jun 01, 2003 9:39 am

dreamerv3 wrote: Stop thinking "Does GL 2 support X feature?" - that's like saying "Do the C and C++ languages support 3D graphics programs?"
Well, if that's a fair comparison, then I suppose it really is a good idea.

xype
Posts: 127
Joined: Tue Oct 15, 2002 10:36 pm

Post by xype » Tue Jun 03, 2003 9:14 am

Jamesk wrote:Upside: One could render out animations at a blazingly high speed.
Downside: Just about everything else except for the speed-issue.


Uhm, well, maybe you should put more trust in people like Ton and the Blender core (coders) group to make decisions that are good for Blender's future and not simply because it sounds cool. :wink:

ton
Site Admin
Posts: 350
Joined: Wed Oct 16, 2002 12:13 am
Contact:

Post by ton » Tue Jun 03, 2003 12:40 pm

Uhm, well, maybe you should put more trust in people like Ton and the Blender core (coders) group to make decisions that are good for Blender's future and not simply because it sounds cool.
Don't trust me! I am renowned for making weird decisions!

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Tue Jun 03, 2003 11:17 pm

xype wrote:Uhm, well, maybe you should put more trust in people like Ton and the Blender core (coders) group to make decisions that are good for Blender's future and not simply because it sounds cool. :wink:
Fair enough. I'll just shut up and get back to modeling, then! :D

Post Reply