N.G.B or Future 3D technology DISCUSSION (We need one)

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

N.G.B or Future 3D technology DISCUSSION (We need one)

Post by dreamerv3 » Thu May 29, 2003 9:43 pm

Hello, and welcome to this discussion thread. The reason I'm posting this is to start an open exchange of ideas within the community regarding the next iteration of Blender, or the derivative 3D app or technology we want to succeed Blender.


The first question to ask in a general sense is:

What do we want?

What I mean to say is, what are our ends for this technology?

Do we want to model all day and never texture and shade or do we want to simply animate and never wonder about modelling?

Do we want to texture all day long and never venture into the realms of the other two processes?

Do we want physics simulation?

And if so, to what degree? Do we want to be able to analyse the structure of our objects in this application? (Which should probably start being referred to as a collection of smaller modules or submodules.)


What is it we want? Because the INTENT of our ends, indeed the ends themselves, will steer the design and architecture of the 3D app.

I for one think we should have a multifaceted 3D application, one which can handle 3D motion pictures as well as CAD and dynamics simulation.

Does everyone agree that we need a purpose, or a core set of goals, before setting out on this project? We need some targets that can be readily quantified, weighed and estimated in terms of developer time and technology strategy.

Here is what I've come up with, please feel free to submit your own ends so that we may better steer the design and development of the next blender.

I think that most 3D artists start out with a 3D app in an attempt to create, as an end product... animation. Not bouncing balls or line drawings. There is a clear path of succession of skills which points to the holy grail of 3D studies, as far as digital content creation is concerned.

We want to make movies, and furthermore we want to make good movies.

Animation Master proves we want to make movies; 3D Studio and Maya both prove we will shell out huge quantities of money and people to make 3D movies...

The applications, in order of industry revenue or importance (1 being the primary goal, the rest peripheral):

1: 3D movies/stories/animation -- finished, polished animation

2: Games

3: Design (products, ideas, simulations)

4: Communications (user interfaces, navigation, 3D depiction of reality, virtual reality)

Each one of these applications has different dynamics, different variables, different needs.

Each one of them also has similar internals, similar components. They are all 3D, and thus require or partake in the same 3D creation, material assignment, rasterization process.

How many of you, after learning how to model, how to texture and how to animate, would NOT start creating animation that suits your fancy, or look for work in the same industry?

yes?

How many of you get stuck on the way to the finished animation, in one stage or another of the 3D animation process?

another yes?

Maybe it's the modelling, maybe it's the animation, maybe it's the damn texturing.

We have posts for each process at major studios. 3D animation is being micromanaged on a human level.

You have to start thinking: when movies are made, one person does not do everything. How can we expect the same person to make a 3D movie without the equivalent help of many people? The answer is that we cannot.

Each person has a job, a process, an algorithm (or several) that they get paid to perform.

Why then can we not make a program to perform the same algorithms?

The answer is that we can.

Why can't we make a concerted effort to shift the work onto the CPU, which does hard work very quickly? CPUs can render; they can construct models (if we give them a context of what we want); they can set the lighting, repeat and modify animation cycles millions of times, and make scenes with variation. They can do all these things because we know how to do them, and because we know how, we can express them as a process. Because we can express them, a compiler can turn them into speeding bits of binary code.

This cannot be a monolithic program.

Who saw the second matrix film? If you have not I highly advise you to do so.

In the middle of the film Neo is chatting with the lady called the Oracle in the park.

She refers to a bunch of birds in the park, crows...

She says "...at one point a program was written to govern them..." to govern their behavior... "...There are programs looking over all sorts of things, from the sunrise to the weather to the color of the sky... If a program is doing its job, you'd never even know it was there; they are invisible..."

Visual effects artists do their jobs on major films; they are invisible, you'd never even know they were there. Likewise, you would never know the set construction crew was there, or the gaffer, or the director of photography or cinematographer...

Eskil talks about individual programs linked via the internet; Maya has micromodules that do one thing very well, called nodes...

These are the jobs of people. What's missing is a program that knows blue is a cool color, a program or collection of programs that know what Venice looked like in 1624 AD. They'll know because we told them.

Just like a student knows to follow a certain school of design because they are brought up to graduate in that atmosphere, now a program or a database can hold the information, and an intelligent program can look for parts that "fit": "The sky is blue except when it is cloudy or nighttime or (____) or (____); it is cloudy when I say 'give me (_____) weather'."

Just think about it. The designers of Shrek didn't model grass, they grew it; they didn't model trees, they grew them...

Digital trees for a digital world. Digital buildings.

How are we going to put the power of 500 people in the hands of one person?

How can we do this while:

1.) Minimizing latency, both cross-module and system-wide

2.) Maximizing stability and access to data AT ALL TIMES

3.) Speed. We want speed; realtime rendering IS NOT FAST ENOUGH (realtime = 24/30 fps)

4.) Ensuring developability: this application should be easy for programmers to add onto

5.) Ensuring maintainability: this isn't the house where a thousand languages live.
We NEED to understand not just ourselves but the developer in Australia who wrote
that SWEET lighting management engine. And we need to understand his code
intimately.

6.) It should be as separate from the GUI and interface as possible. (We all know why.)

7.) It should have the internet as a design factor; it should have an internet facility.
Perhaps we should start talking about base object type libraries; maybe massive
content repositories, or just base modifiable types, are a good idea. It should have an
ability to check out data from the city's library and to check in more data.

8.) OPEN SOURCE!!! The layman and his ideas have been monetarily locked out of
cutting-edge 3D technology; no more ($1 * 10^whatever number of zeros) licenses.

9.) It should be as easy as talking to people. The time it takes to build or refer to a
city, use it and then film your scene should be on the order of at least half or a quarter
of the time it takes to achieve its Hollywood counterpart.

10.) The modules should talk in a way that is applicable to many industries so the physics engine can also perform crash tests of vehicle designs, in addition to say controlling the ocean for a beach scene.

11.) There should be a management system so we humans don't have to micromanage the process. Does a director micromanage a film? To an extent he/she does, but the director does not model or build sets.

12.) You should be able to build a table by choosing the components of a table, not the polygons. Therefore we should have a component editor, so that we may build basic components and then TAG them ("this side connects to X of X"). You could have an Audi with legs where the wheels are, and they should be made of wrought iron, a bit rusty, and be of type "bengal tiger paw".

13.) Animation can and should be turned into cycles, and we should have behaviors, which are high-level descriptions of groups of actions, which are in turn groups of cycles.

This is a start.

What story would you like to tell?
Last edited by dreamerv3 on Thu May 29, 2003 9:59 pm, edited 1 time in total.

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Thu May 29, 2003 9:45 pm

In response to a verse discussion a bit back, I read up on Complete Maya Programming (the book)...

I read up on the Maya architecture, and I think this is an abstracted version of it: you have nodes (very basic 3D or otherwise tools)
which connect to other nodes (other basic 3D or otherwise tools).

So you end up with a pipe which takes data in one end, and through this pipeline of interconnected tools (kind of like a conveyor belt) you get modified data out the other end, which can be fed back into any other pipeline.

This leads me to think that this architecture from Maya was designed to make disparate apps work together. E.g.: the apps most Hollywood studios use are standalone executables (they were .exe's anyway; they run on Linux now), and the studios developed this "pipeline" concept to link them together; Maya is a refinement of that.

verse is a generalized abstraction of this...

It's just like plugins, but now the plugins are standalone apps and they are linked via this protocol.

"deep breath"

Is this what we need? and why?

Is this the best way to go about fulfilling our needs?

The part I don't like is the internet concept. Please excuse my thinking, however I do not think we need a "next-generation VRML, or better stated --VRMP-- Virtual Reality Markup Protocol" living in and through the premier open source 3D app.

I love the idea of a 3D online community, but that's a peripheral concern.


In a given 3D application, modules (nodes, or better stated, micromodules) should sit on a local system and talk to each other through an internal networking scheme.

This way, if some terror attack (cyber or physical), new Code (insert color) virus, or other networking problem occurs, then the modeller can still talk to the rest of the application. In practical terms, I don't want to get tied up looking for verse servers to connect to in order to get my application to work; I don't want my functionality, or access to that functionality, to sit on a server in Sicily (Italy) while my data is in Detroit (Michigan, USA).

Think about it people...

The modules or micromodules (nodes) would do well to be coded in the SAME language with the SAME coding standards, using abstraction concepts that keep logic, interface and data all separate. But they would all sit on the same machine... See above :)

Why? Because then the coders will be forced (yes, forced, or at least strongly persuaded) to write the application parts in a similar fashion, so when the application receives new coders, they can learn and get up to speed quickly.

When one coder or team gets old and dies

"Oh no they killed the rendering team, "you bastards!" "

Or otherwise is unable to keep on working.

Then others can understand and maintain their work, because they wrote the code in harmony with everyone else. They didn't go off and write the most popular renderer in Erlang... (why oh why was Wings written in Erlang???)

These are valid logistical issues; if we start to speak in tongues, then we'll have opened a Pandora's box of complexity we don't want...

Let's say you decide to write an app in modules, or in a "local to the machine" network-based system like Maya.

If you really want to, you can write a linking client to talk to other systems...

Why would you decentralize so radically and destabilize an application framework so radically?

There is an underlying motivation here, to create 3d worlds online?

Is that what we all want?

It seems verse is geared towards online 3D world creation and maintenance. Thus it relies on, and is based on, networking.

But is this the best way to go about constructing a 3D application infrastructure designed for, for example:

3D animation, CAD, 3D physics simulation for industrial uses, 3D mock-up and simulation technology?

If you really want to diversify the languages a 3D framework can use, why not go the route of multiple scripting interpreters: Python, Perl, Java?

Verse is an answer to a problem we (as a community) have not really discussed, and therefore do not really understand.

aphex
Posts: 0
Joined: Thu Mar 27, 2003 12:40 pm

Post by aphex » Thu May 29, 2003 10:40 pm

Wow. Great discussion here!

The feeling I'm getting from your first post is that you're taking a lot of creativity out of the process of film making. Don't forget that 3D is an art form at its most basic level, and if you start making it too "mechanical" you'll end up with VERY bland output. Just look at Poser! ;) heh

At the end of the day, computers can't think for themselves. They can't innovate -- hell, don't even think about aesthetics! :wink:

It's the old adage: "You get out what you put in".

Just some thoughts to get this discussion rolling! :)

aphex

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

Post by thorax » Fri May 30, 2003 1:45 am

We need objects..

What I would do:

Take all of Blender and make it an object, initially..
That way data is with code.. Then extract major
parts of the codebase like materials, textures, objects,
etc.. Allow blend files to be loaded into the Blender object,
but allow the Blender object to farm out work to the material,
texture, and suchlike objects.. Until the objects outside of the Blender
object are working, make it so that the Blender object
is the main controlling code for the other objects..

This is just an idea of how to incrementally shift to an object
oriented design without a complete refactoring of Blender..

Why objects?

I think for some programmers it's pretty obvious, considering game Blender is in C++ and the rest of Blender is in C, go figure.. This is
what NaN was really working on, not so much Blender's animation
engine.. For the non-programmers: objects are programs with data;
they are like libraries that maintain their own data.. It allows
for stuff to be treated like self-contained things, or objects.

In everyday life:
Laptops are objects: they have internal memory, they can run an
operating system.. But I can use a laptop oblivious to the details, because
I have a screen and a keyboard..

A fan is an object: it has a motor and electrical cables. It may be simple, but should I have to risk getting shocked to use it? No, it's in a package that makes it easy to work. It has an off/low/medium/high switch and a swivel base, so I can turn it on high and swivel it in any direction..

A non-object-oriented world would be like having to hot-wire my
car every time I want to go somewhere.. And having to know something
about binary math to work my microwave oven.. There is no sense in it;
why should software be the same way?

The idea of objects is to allow the objects to maintain their own
data; that data is always maintained/accessed/manipulated by something that knows more about it than the user.. If we use a procedurally biased
language approach (C), we must know which functions work on which data, and we must add more parameters to functions to allow passing data in and out. And how can we tell if the data will be modified? Where does the result come out of the function? Is it predictable how to use the functions? Is there any way we can guard against misinterpretation of the controls to the function?

An approach to object-oriented design in a language like C
might look like this (let's take a NURBS surface):

NURBSSurface *aa;
int u, v;
int n = 0;                       /* index of the vertex to move */
float x = 0.0f, y = 0.0f, z = 0.0f;

u = DEFAULT_ISOPARMS_FOR_U;
v = DEFAULT_ISOPARMS_FOR_V;
aa = (NURBSSurface *) malloc(sizeof(NURBSSurface));
initializeNURBS(aa, u, v);
modifyvertex((void *) aa, n, x, y, z); /* move the nth vertex to (x, y, z) */

Description:
We declare a NURBSSurface pointer, then assign it memory of a static size (the structure contains memory pointers to the actual data). We initialize the data structure, assigning its internal pointers to sub-structures and giving the surface the default number of isoparms. We also must tag the data structure as being a NURBS surface, by making the first byte equal to the code for a NURBS surface; this is in case the structure is cast as a void pointer. We then use a general function for modifying the point data; to use the general function, however, we must cast the data structure to a void pointer to avoid type mismatch errors..
modifyvertex() checks the first byte of the data structure to recognize that it is a NURBS surface (this was set when we initialized the surface), and in the modifyvertex code we must include code for modifying the vertices of polygons, surfaces, lattices, etc., each dependent on the value of that first byte.. Note: any feature added to the NURBS surface must have code in the initializer
and in the modifyvertex code..

Okay how would you do objects in C++?


NURBSObject aa; // create an object aa of type NURBS; the constructor initializes it to the default U,V isoparms and allocates all necessary substructures..
aa.modifyvertex(n, x, y, z); // move the nth vertex of the NURBS surface to (x, y, z)..

Description:
We create a NURBS object aa..
We modify a vertex of the NURBS surface..


See how easy that is??

Now imagine what would happen if any of the steps above were left out. First, with the C code:

NURBSSurface *aa; // if we leave this out, we get an "undefined variable aa" error later when memory is allocated to it..

int u,v; // same as above, but for u and v..

u = DEFAULT_ISOPARMS_FOR_U; // if this is missing, this may crash the machine later, especially if u is not zero upon creation; it might be set to MAX_INT, who knows?

v = DEFAULT_ISOPARMS_FOR_V; // same stuff, different variable..

aa = (NURBSSurface *) malloc(sizeof(NURBSSurface)); // if we left this out, aa is only pointing to NULL (if the compiler is good), or to an undefined absolute memory location somewhere in the computer (if the compiler is bad)..

initializeNURBS(aa,u,v); // note: if aa is null and this function does not check for that, it will most likely result in a core dump. If this function does check for a null pointer, but the compiler doesn't assign the pointer a null upon creation, then the function crashes anyway, and it's tough to trace unless the programmer looks hard at his code and notices that the pointer was never allocated anything..

modifyvertex((void *) aa,n,x,y,z); // if we leave this out, of course we don't get what we want.. Also, if aa is not defined and not checked to be null, this will also result in a core dump..

Now for the object oriented code:

NURBSObject aa; // if you leave this out, the code below just results in an "undefined variable aa" error.
aa.modifyvertex(n,x,y,z); // if you leave this out, we don't get what we want..

So we can avoid core dumps? Yeah!


Now Eskil continues to back C. Why, I ask..


It's a good time to discuss what we want in Blender, but to really get
what we want, we need to think hard about what architecture will
allow us to get it.. Those in favor of C say AYE; those in favor of
C++, or some object-oriented language that is C-compatible.. say ME!!

Now, to cover the ifs, ands and buts of the programmers who use C:
yes, I could core dump in the C++ too, but remember all the maintaining
code is scoped to the object.. I can put wrapper code in the object to trace usage of the methods.. I can inherit from a general object
that relates polygons, surfaces, lattices, etc.. This allows me to
unify initialization code and to enforce assumptions about the object. Can I guarantee in C that my object, if ever used, will be initialized? Can I assume proper values for variables, and write methods without
having to write specialized code that checks for the object type? Can I guarantee that the methods will not work on the wrong data? If I add a feature to an object, can I easily find all functions and conditionals that
need to check for this new feature in the data? Without being able to make such assumptions about my objects in C, I am left to walk on egg-shells, thinking about the global state, my local state, the state of the data I'm working on, the idiosyncrasies of the functions I use on the data,
and so on.. Effectively, to keep Blender going, everyone would have to be as knowledgeable as Ton has had to become with Blender, because features added somewhere affect the assumptions made elsewhere,
and if you don't account for everything you can bet a bug, or worse a core dump, will appear..

Note: ways to recognize bad coding skills or programming in C:
- relies on cut/copy/paste when unifying features across the codebase..
- relies on global variables to track interface state
- functions written to verify data integrity at every step of a process
- tons of DEBUG statements everywhere
- use of void pointers
- conditionals everywhere redundantly checking integrity of data by doing range checking..
- anonymous code that is not quite understood, but continues to exist because it works
- failure to modularize code and consider scope of variables

I'm sure some of you can think of more..

Bad programming in Java/C++:

- use of objects as libraries (a result of not fully understanding OOP)
- use of interfaces (a nice approach for doing standards, but if the standards are not understood it can make the objects hard to read, especially if the code assumes the existence of methods that are not defined in the object)
- considering the above, failure to unify methods into an object, sometimes a result of making a separate class of objects for every new feature added -- an overuse of inheritance..
- making everything an object, no matter how mundane in detail
- not considering design before coding (it's impossible to hack in an object oriented language; C++ is easier to hack in than Java)
- trying to hack objects (it can work, but why not work in C?)


Easy ways to hang yourself:
Disregard the design and hack to heart's content, thinking up
new ways to code stuff all the while.. By the time you have something working, you will realize that pieces of it don't fit together very well, and you end up being primary technical support for the result..

Or: design out the wazoo, without ever doing any test coding to check the feasibility of a design (exploratory coding, hacking a prototype).
However, your prototype should NEVER be your target architecture,
because the things you did not consider, others will notice later, and you become the butt of every joke from then on..

There is a balance.. Normally when designing a program you
might do things in this order:

Brainstorm - come up with the perfect concept of the application

Pseudocode/Graph - imagine how the code might look for this; determine object relationships..

Select an architecture - pick the libraries and language that look best for the design..

Exploratory coding - test feasibility of certain details that are tough to imagine.

Rethink the design - look at the graphs/pseudocode and determine how what is known of the architecture will affect the design.. Look at the needs and determine how closely they can be met; put the needs that can't be met into a vault and save them for later, or deprecate them..

Graph the general design - get everyone to sign off on this and agree to it; it then becomes the guide book by which future decisions are made (a way to settle arguments)..

Consider implementation languages and architecture and write generalized pseudocode to fit them..

Break the problem up into easy-to-code pieces; arrange the work in a way that reduces bottlenecks during implementation.

Determine error checking code, how to test pieces individually to determine integrity and consistency of assumptions..

Consider people who can do the code pieces - give each to his skill..
Implement..
Test..

While unsuitable to release {
Fix..
Test..
}

Release..

We can't do this with Blender, really, because Blender is already
written.. But we can rule out most of the hacking/exploratory
coding.. We also don't know the state of the code..

- First we determine what we want, or what model of system works best for change..
- Read Blender's source code and determine its dependencies; collect the mysterious code and buggy code..
- Graph it out, so that developers can look at it before considering a feature addition..
- Think of a better relationship between the data and code (preferably an object oriented one).
- Describe assumptions that code can make about data and other code.
- Document the design; describe what each piece does and the assumptions that allow it to work, scale and be flexible later..
- Determine what needs to change in Blender's source to support future features..
- Break up the work into pieces..
- Create a project for every logical group of pieces and determine what is needed of the programmer to implement these pieces, if possible..
- Prioritize the groups of pieces to eliminate most bottlenecks (there is no point in adding one feature if it's dependent on another).
- Implement

(see above for test/fix loop + release)

Now there is a way to organize this so that features can be released more
frequently by determining what are major changes and what are minor changes.. A change of the data structures that are required to support
a node-pipeline model would be a major change.. Adding Maya-style sub-div surface level of detail would be a minor change..

Minor changes are made to the code as it stands.. These changes
are more easily added to the new design because we make sure they
have no dependency on a major change.. The minor changes are
farmed out to beginner programmers and programmers that are less devoted to blender's effort..

Major changes are farmed out to developers that have time
and capabilities to work on blender's code over long periods of time..
Major changes are released in large development cycles (months),
while the minor ones are released more frequently (weeks, days, hours).

The major developers need to be hidden from the users; they develop
stuff in a vacuum, not considering bugs until the major development is done.

The minor developers help users and consider what code is going to survive, adding features according to the survivability of their work in later revisions.. Bug fixes should be considered by level of severity..

When a major part is released, all existing features that are not
dependent on this code will be reworked to use the major feature..
Code that belongs to other major work may be modified, but
such changes should be given low priority based on severity of need,
as code that will knowingly be replaced will be made obsolete later (it doesn't make sense to work on code that will be removed later; not a good use of your time)..

Bug tracking software should be designed to count similar requests,
not allow replication of similar bug reports, by offering the user a keyword search system that allows them to track down existing bugs
before submitting new ones.. The current bug tracker doesn't seem to prevent this.. How many of the bug reports are repeats? Also, bugs need to be classified as system-dependent or system-independent.

Feature requests deserve a different system, one that classifies features by location in the interface or use in the program. Feature requests
should be hard to enter into the bug database, or at least
the process should be harder, so users are encouraged to
use the feature system and not the bug system..

Developers should be able to see each other and track progress,
so if there are any bottlenecks the developers can plan their time more accurately and determine when is a good time to start working on their part. Also, when code is added, the developer should document the feature addition.. The CVS system that maintains these features should be
capable, at some point, of adding comments made in CVS to the source code as comments and to implementation documents; or
the source code could contain a comment that links back to the
documentation describing the feature or the addition..

We might need a PERT chart of sorts to give all developers an overview of the progress of Blender's source developments.. Also, there should be someone able to suggest to new developers docs to read and
things to consider before making changes, considering what is known of the design.. Once we are in the implementation stages of a major option,
there should be no reconsidering the design unless the design
is massively flawed; avoid reconsidering the implementation mid-way (this is a hacker's urge, something a good coder shouldn't do; such decisions should have been worked out in the exploratory coding and design stages).

Integrity-checking code (regression testing?) is required for small
code samples, but may not be possible for large pieces of code that do combinatorially complex sets of stuff for which behaviour can't
be determined.. We may throw something like a "random tester"
at the code (also called "random monkeys").. But ultimately, the code
should be designed from the get-go to uphold as many assumptions (or statements about what we can assume about the code and its use) as feasible.. Code that acts outside the bounds of our assumptions is a bug, and if we don't consider the logic in our assumptions, we will have bugs
in our code upon implementation.. It's easier to trace semantic errors in a design phase than in an implementation phase (which results in the "multitude of debug statements everywhere" syndrome).

eskil
Posts: 140
Joined: Tue Oct 29, 2002 10:42 pm

Post by eskil » Fri May 30, 2003 3:36 am

wow we need to cut down on post sizes.

First of all, I must once and for all (like that's going to happen) say that the verse node tree is very different from how Maya works.

I have a small confession to make:

I actually have tried to avoid this topic, and I will try to explain why. I don't find it very constructive. I think it would be much better if everyone just started coding; anyone with a vision should just get started. And once someone has a good idea that people like, they should join in and start coding. So that's what I did.

You are not going to convince me that C++ is better than C, and I'm not going to convince anyone C is better, but that's OK, because I hope we can all work together. And this is what verse is about: anyone can write whatever they want using verse, using whatever language and code style. Verse is just a tiny thing to input/output data from whatever you have written.

Yeah, I have lots of ideas on what things should be like: interface, tools, functionality, object oriented tree structures and so on. But I'm going to keep them to myself, because if I want them, I think I should implement them myself and not boss everyone else around.

E

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Fri May 30, 2003 3:50 am

Thorax: ME, definitely. Actually us, since this is going to be an effort greater than an I or a me alone. I would choose C++ and go the object route.

BTW, if you're going to be @ SIGGRAPH this year, it would be cool to swap ideas.

Aphex: Check yourself. Have you seen Poser 5? The people aren't painful to look at anymore, largely in part because the program has been made smarter.

Thorax is right, everything is just like an object.

Moreover, you don't just get an automated system; I think that automating the grunt work and allowing finer control are two peas that can live in the same pod. You can have both.

You can generalize and get a city, and then zoom in and modify by metaphor, or, if you need to, in an editor or via a "film crew" tool/node. This makes sense.

You don't need the whole city, just the part where your scene will occur.

It does work, and it has been working in many of your favorite movies, namely Shrek, The Lord of the Rings, and The Matrix Reloaded (the Agent Smith vs. Neo scene).

You get out of it what you put in, and this includes programming foresight and forethought. The concept of Poser has finally been taken out of the Apple realm and into the hands of people who know the importance of variation. The first Poser had Leonardo da Vinci portraits as models, and this led to soap opera syndrome: people looked like they were leads in a soap opera.

It does work; machines execute instructions, and programming is not a static, machinery-oriented field. Had you studied it more you would know, as I have. I used to be one of the people wishing that the programming and dev teams, or the commercial execs who command them, would make an app like this: an app that takes the Poser metaphor but turns it into an intelligent application.

Then I learned that wishful thinking will get you 0 km/miles in the direction you want or need to go, so start walking. For me, walking meant a year of self-studying C++, then some college coursework; next semester should be even more fun.
Getting back to the issue of automation, I would rather you call it farming out the work to individual crew members; that's more in tune with the application design.

Blue is cool, orange is warm, red is hot; these colors MEAN something. We as artists have been PROGRAMMED to think that these colors have certain meanings. Whether we were programmed naturally by evolution or were introduced to this programming in school, it is there.

I'm simply stating the obvious: one man will never make a film, not with the current crop of tools, NEVER. Even with a complete app like this, the filmmaker has a long hard road ahead of him; he needs vocal talent and possibly a partner or two. But this application makes it POSSIBLE that a determined crew of fewer than 5 CAN make a story, in full vivid color, without waiting a lifetime for rendering (OpenGL 2+) or another lifetime for funding, because resources the computer potentially has in unlimited supply would otherwise have to be reproduced by human means.

As an artist you look up certain reference books on anatomy and lighting; as an artist you do research. A computer can look up the same references far faster and look for the same criteria, if only a human has programmed it with algorithms that search for the same criteria an artist looks for when referencing the "rule of thirds", for example, or the rules of composition. You can get a major chunk of your camera setup cut away and then only have to tweak the scene. Much of the time the tweaking will be the most time-consuming part.

But first you have to get it out of your head that for some strange reason computers cannot take input (scan a bitmap which represents a scene and then search for certain conditions), process that input, and then make a decision based on the analysis; they can do it better than you know. This is what an artist does. As a photographer I've done this many times, and as a 3d artist I've always wondered why, after 20 years of research and development, we are still creating 3d from a low-level assembly language point of view. We need an "assembly to C++" metaphor for 3d animation creation.
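The "scan and score" idea above can be made concrete with a tiny sketch. Everything in it (the function name, the scoring formula) is invented purely for illustration; it only shows that a compositional rule like the rule of thirds can be expressed as a criterion a program can evaluate:

```python
from math import hypot

# The four rule-of-thirds "power points" in normalized image coordinates.
POWER_POINTS = [(x, y) for x in (1 / 3, 2 / 3) for y in (1 / 3, 2 / 3)]

def score_composition(subject_x, subject_y):
    """Return a rough 0..1 score: 1.0 when the subject sits exactly on a
    power point, falling off with distance to the nearest one."""
    nearest = min(hypot(subject_x - px, subject_y - py)
                  for px, py in POWER_POINTS)
    # Normalize against the image diagonal for a crude 0..1 range.
    return max(0.0, 1.0 - nearest / hypot(1.0, 1.0))

print(score_composition(1 / 3, 1 / 3))  # subject dead on a power point -> 1.0
print(score_composition(0.0, 0.0))      # corner framing scores lower
```

A real module would score many criteria (headroom, leading lines, balance) and combine them, but the principle is the same: the rule becomes a number a machine can compare.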

This is it.

Thorax again: We're going to have to talk about how to refactor Blender into C++ OO and then abstract parts of it out; that's the quickest way, I guess. This concept will have to be designed around a component architecture more like Maya or 3ds Max.

You should do a post about refactoring blender and then I'll start digging into the code to look for the parts. At least the comments are readable now.

Edit(Eskil)
{
Yeah, I can understand your idea as far as "if you really want to make something, just do it", and it's a great idea to get one going; you gotta start walking sometime or else you'll never start. Verse is interesting, and after I study it some more I'll think about how it could or could not serve as an infrastructure to link the nodes together.

For the app I talked about above I think a Maya-like design would be interesting, but with a different twist. Reading about Verse has changed my mind a bit; maybe a micro-internet type of infrastructure inside the 3d application could serve as a way to easily locate and utilize the services of each node/tool/submodule. Kind of like having a DNS lookup, but in a faster, single-domain way. The more I think about it, the more sense it makes to utilize a network-inspired architecture for such an application. I'll post some ideas here for critique soon.
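The DNS-like lookup idea could be sketched roughly like this. The class and service names are hypothetical, purely to illustrate a registry where tool nodes publish services by name and other nodes resolve them:

```python
class ServiceRegistry:
    """Toy name-to-handler registry, loosely analogous to a DNS lookup."""

    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        # A node publishes a callable under a well-known name.
        self._services[name] = handler

    def lookup(self, name):
        # Resolve a service name to its handler, like a DNS query.
        try:
            return self._services[name]
        except KeyError:
            raise LookupError(f"no node provides service {name!r}")

registry = ServiceRegistry()
# A hypothetical modelling node registers a subdivide service.
registry.register("mesh.subdivide", lambda mesh: mesh + ["new vertices"])

# Any other node can locate and call it without knowing who provides it.
subdivide = registry.lookup("mesh.subdivide")
print(subdivide(["cube"]))  # -> ['cube', 'new vertices']
```

A networked version would resolve names to host/port pairs instead of in-process callables, but the lookup metaphor is the same.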

As far as your idea that we can all talk in different languages via Verse: I see the inclusive nature of the concept, but from a practical maintenance point of view it's suicide. What if a popular feature is written in a very different way and then the author goes away for some unknown or known reason? What then? What if it was written in, oh, say, Fortran? Pascal maybe? Maybe Ruby, or Smalltalk? How many Ruby and Smalltalk programmers can code in 3d? I'm not trying to attack you or Verse; I think the Verse concept is a great idea, it just needs a context. I'm terrified that the modeller will be written in some horridly undocumented language, or perhaps the rendering code will be the lucky winner to be in Java... It's like building a wooden house and then inviting termites to chew it up.

Who's going to maintain the code written in Carbon (which only runs on Macs) which has a Verse interface and kicks out some great renders?

If we all speak the same language then we can hold conversations. Verse is like an on-the-fly interpreter for us if we speak in different languages, but if I speak English, you French, and thorax Chinese, and we all take notes in our respective languages, then we need to read and maintain thorax's notes. Which one of us will learn Chinese?

How can such a glaring problem be transparent to you?
}

matt_e
Posts: 410
Joined: Mon Oct 14, 2002 4:32 am
Location: Sydney, Australia
Contact:

Post by matt_e » Fri May 30, 2003 4:41 am

Some interesting posts! I'll add in my 2c, though it maybe won't be as long as what you guys have written ;)

Dreamerv3: Computers are never going to be able to replace human creativity, imagination and aesthetic sense. While I agree that a lot can and should be automated (such as automatic grass, trees, etc. in Shrek), that's a huge leap from being able to compose a shot or something. Creating art isn't just about following rules and mental programming - it's about expression and communicating a feeling. I may not be able to put down in words or explain via a flow chart why I composed a shot or painted a picture a certain way - I just do it because it 'feels right'. It's that sort of feeling that gives art its magic and the power to move people.

The computer desktop publishing market has been around for about 20 years now and is a lot more mature than 3DCG. You won't find anyone trying to get a computer to lay out a page, though, even when things like typography, legibility, etc. are IMO much more mechanical in nature than movie making.

My philosophy is that tools like Poser and plant generators can be great for certain uses and increase efficiency a lot for some things; however, they should by no means be relied upon. These sorts of features should be designed as extra bonuses that people can use if they want to, not the main way of creating 3D (which people can then edit as a bonus). I dread the thought of all 'Blender 3' artwork having the horrible cookie-cutter sameness that you'd get from having a computer attempt to make aesthetic decisions.

I also think you guys should go easy on eskil and read what he has to say a bit more closely. He's volunteering a lot of his time and doesn't deserve to be so harshly criticised when he's giving away the products of his work freely. Eskil has mentioned many times before that Verse is a protocol that helps facilitate different applications talking to each other, with a focus on 3D graphics information. It doesn't matter what language the apps are coded in; Verse is about the communication between them. I also wouldn't be worried about 'servers going down', etc. You can still run a server and a client on the same computer to act like 'normal' software and not worry about networking. I'm not sure what the performance ramifications of this are, but I'm sure eskil has given it some thought.

It's dangerous to concentrate on 'making movies' as the target market for such a future Blender. Although the film industry gets a lot of media attention with 3D, there are other industries that work with 3D graphics that are FAR larger than the film industry, such as the 3D games industry, motion graphics/broadcast design, print design/illustration, architecture, CAD, etc. As well as these established industries, there are lots of growing future media forms like virtual reality, augmented reality, and things we haven't even conceived of yet. In designing a system for the future, it would be unwise to base that design on the priorities and needs of today (such as film). That's one reason why I really like the idea of making the inter-component communication like Verse take place over a network - the internet has already revolutionised media in the form of the web (perhaps seen as a progression from print media and video combined), and there's a huge amount of potential for all sorts of other new media that can be facilitated by networking and communications. I feel that there's a huge potential for 3D graphics to grow beyond the narrow scope that it currently occupies - it just needs the tools and opportunities to do so, and I think the idea of Verse is a step in the right direction.

eskil
Posts: 140
Joined: Tue Oct 29, 2002 10:42 pm

Post by eskil » Fri May 30, 2003 5:17 am

Broken:

Thank you for the appreciation.

dreamerv3:

I will try to answer the concerns from the later part of your post:

In my mind, Verse is not an application, and my vision of the NGB is not of a new app; it's a network of applications, which is much more chaotic and free than the traditional 3D solutions.

Some people have compared Verse to an operating system kernel, and in this case it might not be a bad comparison. Even if you write the world's best operating system, people will still write bad apps: apps that are poorly documented or maintained, require a specific setup, are written in strange languages, and basically suck. You can't blame Apple for all bad OSX apps and you can't blame Microsoft for all bad win32 apps (but some of them...). There might be many different apps trying to do the same thing, and that's OK too; no one will stop you if you want to write yet another mp3 player or browser. All you can hope is that the apps you need are available and that you are happy with them. Each user of an operating system is free to select what to use and what not to use, and I like that.

Both Microsoft and Apple have style guides, but I kind of like it when apps (like Blender) break them and try something different. The guidelines are there to help, but in the end it's up to you as a programmer to choose how to write your app, and it's up to the users to choose whether to use it or not.

E

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Fri May 30, 2003 8:06 am

Broken: Why is it that whenever someone mentions computers, this automatic assumption of the cookie-cutter analogy comes up? Movies aren't the only application for 3d, of course, but they're one of the toughest to tackle. Moreover, the tools themselves aren't geared for animation production per se; it's the MANAGEMENT of those tools that does the magic of movie production assistance.

The tools in and of themselves simply process data and return results, just like people.
To imply that there is something magical there is to imply that high-level control of a project really isn't control. And we all know where the control comes from: it comes from the top. In this app, the top would be the human user.

And as far as sameness: every medium carries restrictions with it. 2d animation, to an extent, looks similar, and 3d looks the same, even well-done 3d, for now anyway.

I always say it's the content, not the medium. South Park isn't popular because of the horrible graphics; it's the characters, the performances, the storylines. The magic lies with the human; the computer doesn't write the story, it does everything except that.

Assembly-level functions can be romanticized until we're all sick and tired of rendering stills, because we cannot animate, because we don't have the resources, because it takes too damn long, because you're writing a story on a typewriter with one finger when you could be typing with all 10. That's the difference.

That's not to say sameness is good, but this isn't even an issue, because you don't look at the buildings in a shot; you look at the communication of the shot, and that is ultimately controlled by a human. The analogy for this app is assembler to C++ or higher; take it with a grain of salt. You still have fine control over your app with C++, but the tedium of assembly is eliminated.

But people are entitled to their opinions, and their hangups as well.

Eskil: Hey, I wish Verse the best; I might even end up writing a plugin to use it. Probably, maybe. Let's let Siggraph come and go; we'll all have a better idea of our respective directions, and of possible collaborations, by then.

IMO: To be honest, 3d application programming is still too niche to be spread as thin as Verse would allow, on disparate platforms with who knows what libs and APIs. I've just seen this movie before, with all the 3d apps on SourceForge that never really made it because there was no concerted effort. I don't want the Blender technology to suffer the same fate. There's interest here, people; a few glowing embers. With a few more we can start a fire that will keep burning.

If verse is a part of that then great.

But for the last year every single 3d app on SourceForge has compiled and worked (I didn't say they were well designed) on my Linux box, while Wings 3D, which rocks as a modeller (written in Erlang, an interpreted language), still doesn't work, due to Erlang and all its dependent wrappers and libs. Clean C and C++ apps tend to compile and link with minimal fuss. Allowing someone to go and use some language unknown to me, and to all but a few others, is an open-door liability for the future. If Verse said, "You have to use C, C++ or Java and that's it," then at least we could deal with the codebase. But Verse doesn't even require contributors to be open source apps; what madness is this?

The way Verse is designed allows an app to use the services of an open source app but pay none of the requirements of sharing its own technology. Are we giving the private sector free technology for nothing in return? How many open doors?

Surely you can see where I'm coming from.

The farthest I would venture would be a plugin to verse.

Zarf
Posts: 46
Joined: Mon Oct 14, 2002 3:54 am

Post by Zarf » Fri May 30, 2003 9:15 am

> And as far as sameness, media carries with it restrictions 2d animations to an extent looks similar, 3d looks the same even well done 3d for now anyway.

Two pieces of 2d animation look similar to the extent that they are both moving pictures, yes. That's about as much as I will allow as far as there being intrinsic similarities. I've seen animation that was done by an artist taking shots of charcoal drawings on paper, then erasing them and drawing over them for each new 'cell'. The world is wide open.
> IMO: To be honest 3d application programming is still too niche to be as spread thin as verse would allow on disparate platforms with who knows what libs and api's I've just seen this movie before with all the 3d

Could you explain what is meant by this?
> But For the last year every single 3d app on sourceforge has compiled and worked (I didn't say they were well designed) on my linux box, wings 3d which rocks as a modeller (written in erlang an interpreted language)

Small nitpick: Erlang runs as byte-code in a VM, like Java. I don't know if that counts as interpreted (not by my definition, but then again I am weird).
> still doesn't work. Due to erlang and all its dependant wrappers and libs.

Another nitpick: I'm not sure what the problem is under Linux, but on win32, which I use to write Erlang code on, I have had no problems whatsoever. I would assume you get the same problems with Java under Linux, though.
> Clean c and c++ apps tend to compile and link with minimal fuss. Allowing

Errr, if you say so...
> someone to go and use some unkown language to me and albeit a few others is an open door liability to the future. If verse said : You have to use C, C++ or Java and thats it, then at least we could deal with the

I have to strongly disagree here. There are many times when C or C++ simply isn't the best tool for the job. Furthermore, you're kind of suggesting that one of Verse's most attractive features be stripped from it. At large production houses like ILM and Pixar, a lot of tools written in different languages are used and glued together in various ways. Why? Because some things are better at some tasks than others. Verse as 'glue' means that less time would have to be spent on silly things like getting apps talking to each other.

There are a multitude of languages out there, many of them much better than C/C++ (and many of them worse). To lock those developers out of working with Verse would be a pretty lunkheaded move, IMO.
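The "glue" argument can be illustrated with a toy wire format. This is NOT the actual Verse protocol, just a sketch of why a shared, language-neutral framing (here, length-prefixed JSON over a socket) makes the implementation language of each tool irrelevant:

```python
import json
import socket
import struct
import threading

def send_msg(sock, obj):
    # Length-prefixed JSON: 4-byte big-endian length, then UTF-8 payload.
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    # Read exactly n bytes or raise if the peer disconnects.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length))

# Both ends are Python threads here, but either end could be a process
# written in any language that can speak this framing.
a, b = socket.socketpair()

def fake_modeller():
    # A stand-in "modeller" node: receive a mesh, nudge one vertex, reply.
    msg = recv_msg(b)
    msg["vertices"][0][0] += 1.0
    send_msg(b, msg)

t = threading.Thread(target=fake_modeller)
t.start()
send_msg(a, {"vertices": [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]})
result = recv_msg(a)
t.join()
print(result)
```

The sender never learns, and never needs to learn, what language the other end is written in; only the wire format is shared.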
> codebase, but verse doesn't even require contributors to be opensource apps, what madness is this?

I'm not sure how this is madness. I assume that forcing others to keep their source code 'open' is a form of freedom for them in a way that I am not familiar with.

So I guess this brand of 'madness' is called reality.
> The way verse is designed allows it to use the services of an opensource app but pay none of the requirements of sharing its own technology. Are we giving the private sector free technology for nothing in return? How many open doors?

This is not a new thing (see the BSD license). The argument about which system is better is very, very old, and each side has merits. The chances of you changing someone else's opinion are rather slim. Furthermore, the presumption that someone doesn't have a good reason for taking the side they have is foolhardy (see 'madness').

I would rather see less politics involved in this. Being LGPL'd (I think?) means that no core modifications can be made to Verse without giving the changes back to the community (at least it would be illegal to do otherwise). All this means is that the open source 'community' has to compete with big business on its own merits and not by cannibalizing technology. If they can't do it, well, that's their problem.

Well, that's it for my yearly post...

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Fri May 30, 2003 10:08 am

A very short post here, but it's mostly a sidenote:

Could someone perhaps explain, just roughly outline, the algorithm that would go into making the future module known as the "director-of-photography-module"?
The input data would be "objects_characters_environment" -
we issue the command "compose_a_great_looking_shot" -
and get as output.... what?
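One hedged answer to the question: the output could simply be camera parameters, chosen by scoring candidate placements against heuristics. The sketch below is entirely invented (the function names, the "ideal distance" rule), and mostly shows how thin such a first pass would be; the hard part is the quality of the heuristics, not the plumbing:

```python
from math import hypot

def frame_score(subject, camera):
    """Crude invented heuristic: prefer cameras near a 'medium shot'
    distance from the subject (2D positions for simplicity)."""
    distance = hypot(subject[0] - camera[0], subject[1] - camera[1])
    ideal = 5.0  # hypothetical ideal subject distance for a medium shot
    return -abs(distance - ideal)

def compose_shot(subject, candidates):
    """Return the candidate camera position with the best score."""
    return max(candidates, key=lambda cam: frame_score(subject, cam))

subject = (0.0, 0.0)
candidates = [(1.0, 0.0), (5.0, 0.0), (20.0, 0.0)]
print(compose_shot(subject, candidates))  # -> (5.0, 0.0)
```

So the output of "compose_a_great_looking_shot" would be a camera position (plus, in a real module, orientation, focal length, and so on); whether its choices ever look "great" depends entirely on the heuristics a human encodes.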

That aside, I agree on the pipeline architecture based on "standalone" nodes. But huge collections of premade stuff, be it objects or lists of cool colors, is not an interesting way to go, IMO.

ideasman
Posts: 0
Joined: Tue Feb 25, 2003 2:37 pm

Post by ideasman » Fri May 30, 2003 10:56 am

That's a great idea. Just program a neural network that can understand simple stuff like 'colours that look good' and 'shots that are used in good movies' etc., throw in a 'level of realisticness' variable, and there you have it.

I think thats where it is headed...

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am
Contact:

Post by thorax » Fri May 30, 2003 11:48 am

deleted
Last edited by thorax on Fri May 30, 2003 12:24 pm, edited 2 times in total.

Money_YaY!
Posts: 442
Joined: Wed Oct 23, 2002 2:47 pm

Post by Money_YaY! » Fri May 30, 2003 2:10 pm

aaaa
Cut it out with the HUGE posts.
I have no time to read them all.
If you are going to post them, at least index them,
or anchor them in HTML on your web site and provide a link.

My vision of blender.

Make it better. Have every last little tool and method there is.
The more toys the better. I just use whatever is new and make stuff
from it. It has to be fun for me.

Implement EVERY single feature request from the long lists that exist.

Then organise it in a unique method. But first and foremost,
start making everything.

Post to every single board out there to get more coders in here.

As I see things, there are trippier OpenGL apps out there;
I wish Blender would do some of that good GL stuff.

Esssiee was doing some of that stuff,
and breve AI is another...


blah blah
^v^

leinad13
Posts: 192
Joined: Wed Oct 16, 2002 5:35 pm

Post by leinad13 » Fri May 30, 2003 6:37 pm

This Blender roadmap is different in everyone's mind, but think of being able to make a movie like in (I think) Microsoft's Movie Maker, where you choose your scene from a list of 5, then your characters from a list of 10, then what they can do from a list of 20. That's ridiculous; it crushes imagination and creativity, and we'd get hundreds of shorts all the same.

Will we be writing Blender 3, or NGB, or whatever it's going to be called, with OpenGL 2 in mind, which is going to revolutionise computing (I hope)? Or are we going to limit ourselves to OpenGL 1?

About getting more coders, I strongly agree, very strongly agree. :D
We need more coders.
-------------
Over to you boffins

L!13

Post Reply