N.G.B or Future 3D technology DISCUSSION (We need one)

General discussion about the development of the open source Blender

Moderators: jesterKing, stiv

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Fri Jun 06, 2003 6:07 am

As far as rendering is concerned, it would be easier if there were subsystems inside the core package; essentially, chapters of the API would be divided up to serve these subsystems. As for pre-rendering, I think you're misunderstanding it: you can't really pre-render something...

Rendering subsystem
{
1.) Realtime render engine: -- /* This includes the default GL 2 based render engine. */

2.) Offline rendering connection module: -- /* This module would handle said
"pre-rendering activities", although it would have its own chapter in the API spec; thus any plugins would have to know how to talk to it and then how to translate the data to the target renderer. */
}

You can take your scene data and generalize it, kind of like a holding area for lights, geometry, surface textures and shaders. You would have to keep it within the Blender spec, but abstract away the information which could cause problems in translation plugins to other renderers.

It becomes essentially in terms of flow:

Software renderer:

Tool output (changes geometry/animation/surface/lighting/scene data) >> kernel (offline render flag is true) >> rendering subsystem is called >> data is passed to the currently configured renderer.

OpenGL 2:

Tool output (changes geometry/animation/surface/lighting/scene data) >> kernel (realtime render flag is true) >> rendering subsystem >> GL 2 engine renders the scene via hardware.
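In rough code terms, the dispatch could look something like this (purely an illustrative sketch; none of these class names exist in Blender, it's only meant to show the shape of the subsystem):

// Illustrative sketch only: these types and names are hypothetical,
// not an existing Blender/N.G.B API.
#include <memory>

struct SceneData { /* geometry, animation, surfaces, lights, ... */ };

// Common interface that both render paths implement.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void render(const SceneData& scene) = 0;
};

// Realtime path: hands the scene to the GL 2 engine for hardware rendering.
class RealtimeGL2Renderer : public Renderer {
public:
    void render(const SceneData& /*scene*/) override { /* draw via GL 2 */ }
};

// Offline path: translates the scene and passes it to whatever external
// renderer plugin (YafRay, RenderMan, ...) is currently configured.
class OfflineConnectionModule : public Renderer {
public:
    void render(const SceneData& /*scene*/) override { /* export + render */ }
};

// The rendering subsystem the kernel calls, with the flag it set.
class RenderSubsystem {
public:
    void render(const SceneData& scene, bool offlineFlag) {
        if (offlineFlag)
            offline_->render(scene);   // "pre-render" / translation path
        else
            realtime_->render(scene);  // GL 2 hardware path
    }
private:
    std::unique_ptr<Renderer> realtime_ = std::make_unique<RealtimeGL2Renderer>();
    std::unique_ptr<Renderer> offline_  = std::make_unique<OfflineConnectionModule>();
};

The kernel only ever talks to the rendering subsystem; which path actually runs is decided by the flag.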

This way the burden of writing the plugin does not rest on the "pre-rendering module"; it rests with the party who wants to output to whatever renderer. That is as it should be.
The realtime rendering engine is the default Blender renderer (in this scenario), and as such it is optimized for Blender's geometry/shaders/animation and scene tools.

It should be the option that will always work no matter what, even if YafRay or RenderMan or Brazil screws something up. The GL 2 render engine should be the rock-solid option.

A rendering subsystem would serve this part well.

But for the game creation potential, the render engine should still be just another plugin.

Blender::Render(Open Source Game Engine Context) would be best served by a stripped-down rendering subsystem designed to take less Blenderized, more bland geometry data from whichever game is piping data to it, and simply render based on the OpenGL 2 spec as far as shaders and materials are concerned; vertex programs would run based on the GL 2 spec as well.

Whereas...

Blender::Render(Connected To Blender Context) would benefit from the more comprehensive rendering subsystem features, which would be turned on and speak the idiosyncrasies of the rest of the Blender application(s): things like shader tricks and tighter integration with the geometry tools.

The package would still be Blender::Render (or whatever name we decide on), but instead of having lots of modules running around, they would be consolidated into a package (which makes life cleaner and easier) which could run in one of two ways: first and foremost as the Blender rendering engine, and secondarily as a pluggable open source game engine.
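Sketched out, the same package running in its two contexts might look something like this (again, the names are invented purely for illustration):

// Hypothetical sketch: one render package, two operating contexts.
// RenderContext and BlenderRender are illustrative names, not real code.
enum class RenderContext {
    BlenderSuite,  // full feature set, tight integration with the tools
    GameEngine     // stripped-down path: plain GL 2 shaders and materials
};

class BlenderRender {
public:
    explicit BlenderRender(RenderContext ctx) : ctx_(ctx) {}

    void configure() {
        if (ctx_ == RenderContext::BlenderSuite) {
            // enable shader tricks, geometry-tool integration, etc.
        } else {
            // accept generic geometry and render strictly per the GL 2 spec
        }
    }

private:
    RenderContext ctx_;
};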

This way all interests are served AND we have a clean system layout.

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen » Fri Jun 06, 2003 8:17 am

Perhaps I'm misunderstanding you, dreamerv3, but it sounds to me like you are suggesting that we combine the non-realtime/high-quality renderer program with a realtime/game-engine renderer. I fail to see the advantage in doing so. The realtime renderer should be integrated into the game-engine, not into the non-realtime renderer.

Also, perhaps it's worth mentioning that my ideas are primarily from the standpoint of someone who doesn't really care much about the game-engine (though I do acknowledge that there are other people that do), and who is much more concerned with creating still-images and animations.
(Personally, I think that the game-creation stuff should be an entirely different suite, because it is a very, very different problem.)

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Fri Jun 06, 2003 8:37 am

xype wrote:What you are describing is what Verse is supposed to be used for. A Verse server holds the data and whatever application "understands" the Verse protocol can work with the data.
I was under the impression that Verse was a networking 'protocol'. I'm just talking about good old dynamically linked libraries or equivalent sitting nicely in my own box. Most other applications I have used are divided into a bunch of subprograms (in my case they are DLL's), but that has absolutely nothing to do with networking.... ;)
cessen wrote:Perhaps I'm misunderstanding you, dreamerv3, but it sounds to me like you are suggesting that we combine the non-realtime/high-quality renderer program with a realtime/game-engine renderer. I fail to see the advantage in doing so.
You and me both.

xype
Posts: 127
Joined: Tue Oct 15, 2002 10:36 pm

Post by xype » Fri Jun 06, 2003 9:00 am

Jamesk wrote:I was under the impression that Verse was a networking 'protocol'. I'm just talking about good old dynamically linked libraries or equivalent sitting nicely in my own box. Most other applications I have used are divided into a bunch of subprograms (in my case they are DLL's), but that has absolutely nothing to do with networking.... ;)

Yah, but with Verse you can run the "subprograms" independently of the main application, whereas with .dll files your DLL or application crashes and takes all your data with it. Verse does not mean that you _have_ to use it over a network or whatever, just that you store your 3D data independently from your application/subapplications. You can have plain C/Glut/OpenGL apps as your Verse clients and they will compile nicely on most platforms, whereas for multi-platform plug-ins you'd need quite a bit more overhead to make cross-platform releases (DLLs don't work on Linux or OS X).

Quake3 Arena is a multiplayer game. Do you have to run it on a network? No. Does it have a Server/Client model? Yes. Can you have fun playing it against bots without a network? Yes. But you can also play it over a network/the internet, and it's just as fun.

I can't get rid of the impression that your view of Verse is just a bit narrower than Verse really deserves.

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Fri Jun 06, 2003 9:36 am

xype wrote:I can't get rid of the impression that your view of Verse is just a bit narrower than Verse really deserves.
Maybe they are... You could perhaps clear something up for me. Let's do the Quake comparison: The client/server model of Q Arena is probably something like a) a game engine (server), and b) gamers connecting to that (clients). The gamers could be real persons on their own computers, or they could be bots inside the same box on which the server runs. (Forgive me if something goes wrong here - I've never played Quake Arena :)

NOW - since the game is net-enabled, I guess all communications between game-engine and clients use the TCP or UDP layers to "talk". Which is fine because then it's really easy to do multiplayer gaming over the internet. If one single person fights some bots, on his own computer, then the communication STILL runs over TCP or UDP, only via the localhost at all times. Is this correct?

And would not Verse do that too? That is, run all communication over network sockets, even if it is one user, one box, data would still be passed from the modeller to the meshmodifier to the renderer and so on using calls over sockets and through the entire TCP/IP-stack?

If this is correct, then it seems a bit inefficient. In a Quake-session this is not a problem because the amount of data that needs to be transported is very small, just positions, inventories and things like that I imagine.

In an NGB session we could be looking at moving 2,000,000+ polygon meshes back and forth between various components. Doing that a lot over a local/loopback network connection should be somewhat sluggish. But I may be totally wrong about that.

If this is not the real nature of Verse, then I have in fact got a 'view too narrow of it' and I have admittedly misunderstood the concept and for which I, if that should be in fact true, apologize in advance.

xype
Posts: 127
Joined: Tue Oct 15, 2002 10:36 pm

Post by xype » Fri Jun 06, 2003 12:03 pm

Jamesk wrote:Maybe they are... You could perhaps clear something up for me. Let's do the Quake comparison: The client/server model of Q Arena is probably something like a) a game engine (server), and b) gamers connecting to that (clients). The gamers could be real persons on their own computers, or they could be bots inside the same box on which the server runs. (Forgive me if something goes wrong here - I've never played Quake Arena :)


Correct.
Jamesk wrote:NOW - since the game is net-enabled, I guess all communications between game-engine and clients use the TCP or UDP layers to "talk". Which is fine because then it's really easy to do multiplayer gaming over the internet. If one single person fights some bots, on his own computer, then the communication STILL runs over TCP or UDP, only via the localhost at all times. Is this correct?

I think that for single player, you do run a server but do not access it via TCP/UDP but _directly_ - since it's the same application, and instead of sending, receiving and interpreting the game data you only interpret it - you have it in memory anyway. Now if you "open up" the server, other clients can "join in". That is probably how NGB with a Verse server could work.
Jamesk wrote:And would not Verse do that too? That is, run all communication over network sockets, even if it is one user, one box, data would still be passed from the modeller to the meshmodifier to the renderer and so on using calls over sockets and through the entire TCP/IP-stack?


Depends on the implementation; you could have it both ways - but I think NGB could "start" a server and so open up a gate to its 3D data. So you can work with your buddy on the same 3D scene, and after a period of time, he "opens up" his scene and you send him an updated version of the model/animation/material/whatever you've been editing. You could also have it updating all the time, if you had the bandwidth. :)
Jamesk wrote:If this is correct, then it seems a bit inefficient. In a Quake-session this is not a problem because the amount of data that needs to be transported is very small, just positions, inventories and things like that I imagine.


Depending on how the game works, you can have only a few positions sent around, or you could have velocities, ammo projectile types, environment changes, etc. changed and re-sent all the time (the fact that Doom 3 will have an environment that can be seriously altered made id decide not to make it too much of a multiplayer game).
Jamesk wrote:In an NGB session we could be looking at moving 2,000,000+ polygon meshes back and forth between various components. Doing that a lot over a local/loopback network connection should be somewhat sluggish. But I may be totally wrong about that.

You could also just send changes across the net, reducing the data loads quite a bit and in the process building up the scene on both sides. Or re-send the whole scene.
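Roughly like this, say (the types here are made up just to illustrate the delta idea, nothing Verse-specific):

// Hypothetical delta-update sketch, not the actual Verse protocol.
#include <cstdint>
#include <vector>

struct VertexDelta {
    std::uint32_t index;  // which vertex changed
    float x, y, z;        // its new position
};

struct SceneUpdate {
    std::uint64_t revision;          // so both sides agree on the state
    std::vector<VertexDelta> moved;  // only the vertices that changed
};

// Applying an update: instead of re-sending a whole multi-million-polygon
// mesh, both sides apply the same small list of changed vertices.
void apply(std::vector<float>& positions /* packed x,y,z triples */,
           const SceneUpdate& update) {
    for (const VertexDelta& d : update.moved) {
        positions[3 * d.index + 0] = d.x;
        positions[3 * d.index + 1] = d.y;
        positions[3 * d.index + 2] = d.z;
    }
}
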
Jamesk wrote:If this is not the real nature of Verse, then I have in fact got a 'view too narrow of it' and I have admittedly misunderstood the concept and for which I, if that should be in fact true, apologize in advance.


Maybe I was a bit harsh - you do understand it (to some point, hehe), it's just that Verse is flexible and there are many ways of making an efficient implementation.

I think it's a really nice concept, since it would allow people to add to NGB regardless of whether their "experiments" fit into the grand NGB scheme. NGB could be a nice animation/modelling suite, but adding a game engine that works with the same data would be far less complicated with Verse than trying to bundle it into one application. Also, the "network traffic" that would happen on a local machine, were you using a few clients, would not be hard for the system to handle (unless it's a few gigabytes of data).

You could also make your own file loaders - a command line app that loads, say, a 3D Studio file and sends it to the NGB Verse Server. You wouldn't need to know how NGB handles data/file loading internally since NGB would translate the Verse input by itself.
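For example, something along these lines (the function names here are made-up stubs, not the real Verse API):

// Hypothetical sketch of a stand-alone loader feeding a Verse server.
// load_3ds(), connect_to_verse() and send_mesh() are placeholder stubs
// standing in for a real .3ds parser and the real Verse client calls.
#include <cstdio>
#include <string>
#include <vector>

struct Mesh {
    std::vector<float> vertices;  // packed x,y,z
    std::vector<int> faces;       // vertex indices
};

Mesh load_3ds(const std::string& /*path*/) { return {}; }           // stub: parse the .3ds file
bool connect_to_verse(const std::string& /*host*/) { return true; } // stub: open a session
void send_mesh(const Mesh& /*mesh*/) {}                             // stub: push geometry nodes

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: 3ds2verse <file.3ds> [host]\n");
        return 1;
    }
    Mesh mesh = load_3ds(argv[1]);
    if (!connect_to_verse(argc > 2 ? argv[2] : "localhost"))
        return 1;
    send_mesh(mesh);  // NGB translates the incoming Verse data on its own side
    return 0;
}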

Although NGB by itself will be one application you could extend it any way you wanted and integrate it with your workflow without changing it and pi*sing off other NGB users by doing so. It would certainly add appeal for companies and people who like NGB but whose ideas for additions are not approved by the NGB board.

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Fri Jun 06, 2003 12:21 pm

Let me quote the parole board of "Arizona Junior":

OK, then!

cessen
Posts: 109
Joined: Tue Oct 15, 2002 11:43 pm

Post by cessen » Fri Jun 06, 2003 4:30 pm

(Bear in mind that all of this is "IMHO".)

I think that verse would be a great way for things like Blender:Renderer and Blender:Modeler/Animator to communicate with each other. However, I think that it would be dangerous to split the Blender:Modeler/Animator program into smaller sub-programs that interact via Verse--doing so could easily result in a K3D style app, which is great for programmers, but horrible for users.

So, I would warn against making the components too small. I think a general rule of thumb would be: if a piece of a program depends on another piece in order to run (or be useful), they should be combined into one program. Because if there are dependencies, it would be rather pointless (not to mention a nightmare for the users) to split them up.

Also, I think that we should decide on a single programming language for everything to be programmed in, simply for the sake of consistency and making things easy for developers (trying to navigate code of interacting modules that are each potentially written in a different language is a pain in the rear). I am thinking that either C or C++ would be the best choice (since they are so widely used, and so powerful), but I am completely open to other suggestions as well.

xype
Posts: 127
Joined: Tue Oct 15, 2002 10:36 pm

Post by xype » Fri Jun 06, 2003 5:08 pm

cessen wrote: So, I would warn against making the components too small. I think a general rule of thumb would be: if a piece of a program depends on another piece in order to run (or be useful), they should be combined into one program. Because if there are dependencies, it would be rather pointless (not to mention a nightmare for the users) to split them up.


Of course. It would all end up in NGB, unless a "component" of the suite would be too complicated/big to merge with NGB (like, say, a game engine). It wouldn't be clever to make stuff like subdivide and delete faces external apps..
cessen wrote: Also, I think that we should decide on a single programming language for everything to be programmed in, simply for the sake of consistency and making things easy for developers (trying to navigate code of interacting modules that are each potentially written in a different language is a pain in the rear). I am thinking that either C or C++ would be the best choice (since they are so widely used, and so powerful), but I am completely open to other suggestions as well.
Well, I would certainly consider Objective-C myself (seeing how nicely it works in Cocoa), and it's also compilable with GCC. What I think is more important than the language itself, though, is a robust Python implementation, since the low-level stuff will require platform/OS-specific code anyway, and the user should be able to have a flexible scripting language to work with without needing to worry about whether he is calling MFC or Cocoa for his drag'n'drop operations.

Here's how I judge the languages that would be options:

C - plain, simple and effective, but wouldn't allow any object-oriented approach, which might be nice for some developers
C++ - OO on top of C, but introduces a lot of advanced stuff like templates that can go terribly wrong if one is not cautious
Objective-C - nice in theory (C with an OO touch), but probably not known outside of the NeXTSTEP world

Although I am an Apple fanboy, a C++ approach with well defined coding guidelines (what can be used and what not) would certainly be a viable solution. You can check out ObjC anyway, though. :D

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Fri Jun 06, 2003 5:56 pm

xype wrote: It wouldn't be clever to make stuff like subdivide and delete faces external apps..
Heh, no... modularity is only a good thing to a certain degree :D

As for language choices... my vote would go to C++. Considering the number of coders working independently from each other, the only way to be relatively safe is to implement a strong OO design from the very beginning.

If language was the only important thing, I'd want to do it in Java. It's the only lang I personally feel fairly comfortable with - but in reality that would be an outrageously stupid thing to do. :D

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Fri Jun 06, 2003 11:00 pm

I apologize for the post's length, but seeing as my "lifetime hosting agreement" seems to be going under due to an inability of my webhost to deliver (www.webhostingfactory.com), I have to find a real-world host whom I can expect to pay regularly and get good service from... At least the domain names are safe...

When I get decent, affordable hosting in 1-2 months, I'll post links...

Back to my reply...

I'm for Verse, but I would motion for the 3D concepts of Verse to be considered for the internal 3D data format of N.G.B.

I mean no disrespect to Eskil and his work, and I quite like the idea, but when multigigabytes of data require a fast, uninterruptible connection between modules, a TCP/UDP internal communication system would not be the best way to handle rendering 50,000,000-polygon scenes....

We're not playing games all the time; N.G.B could well be used for doing high-level animation.

As far as rendering subsystems and confusion thereof:

I'm NOT suggesting we merge the realtime renderer and software renderer(s) together; quite the opposite.

Think of it this way:

You enter an airport, right? You need to go to (whatever airline) and board flight number (X).

Well, airlines are typically grouped in clusters in different parts of an airport; they are not all in one big pavilion. Think of the grouped clusters as the subsystems: for example North American airlines, South American airlines, Western Europe, Eastern Europe, Asia, Africa, etc... Metaphorically, these continents could be subsystems where an airline is a plugin for that subsystem. Each subsystem would have clearly defined rules via an API for the purpose it serves, and a plugin writer would have to read the API function calls required to use the services of the subsystem under which their plugin falls.

The two options for the rendering subsystem can either be realtime (GL 2) or offline rendering (software/slow). Think about it: is there any OTHER kind of rendering???

These would be implemented under the rendering subsystem. The rendering subsystem would accept the data from the kernel and (pre-render) prepare it for output, either to the realtime engine, which would be tightly optimized for N.G.B, or to whatever offline (software) renderer plugin had been installed/set up to accept the output of the rendering subsystem.

This keeps the cohesion of the application intact while still offering multiple output paths, via a less complicated approach for the developer, since all they have to do is read the API spec for the rendering subsystem (a mere chapter compared to a whole book consisting of multiple volumes of API functions).
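To make that concrete, a renderer plugin author would only have to implement something like this (an illustrative sketch; none of these names are real N.G.B code):

// Hypothetical plugin interface for the offline-rendering connection
// module; these names are illustrative, not an existing N.G.B API.
struct GenericScene { /* abstracted lights, geometry, shaders, ... */ };

class OfflineRendererPlugin {
public:
    virtual ~OfflineRendererPlugin() = default;
    // Translate the generalized scene into the target renderer's own
    // input (a RIB stream, a YafRay file, ...) and produce an image.
    virtual bool exportAndRender(const GenericScene& scene,
                                 const char* outputImagePath) = 0;
};

// A plugin author only writes one class against this single chapter of
// the API; the subsystem never needs the target renderer's details.
class ExampleRenderManPlugin : public OfflineRendererPlugin {
public:
    bool exportAndRender(const GenericScene&, const char*) override {
        // write the scene description, invoke the renderer, collect the image
        return true;
    }
};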

Life is easy for the developer of the plugin, life is easy for the user, and the N.G.B teams are free to change the underlying technology with minimal interference to the plugins, though this last point is a matter more of implementation than of system design.

If we don't have a viable realtime engine in the works: I've been following NeoEngine (GPL) very closely, and it seems like a very capable realtime renderer which works, and unlike the Crystal Space rendering system it isn't nearly as bloated. I'll have to read up on the engine API, though. I'll mention it @ SIGGRAPH this year to gauge interest...

Verse could contribute the spec for defining the 3D data inside N.G.B, which would make the data very generalized and easily adaptable to multiple uses.

I'm sure there would be a cool use for the verse networking capability.

Drawing on the Quake 3 Arena analogy...

Me and a great many others thought N.G.B was to be a generalized DCC and authoring system/suite. Q3 Arena is a virtual reality implementation (albeit a kickass bloodfest of an implementation); you don't make movies in Q3 Arena, you don't make stills with millions of polygons in Q3 Arena, you don't run design simulations in Q3 Arena. To get any type of performance out of the Q3 Arena architecture for the aforementioned apps, you would need to run it exclusively as a local loopback system. WHY? Speed, that's why!

Even if you send updates only, that still means HUGE wait times for the initial upload of the 10,000,000-polygon scene; then all the physics and characters and (you name it) would kill your feedback. Thus a social disadvantage is created, where those with DSL/cable/T1 have usable (barely) access to the Verse server sitting on some web host, creating huge data transfer bills for the owner of the account (takers?)

While those of us with 56K modems are left drooling and wishing for more salary to pay for a DSL line. Now, if we only sent changes to small, low-bandwidth scenes, then the Verse system would work; we would have 1,000-polygon characters and blocky worlds...

Welcome to the N64 era, people. Step right up and model low-polygon trees; nothing looks remotely real... I thought we were advancing beyond that...

Eskil: I apologize for sounding extreme, but isn't good design all about anticipating the worst possible scenario and then designing to eliminate such a scenario from ever occurring... The internet is like a big soup of unknowns, and now we decide to send 100 megabytes of blendervision over it to a Verse server... $$$ access charges.
:( :( :( disappointed users who cannot connect/cannot get the 50,000-poly Diablo model fast enough...

I'm shutting up about multiple programming languages. I KNOW that will be a Pandora's box, but out of respect for your work, and in the spirit of not meaning to offend you, I'm keeping quiet...

Now imagine Verse is a network communication protocol implemented in N.G.B as a network plugin: now we can work on our model FAST on our own system, save it locally, and then DECIDE to send it when we can or want to.

I have enough problems accessing my webmail via my DSL line (some days it takes forever to log in), and that's just e-mail!!!

Imagine a virtual world... And I have 760k DSL...

I totally respect your effort, I really do, and I'd shake your hand if ever we meet, but let's apply "Murphy's Law" to this design, shall we?

Murphy's Law, for those who don't know: the emphasis is on being prepared for every possible contingency, because whatever can possibly go wrong will go wrong...

Now design for that kind of situation...

As far as being easy to program for:

I devoted an entire year to painful, agonizing study of C++ (it was quite enjoyable, really; it opened up my mind a lot), THEN I took a programming course in school; now I'm gearing up to tackle OO soon, and then it's on to the math of 3D.

Wouldn't you expect at least this level of dedication in your coders?

I mean, you want serious people working on such projects, people who care enough to read 1000-page books about differential equations and then OpenGL specs and API conventions...

I really don't see the C++ language as an impediment, although you might; but if you're going for an installed base of coders, then there is a vast army of C++ talent out there. Isn't that a good thing?

If Wings were written in C++, we would have our modeller, and a damn good one at that...

I'll shut up about that now, out of restraint. (spare the smart remarks please)

Jamesk
Posts: 239
Joined: Mon Oct 14, 2002 8:15 am
Location: Sweden

Post by Jamesk » Fri Jun 06, 2003 11:47 pm

dreamerv3 wrote:- - -but when multigigabytes of data require a fast, uninterruptible connection between modules, a TCP/UDP internal communication system would not be the best way to handle rendering 50,000,000-polygon scenes....
That pretty much covers what I was thinking too. As far as local systems go (which is, honestly, where 99% of all Blender installs run), there must be a noticeable performance penalty involved, even if it's just local loopback. Most inter-module communication in single-user applications uses more efficient means of data transport.
An optional networking plugin? Sure, go ahead! But it should not be the default method.
dreamerv3 wrote:- - -The two options for the rendering subsystem can either be realtime (GL 2) or offline rendering (software/slow). Think about it: is there any OTHER kind of rendering???
Umm.... not really. Therefore, as long as NGB supports both kinds, everything will be jolly good. However, tagging offline rendering as "slow" is a bit of a provocation - I'd rather call it 100% flexible, independently extensible, not hardware-dependent and future-proof.

xype
Posts: 127
Joined: Tue Oct 15, 2002 10:36 pm

Post by xype » Sat Jun 07, 2003 12:02 am

dreamerv3, I don't know whether you read any of the replies (I guess you didn't), but I think that you are wrong on both of your main points.

1) If NGB uses Verse, it will most likely implement a Verse server, which means NGB itself won't need to send the data it already has to itself via UDP loopback or whatnot. If there are external clients, fine; if there are none, just don't even open a network connection. You don't need a network connection to play Quake 3 in single-player mode, either, which was my point.

2) GPUs are reaching a point where they don't only use predefined functions but are "programmable". They are becoming processors specifically designed to handle 3D math. Software rendering is 3D math done on the CPU. OpenGL 2 will allow offloading the rendering to the GPU, making it a lot faster. For complicated stuff, it may still take a few minutes per frame, but it will bring the same result as software rendering. That's why Maya 5 does GPU rendering already.

Sooner rather than later the two rendering methods will merge, so there is no real point in planning/making two versions for NGB. 3D rendering is the same whichever way you look at it, and if an ATI/nVidia card calculates it faster than the CPU, I'll take the ATI/nVidia card approach any day.

I hope you had the patience to read this far. :)

dreamerv3
Posts: 119
Joined: Wed Oct 16, 2002 10:30 am

Post by dreamerv3 » Sat Jun 07, 2003 6:24 pm

xype:


As far as N.G.B: sure, if the client already has the 50,000,000-polygon scene then it won't need to send it to itself. HOWEVER, the Verse design as described by Eskil (perhaps there are OTHER possibilities in the design which could be more mainstream) implied that multiple clients (each one being a modeller/texture/animation program) connect to a central Verse server or servers (ring) and ADD (upload, show me the bandwidth!), CHANGE (modify, ping rates apply to YOU!) and DOWNLOAD (bandwidth again) scene data.

Now, Q3 Arena ships with standard maps. Are we then to assume that N.G.B will ship with standard scenes? Everybody has the standard scenes, and then people modify them?

What if someone goes off and makes a new scene, or changes the template so much that it now has tons of textures and whatnot? We end up with a huge file (a block of the Verse database, perhaps) that must now be transmitted to the server. OK, fine; now everybody who wants to view it has to download it. Unless the server does the rendering too and sends the images only? (That last sentence was a joke, for those following this seriously.)

It works as a data model and client/server model on a LOCAL machine, I'll give you that, but Verse has all this pro-internet description. For that to work we'll need some breakthroughs in network bandwidth, and by that time the world will have changed due to each user having access to 100Mbps network pipes. In THAT scenario, Verse as an internet-based system of communication can flourish, and then you have to ask yourself if you want virtual reality. Since when did virtual reality become a centerpiece of N.G.B?

Virtual reality is really good for games. Is Verse going to be used more for internet play of games created with Blender? Is that the whole internet-based server concept?

Verse has two modes here, it seems (not really modes in the spec, but practically they become such in the human experience): one mode is the localhost-based system where both server and client reside on ONE machine, the other is the separate mode where the server is off somewhere else.

Are you saying that for N.G.B the design will favor the localhost implementation (you have to design certain things with bandwidth in mind) and will retain the option of having Verse servers out there on the internet as well? And if so, what real-world role will they play, considering their internet-based limitations?

On GL 2:


I totally agree with the GPU thing. However (and I'm not sure about this), do you mean to say we should more or less merge the functionality of offline rendering algorithms and other "calculation heavy" rendering methods onto the GPU, which (again, this is my understanding, perhaps it is not so in real life) is more or less optimized to achieve a software (CPU) rendered look, but with lightweight calculation methods that are perhaps not 100% accurate but very close?

That was my understanding. Now you're telling me we can dump volumetric rendering code verbatim onto a GPU? Wouldn't that slow down the GPU to minutes per frame, which is exactly what we're trying to avoid in the first place?

Most Nvidia cards use textures to store things like normals and other data which, on a traditional CPU-based setup, would all be calculated per frame.

Are you trying to say Nvidia and ATI and 3Dlabs have merely "focused" all the transistors on the new GPUs on handling 3D math only, and that this is the sole reason for 60fps bump-mapped/per-pixel-lit scenes, with no other tricks that speed up the process?

To imply that is to say we could just run the renderer on the hardware. I think there ARE optimization methods which might skimp on 100% accurate calculation, but in return offer realtime performance.

Clarify this please...

thorax
Posts: 320
Joined: Sun Oct 27, 2002 6:45 am

Post by thorax » Sat Jun 07, 2003 10:03 pm

xype wrote:dreamerv3, I don't know whether you read any of the replies (I guess you didn't), but I think that you are wrong on both of your main points.

1) If NGB uses Verse, it will most likely implement a Verse server, which means NGB itself won't need to send the data it already has to itself via UDP loopback or whatnot. If there are external clients, fine; if there are none, just don't even open a network connection. You don't need a network connection to play Quake 3 in single-player mode, either, which was my point.
I was going to suggest this idea, but I think you got it: the idea that Blender could be a peer (client/server) for Verse. One idea I had was being able to save the world state out as a blend file, so that the server could periodically back up users' work to disk in case the server should go down. Also, my ideas about objects (data + methods) in Verse could be done by making a language like Python the standard for object-based methods. Then a standard set of objects with methods could be made, the idea being that when you want to enter a new object type into the Blender-Verse world, you can do so without having to add it to the Verse standard, and over time, if the object is used a lot, it can be made a part of Verse. But by also using objects that carry their methods with them (rather than only being able to call methods on a remote server, which is what Verse does now), you can guarantee 100% compatibility with all object types that support the Python-like methods.

The Python methods would mimic the math used to do the 3D graphics (which is not very complex). Examples of methods on an object might be translate, scale, rotate, moving vertices, adding UV texture coordinates to vertices, and so on. Then, where there is a lack of support for such features on the client side, the client can call the methods on the objects locally and make changes to them, without taking a hit to the server (which costs as much time as it takes for the instructions to propagate across the network and for the data changes to return to the client). This way, optimizations on the update of the server could be done: for example, if the client is doing a lot of point modifications, after every tenth point modification the last ten operations on the object can be sent to the server at once. Otherwise, the time it takes to propagate an RPC on the server will slow down work on the client, because the client will need to obtain the ack of the update from the server before it can update the client's view.
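A rough sketch of that batching idea (hypothetical names, nothing Verse-specific):

// Hypothetical sketch of client-side batching of point edits before
// touching the server; all names here are illustrative only.
#include <cstddef>
#include <cstdint>
#include <vector>

struct PointEdit {
    std::uint32_t vertex;  // which vertex is being moved
    float x, y, z;         // its new position
};

class EditBatcher {
public:
    explicit EditBatcher(std::size_t batchSize = 10) : batchSize_(batchSize) {}

    // Apply the edit locally right away (no round trip, no waiting for the
    // server's ack), and queue it to be sent later.
    void record(const PointEdit& e) {
        applyLocally(e);
        pending_.push_back(e);
        if (pending_.size() >= batchSize_)
            flush();  // every tenth edit, ship the batch in one go
    }

    // Send all queued operations to the server as a single update.
    void flush() {
        if (pending_.empty()) return;
        sendToServer(pending_);
        pending_.clear();
    }

private:
    void applyLocally(const PointEdit&) { /* update the client's local copy */ }
    void sendToServer(const std::vector<PointEdit>&) { /* one network call */ }

    std::size_t batchSize_;
    std::vector<PointEdit> pending_;
};

The batch size could be tuned against the network latency; the point is just that the client never blocks on the server's ack for every single point move.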

Servers that use objects, as opposed to servers that use RPC, will allow for some freedom in configuration. For example, if you want to minimize the hit on the server, you can increase the update cycle and send the client most of the open-sourced methods with the objects. Where an object's routines are not implemented, the RPC method could be used on the object (but that would involve transferring the object back to the server, running the method on the server side and sending the modifications back to the client).

So

NGB->Verse->Objects
NGB<-Objects<-Verse
NGB->Objects
NGB->Objects->Verse

read

NGB calls Verse to perform functions on objects
NGB retrieves objects from Verse
NGB directly operates on objects without Verse
NGB sends objects to Verse

There would be an option to overload methods that exist on the server and put the methods in the object directly; if the method doesn't exist in the object, the method can be performed by the server. It's possible the server could even perform methods on the object that do exist; this would be like a new method search that starts at the object.

In OO design this could be read as:

A Verse object inherits from the Verse server; thus a Verse object can substitute for methods in the Verse server, minimizing the load on the server.

An extreme form is where the objects can perform all the functions of a Verse server and distribute themselves as clients of the server. If you have messed with Java servlets and applets, it's the same idea. In the case of servlets, rather than write routines into the web server that support low-level functions, you allow the CGIs on the server side to inherit functionality from the web server and handle client requests directly, allowing server requests from the client to be handled on other processors (distributing the load). In the case of applets (or client-side methods), the server is saved from the hit taken from clients that need to perform interactive operations (like sensing mouse-overs and checking the quality of information in forms). Except, in the case I'm outlining for Verse, the servlets and applets would be the same object types and the interaction between client and server would be tighter (coupling), so methods called on the applets could be passed off to the servlets or even the server, and the server/servlets could call methods on the client side to offload computations that would normally be performed at the server. And periodically the server could obtain an update of the model that the user is working on (either through batching of methods performed client-side or by sending a copy of all attributes changed on the model).


xype wrote:2) GPUs are reaching a point where they don't only use predefined functions but are "programmable". They are becoming processors specifically designed to handle 3D math. Software rendering is 3D math done on the CPU. OpenGL 2 will allow offloading the rendering to the GPU, making it a lot faster. For complicated stuff, it may still take a few minutes per frame, but it will bring the same result as software rendering. That's why Maya 5 does GPU rendering already.
Well, there have been processors designed to handle 3D math all along; all that is needed is a 4x4 matrix, matrix multiply (for scale, dot products), matrix divide (for perspective projections), and matrix add/subtract (dot products, translates).
Yeah, GPUs are cool. It's what I miss from the SGIs. I didn't know the GeForce cards didn't really have GPUs until someone told me that the CPU does the pre-computations and the GeForces just do polygon rendering and texture mapping. The new GPU fragment programs sound like dynamic microcode reprogramming, where you can add your own features to a GPU by reusing common features of all 3D graphics rendering.

But I'm not sure how well GPUs can be used to do ray-tracing. You could do selective ray-tracing by doing the rendering in phases. It would be like rendering first-level eye rays, then a second pass would compute second-level eye rays, and so on. To compute second-level eye rays you just render each reflection normal to the surface of the reflective object. Then, at the end of all the phases, all the colors at all the levels could be averaged into the color of the pixel.

xype wrote:Sooner rather than later the two rendering methods will merge, so there is no real point in planning/making two versions for NGB. 3D rendering is the same whichever way you look at it, and if an ATI/nVidia card calculates it faster than the CPU, I'll take the ATI/nVidia card approach any day.

I hope you had the patience to read this far. :)
Well, to the users it will have to merge, but more freedoms can be taken when the math is specialized and implemented as hard-wired instructions in the cards. The determining factor is the market for better 3D processors: as long as people continue to play games that require better 3D graphics, there will be better processors, and each vendor will try to find some kind of specialization of the math to leverage the market in their favor.

I've heard ATI tends to try to do the math/routines for 3D more consistently overall, and Nvidia tends to optimize the routines that are used the most (this sounds like CISC versus RISC architecture: CISC processors implement all the routines that software expects to be able to use, and RISC processors are specialized for a statistically more highly used set of instructions; the more highly used the instructions, the more hardwired they become {hard-wired instructions require a single instruction cycle to run}). That means that where one has the advantage of being consistent throughout, the other has to make up for its lack of integrity by implementing routines in the library that compensate for a lack in the processor; but the hardware that is optimal for most uses tends to run faster than the hardware designed to work consistently across all programs. OpenGL is just an interface on the general concept of what is possible with graphics cards, and the implementation is specific to the graphics card, but the OpenGL interface may not allow developers to make use of the features that are specific to the graphics card, so the manufacturers will continue to try to weight things in their favor so they can get more love from the consumers (what I also call leveraging).
