I was undecided as to where to post this, so my apologies if you feel it's inappropriate in this location.
I've been thinking about the areas of Blender that could do with substantial improvement, and one that sticks out is the sequence editor. Now I do like the current sequence editor, and it fulfils its purpose, but it has some severe shortcomings:
1. Meta-strips are nice, but quickly become complicated
2. Plug-ins must be written in C
3. The interface is utterly unintuitive, and a little clumsy even when you are used to it
4. I'm not convinced about colour accuracy (sorry, I'm English so it's got a 'u' in it!)
Tackling colour accuracy first: what would be really nice is to be able to access things like the unified renderer's 96-bpp data, bring that in for processing in the sequence editor, and then keep that level of precision throughout compositing, before final output at a lower bit depth. There might be use in bringing Film Gimp's colour processing on board here, although I'm not sure about the licensing issues with that.
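To make the precision point concrete, here's a tiny Python sketch (the function names are purely illustrative, not Blender code) of why keeping three 32-bit float channels per pixel (96 bpp) through the whole pipeline matters: an over-range value survives a round trip in float, but is clipped permanently by an 8-bit intermediate.

```python
# Illustrative sketch (not Blender code): 96 bpp here means three
# 32-bit float channels per pixel, quantised to 8-bit only at output.

def to_8bit(c):
    """Final-output quantisation: clamp to 0..1, then scale to 0..255."""
    return min(255, max(0, int(c * 255 + 0.5)))

def from_8bit(v):
    return v / 255.0

c = 0.75                                   # a bright channel value

# Float pipeline: push the value over 1.0 and back; nothing is lost.
float_result = to_8bit((c * 2.0) * 0.5)    # 191: original brightness kept

# 8-bit intermediate: the over-range value clamps at 255 between steps,
# so halving it afterwards gives the wrong answer.
mid = to_8bit(c * 2.0)                     # clamps to 255
eight_bit_result = to_8bit(from_8bit(mid) * 0.5)   # 128, not 191

print(float_result, eight_bit_result)
```

The same loss happens with any chain of operations that temporarily pushes values out of range, which is exactly why compositing at a lower bit depth throughout is a problem.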
Secondly, I'd like to think about the interface. The way I see it progressing is more like an audio application (for those of us familiar with them, think Audiomulch or Cubase VST/Logic). In this way you could have "tracks" of images as we currently do, but I would add a "composite node tree" for performing the operations in. This tree would be a new window, consisting of blocks such as a track from which an image could be sourced, and maybe even ipos for parameter input. Dials and sliders could also be presented in this tree for plug-in parameter control. I'm envisioning something a bit like the logic bricks for game control crossed with the OOP window here.
You'd also have the image windows as they currently are, although I'd add a few things like a pipette for picking up colours (maybe Z-info as well) and zooming. We'd also need to think about making that display accurate, so providing a means of calibrating the display.
As an example of workflow, you could load a strip onto a track. You could then load a blur plug-in into the node tree window, connect the strip's output to the blur plug-in's input, and connect the blur's output to the image out (which is a permanent output, although being able to have a general 'image out to' node would be handy for saving at different points in the graph/tree). You could then import an ipo block, connect it to the blur radius parameter of the blur plug-in, and easily have a fade between two settings.
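The workflow above can be sketched in a few lines of Python. All the class names and the pull-style `evaluate()` protocol are my own invention for illustration (nothing like this exists in Blender yet); the "images" are just 1-D lists to keep the example short.

```python
class StripSource:
    """Sources frames from a 'track': here just a list of 1-D 'images'."""
    def __init__(self, frames):
        self.frames = frames

    def evaluate(self, frame):
        return self.frames[frame]


class Ipo:
    """A parameter-curve node: linearly interpolates keyframed values."""
    def __init__(self, keys):               # keys: {frame: value}
        self.keys = sorted(keys.items())

    def evaluate(self, frame):
        pts = self.keys
        if frame <= pts[0][0]:
            return pts[0][1]
        if frame >= pts[-1][0]:
            return pts[-1][1]
        for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return v0 + t * (v1 - v0)


class Blur:
    """Box blur whose radius is pulled from another node each frame."""
    def __init__(self, image_in, radius_in):
        self.image_in, self.radius_in = image_in, radius_in

    def evaluate(self, frame):
        img = self.image_in.evaluate(frame)
        r = int(self.radius_in.evaluate(frame))
        if r <= 0:
            return list(img)
        n = len(img)
        return [sum(img[max(0, i - r):i + r + 1]) /
                (min(n, i + r + 1) - max(0, i - r))
                for i in range(n)]


# Wire it up as in the workflow above: strip -> blur -> image out,
# with an ipo fading the blur radius between two settings.
strip = StripSource([[0, 0, 10, 0, 0]] * 3)
radius_ipo = Ipo({0: 0.0, 2: 1.0})
out = Blur(strip, radius_ipo)
print(out.evaluate(0))   # frame 0, radius 0: image passes through unchanged
print(out.evaluate(2))   # frame 2, radius 1: the spike is spread out
```

The point is that rendering a frame is just asking the output node to evaluate itself, and it pulls whatever it needs from upstream, which is why this scales with complexity better than nested meta-strips.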
I will try to mock up what I'm on about, but I believe this interface would be a lot more intuitive and scale with complexity better than the current system.
Lastly, I'd like to think about the plug-ins themselves. I'm really thinking about two things: multiprocessing and languages. I'd like the system to take advantage of as many processors as the OS has access to (maybe even with built-in provision for future cross-network work), and I'd like plug-ins to be writable in any language. It seems a tall order at first, but I think the key would be a "plug-in definition file" stating the interface requirements for the plug-in and the command line required to run it. When the plug-in comes to run, Blender executes that command line appended with the appropriate parameters, and the OS can decide which processor to run it on (Blender could then start another process on another part of the tree while another processor does something else).
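A minimal sketch of how that might work, assuming some invented field names for the definition file (the format, the `{radius}` substitution syntax, and the `launch` helper are all hypothetical): Blender fills parameters into the declared command line and starts the plug-in as a separate OS process, so several can run concurrently and the scheduler spreads them across processors.

```python
# Hypothetical sketch of the "plug-in definition file" idea. The field
# names and substitution syntax are invented for illustration; the
# demo plug-in is just a python one-liner so the example is runnable.
import subprocess
import sys

# What a parsed definition file might contain:
blur_def = {
    "name": "blur",
    "inputs": ["image"],          # interface requirements
    "parameters": ["radius"],
    # the command line to run; {param} slots are filled at render time
    "command": [sys.executable, "-c",
                "import sys; print('blur radius', sys.argv[1])",
                "{radius}"],
}

def launch(plugin_def, **params):
    """Substitute parameters into the command line and start the process.
    Returning the Popen handle lets several nodes run in parallel; the
    OS decides which processor each one lands on."""
    argv = [params.get(a[1:-1], a) if a.startswith("{") else a
            for a in plugin_def["command"]]
    return subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)

# Start two instances; they run concurrently as separate OS processes.
procs = [launch(blur_def, radius="2"), launch(blur_def, radius="5")]
outputs = [p.communicate()[0].strip() for p in procs]
print(outputs)  # ['blur radius 2', 'blur radius 5']
```

Because the contract is just "a command line plus declared parameters", the plug-in itself can be written in absolutely anything the OS can execute.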
Now, given that someone has written a module for the language which handles the image buffer and provides access to Blender's image-processing functions, plug-in development would be made a lot easier. That would be very important, since I see virtually every function of the sequence editor, except for ins/outs and ipo drivers, being done by plug-ins. By plug-ins I mean things like wipes/fades (which would be done with a node with two image inputs and possibly an ipo to drive them, or one of a set of predefined speed graphs) and things like colour correction, alpha keying etc., all of which could advance a lot faster if access to higher-level development tools for plug-ins was provided. (You can imagine a Python-based colour correction plug-in would be just a few lines long.)
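To back up that last claim, here's roughly what such a plug-in could look like, assuming an image-buffer module hands us pixels as (r, g, b) floats (the function and parameter names are made up for the example):

```python
# Hypothetical colour-correction plug-in: with the image buffer handled
# for us, the gain/gamma maths and a per-pixel loop are all there is.
def colour_correct(pixels, gain=1.0, gamma=1.0):
    """pixels: list of (r, g, b) floats in 0..1; returns a corrected copy."""
    return [tuple(min(1.0, (c * gain) ** (1.0 / gamma)) for c in px)
            for px in pixels]

print(colour_correct([(0.25, 0.5, 1.0)], gain=2.0))  # doubled, clamped at 1.0
```

A handful of lines, versus the boilerplate a C plug-in needs before it touches a single pixel.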
Of course there's a speed penalty for not using straight procedural C on a single-processor system, but I think with the increasing use of parallel machines and rising overall speed, the real bottleneck would actually be writing the code (there still aren't that many really useful sequence plug-ins). The ability to have plug-ins in Python, Ruby, C, Java, K or anything else all working on the same project would be fantastic, both for this community and for the wider computer graphics community, where such a thing would be of great use to a lot of people.
The way I conceive it, Blender could also form the basis of a professional-spec compositing system, and I think this would be the way forward for this side of it. I've got a few more ideas in my head (!!!) about how this might work, but I thought I'd post what I've thought of so far just to assess the reaction. My personal feeling is that it could do everything the current system can do, plus an awful lot more, but I'm posting it here so you can try picking it apart before I invest too much effort in prototyping it.
(Regarding the prototype: it will obviously be a heavy piece of object-oriented design, so it will probably take some time, and I'll be doing it separately from Blender itself, with the possibility of bringing it in later.)