interface

The interface, modeling, 3D editing tools, import/export, feature requests, etc.

Moderators: jesterKing, stiv

lukep
Posts: 0
Joined: Sun Apr 04, 2004 1:39 pm

Post by lukep » Fri Aug 24, 2007 7:18 pm

Antlab wrote: - your two images are VERY interesting. In the second, about Maya, you ONLY look at the far right; an impartial observer should ALSO notice, well...
entire rows and one column full of... ICONS, for every sort of function (those designed by Azrael are nicer, though :-)). And what did I write some posts ago? That using icons to access more detailed panels is more rational with respect to the famous enormous matrix of text buttons directly on the screen. So, if you calm down and look with more lucidity at your own examples, you could easily agree with my positions (I think quite similar to those of Sabrina and many others).
It has been said over and over in this thread that for tool-based software, icons make a lot of sense. For an action-based one, they conflict with the workflow.

Guess what?
Maya uses the tool-based paradigm, Blender the action-based one.

We won't change the base paradigm, because it is Blender's main strength.
So even if icons have their place, copying the icon palettes from Maya or C4D as they are is not an option.

Icons to access button palettes are already used in Blender, as context choosers. This use of icons is fine in an action-based workflow.

Adding more sub-contexts is possible, but it is an extra layer of complexity. Some care is needed here not to make things worse.

I'm currently doing an analysis of the global UI, and what I have found so far is *not* a lack of shiny icons, but rather basic conflicts that don't need any graphic enhancement to be solved.

Graphic enhancements are of course still sought because of the "Oh shiny!" factor, but don't think this would solve the usability problems.

Look at the Tuhopuu experiments. Button layout and appearance were far better, but the UI was globally unchanged, and the problems were still there. More pleasant to the eye, but no real gain.

Antlab
Posts: 0
Joined: Sat Aug 11, 2007 12:50 am

Post by Antlab » Fri Aug 24, 2007 8:17 pm

lukep wrote: It has been said over and over in this thread that for tool-based software, icons make a lot of sense. For an action-based one, they conflict with the workflow. [...]
Hi lukep, thanks for the answer. Your explanations are clear, and I think I have mostly understood your basic point about the difference between the tool-based and action-based paradigms. Still, something is unclear to me.
Take the simplest example: the primitive meshes.
Now we have: Add (from the text menu or spacebar) -> Mesh -> various types.
What is the obstacle to simply adding a toolbar with icons for the different meshes, directly accessible with one mouse click? Do you think such a solution would damage the Blender workflow? I ask because, honestly, I still have to fully understand what the insurmountable limits of the action-based paradigm are regarding the use of normal graphical elements.

Thanks

Ciao

Antonino

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Fri Aug 24, 2007 10:03 pm

Antlab wrote: Exactly :-)
Maybe you use that sort of attention called "selective"; let's see:
- for you, the success of Microsoft is ONLY related to DOS and text-based applications (I use your exact words); for the rest of the world, history did not stop in 1990, and the substantial monopoly of Windows (GUI here, attention) is a well-known fact.
You not only seem to be extremely stupid (or else you would have known what I have told you several times, and I am really getting fed up with it), you are also a liar, because I have never written anything that you state. It's not even close to what I wrote! You are actually twisting my words into their opposite without facing the facts I named for you. You didn't get the point at all, and I tried quite politely in the beginning. So get your head out of your rear before making false quotes of things I obviously have not said.
- your two images are VERY interesting. In the second, about Maya, you ONLY look at the far right; an impartial observer should ALSO notice, well...
entire rows and one column full of... ICONS, for every sort of function (those designed by Azrael are nicer, though :-)). And what did I write some posts ago? That using icons to access more detailed panels is more rational with respect to the famous enormous matrix of text buttons directly on the screen. So, if you calm down and look with more lucidity at your own examples, you could easily agree with my positions (I think quite similar to those of Sabrina and many others).
Again: what you call an "enormous matrix of text buttons" is exactly what is on the far right of Maya or any other package I know of - even the layout is similar, though the Blender widgets are quite different in appearance. So if you are comparing things, they should be of the same kind ("My Learjet moves a lot faster than your standard living-room lamp!"). I even showed you that (visually) and you still don't understand. It won't go away even if there were 1,000 icons on the screen! My conclusion now is that you have never used a 3D package before and you don't know what you are talking about. As for the icons, I have already made a proposal based on existing code that Matt Ebb has written. If you were actually able to read, then you would certainly have understood it. Azrael got it, I'm sure, and it seemingly made sense to him. Anything else that you write only shows an extreme lack of knowledge, lots of ignorance and plain stupidity. So please simply shut up and do something useful!

. o O (We need an ignore list on these forums, geez!)

---

Now to something more productive (I really had hopes that this thread could become that - so I won't give up):

I ran a few small tests on efficiency and economy in the use of 3D modellers. It's more for people who like indices and statistics, but perhaps interesting. I, for my part, was surprised by some of the results.

I used the following programs with the following types of usage (all free software - I don't have a Windows installation anymore, so I couldn't install the last Maya version I own, which is also quite old; I used K-3D instead, which uses similar concepts):

K-3D - using only icons + using the shortcuts
Wings3D - no shortcuts, because I haven't set up good ones
Blender - traditional usage + using the manipulator widget

The tests consisted of creating a cube, making a few modifications like extrusion and some basic transformations like translation, scaling and rotation. For simplicity I worked only in face selection mode in all programs, but this doesn't matter because selection works almost identically in all of them. The efficiency was calculated using a simple GOMS model (as described in "The Humane Interface" - thanks, I had a reason to get it off the shelf again). So I don't take ergonomics, user fatigue or other factors into account. The same goes for the time invested in learning, or the fact that the programs have different complexities - though that doesn't matter in this simple test.
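Roughly, the idea behind the numbers below: efficiency = the minimal information a task requires, divided by the information the user actually supplies. A toy sketch in Python (my own simplification - the real GOMS calculation weights things differently, and the action counts here are only illustrative):

    # Toy efficiency measure: minimal user actions a task requires,
    # divided by the actions actually performed.
    def efficiency(minimal_actions, actual_actions):
        return minimal_actions / float(actual_actions)

    # Moving a selected face in Blender: G, move mouse, click to confirm.
    # All three actions carry necessary information, so nothing is wasted.
    print(efficiency(3, 3))  # 1.0 -> 100%

    # With a manipulator widget: point at the widget, pick the right axis
    # handle, drag, release - the extra pointing adds no new information.
    print(efficiency(3, 5))  # 0.6 -> 60%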

The overall efficiency (for the same simple modelling task, of course) came out as follows (rounded):

K-3D using the icons: 30%
K-3D using shortcuts: 47%
Wings3D: 30%
Blender: 55%

So, seemingly not a big difference between Blender and K-3D. But this was a very small test and probably not representative. So I decided to analyse the most-used basic actions - and there I had the first surprise:

K-3D using the icons: 36%
K-3D using shortcuts: 71%
Wings3D: 36%
Blender traditional usage: 100%
Blender using the widget like in K-3D without shortcuts: 36%
Blender using the widget with shortcuts: 60%

This means that Blender needs only the minimal information required to perform simple tasks like translation, scaling, rotation and the like. These actions are the most used, which means the limit for the overall efficiency would be 100%, though it is never reached. The same applies to the other overall efficiencies. This was a big surprise for me, because anything over 60% is good and I had thought 100% would actually be impossible.

So I tested what could affect the overall efficiency. The results for adding a mesh cube (100% would be a single keypress, the minimal amount of user action that accomplishes the task):

K-3D if the tab is visible: 63%
K-3D if the tab is hidden: 32%
Wings3D: 39%
Blender using the spacebar menu: 16%
Blender using SHIFT-A: 24%

This was the second surprise regarding Blender, though it could be raised to 32% (25%) if the NUMKEYs affected menu item selection. Using the menubar would be even worse for the user. So it would make sense to have an object palette for people who are constantly adding new objects to the scene. But I guess this doesn't happen often enough to justify giving up screen space. Still, it's clear information.
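To illustrate with the same toy count as above (the percentages come from the full calculation; these action counts are only my rough reconstruction):

    # Adding a cube; the minimum is one dedicated keypress.
    def efficiency(minimal_actions, actual_actions):
        return minimal_actions / float(actual_actions)

    # SHIFT-A, point at "Mesh", click, point at "Cube", click:
    print(efficiency(1, 4))  # 0.25 -> in the region of the 24% above

    # If the NUMKEYs selected menu items (SHIFT-A, digit, digit):
    print(efficiency(1, 3))  # 0.33 -> close to the 32% mentioned above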

Then I took a look at mouse movement, because it is a factor in efficiency that I had not taken into account. I used Kodo to record the path the mouse pointer took. This is not really accurate, but should be enough to show the tendencies. I calculated a simple index showing the relation between the best value and the other values (1 is the most economic, 0 the worst):

K-3D only using icons: 0.42
K-3D using shortcuts: 0.75
Wings3D: 1.0
Blender traditional usage: 1.0

So Wings3D and Blender need almost exactly the same amount of mouse movement, and K-3D needs more. This means that Wings3D and Blender are more economic than K-3D. Wings3D and Blender work similarly while performing transformations: there is no need to move the cursor to a widget, point exactly at an axis (or the midpoint) of the widget and drag it around. This leads to less mouse movement. Using Blender the same way K-3D is used (i.e. using the widget) would decrease the economy factor to the same level. In addition, with Wings3D and Blender there is no need to hold the mouse button down, which would certainly fatigue the user after a few hours, resulting in less accuracy and less speed - or even a strained and injured arm.
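The index itself is trivial: the shortest recorded path counts as 1 and everything else is scaled against it. A sketch with made-up path lengths (Kodo gives the real ones):

    # Economy index: shortest measured mouse path divided by each path.
    paths = {"K-3D icons": 2400.0, "K-3D shortcuts": 1350.0,
             "Wings3D": 1010.0, "Blender": 1010.0}  # pixels, made up
    best = min(paths.values())
    for name, length in sorted(paths.items()):
        print(name, round(best / length, 2))  # 1.0 = most economic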

That was it. I will certainly investigate further when I've got the time.

What does this mean for a future interface and for migrating from another package to Blender? The Blender interface is extremely efficient in the way it works. This is the biggest point in favour of a migration. It was in-house software, built with the highest possible productivity in mind. It is solid, reliable and mature - though not easy to learn.

The biggest obstacle in a software migration is usually the need to (re-)learn the new program (relearn in the sense that people who master some software have the notion that they already know everything - which is natural, but often not true). So one goal would be to minimize the need to relearn the already known. This can be achieved by letting users adapt the program to their habits. A first step has already been made with the transformation widget. An absolute must would be changeable keyboard layouts. These could be used either to mimic another interface, with a loss in efficiency, or simply to assign shortcuts to the traditional way Blender works and get 100% efficiency on the most-used basic operations. This is on the way with the event system refactor.

A second thing could be the possibility (as in Matt's proposal) to create tool panels inside a view, and to ship some setups for several tasks/views along with Blender, including for example basic modelling operations like deletion, extrusion, mirroring, knife tool/loop cut and the like - plus a panel with some basic objects, since the analysis has shown that having icons for those would be more efficient, and users of other packages might need them. For the latter I would stick to mesh objects, because anything else isn't used that often; if someone needs more, the possibility to add it still remains. This also avoids clutter in the viewport. The advantage of doing it this way instead of creating a new window type is clear: when maximizing a 3D window, the panels are still there and can be used.

These measures would greatly reduce the need for basic instruction, since they reduce the need to break with habits (ways of doing things, or screen layouts, for example), making a migration a lot easier. In addition, they wouldn't break the common Blender workflow; they would keep the efficiency and the economic mouse usage, and could even bring a better way of working into the existing workflows of people who haven't used Blender before.

That leaves the question of existing data and integration with other tools, which also counts for a lot. But that isn't an interface issue.

lukep
Posts: 0
Joined: Sun Apr 04, 2004 1:39 pm

Post by lukep » Sat Aug 25, 2007 3:44 pm

Antlab wrote: Take the simplest example: the primitive meshes.
Now we have: Add (from the text menu or spacebar) -> Mesh -> various types.
What is the obstacle to simply adding a toolbar with icons for the different meshes, directly accessible with one mouse click? Do you think such a solution would damage the Blender workflow? I ask because, honestly, I still have to fully understand what the insurmountable limits of the action-based paradigm are regarding the use of normal graphical elements.
ADD is neither an action nor a tool, as it does not rely on a current state (a selection). It is a command.

Commands can be implemented as icon palettes, but think a bit about this:

- There are 35 primitives in Blender. Even at the minimal icon size of 16x16, that is 8960 pixels of screen real estate for something you use roughly 1% of the time. At the more normal size of 32x32 pixels, it is the full width of most screens (see the quick arithmetic check further down this post).
- Of course we can cheat a bit and add a context: one icon for each base type, and a second row that varies according to the first selection. This time we need 10 icons on the first row and 10 on the second. It slows down the interface for a small gain in real estate. Grouping the singletons of the first row brings it down to 6, but that last group is not logical. If the second row is an area shared between all commands, it may even be bearable. One example of such an interface is found in SolidWorks.
- But the difficulty is not there. Can you design 35 icons (+6 or 10 if you go the two-step way) that are both meaningful and identifiable at a glance? As Susan Kare puts it:

"I would say an icon is successful if you could tell someone what it is once and they don't forget it"

For my part, I know I cannot design 35 icons like that. Maybe a genius designer can, but I want to see the result before even thinking of implementing this.

Commands are normally part of menus.
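For completeness, the real-estate arithmetic from the first point above, spelled out (a quick sketch, assuming square icons):

    # 35 primitive icons at the minimal usable size of 16x16:
    print(35 * 16 * 16)  # 8960 pixels of area
    # At the more normal 32x32, a single row would be:
    print(35 * 32)       # 1120 px wide - wider than a 1024x768 screen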
kAinStein wrote: The efficiency was calculated using a simple GOMS model (as described in "The Humane Interface" - thanks, I had a reason to get it off the shelf again). So I don't take ergonomics, user fatigue or other factors into account. The same goes for the time invested in learning, or the fact that the programs have different complexities - though that doesn't matter in this simple test.
kAinStein, you are cheating :twisted:
You can't not know that a GOMS model will favor an action-based tool when compared to a tool-based one! That is even why Raskin designed the Macintosh UI this way. Your comparisons are of course right, but they don't show the whole picture.

Sheer efficiency measurements should not be used without ergonomics in mind.

A tool-based interface is the most ergonomic if and only if there is a small number of modal tools that can be reused over and over.

For example, in sculpt mode we are in a tool-based sub-context, and it is inarguable that an in-view palette of icons would be the more ergonomic interface. Doing it right is, however, hard.

For the other modes, which are action-based, using icon palettes does not make sense, I think, except for the few pure commands that don't rely on pre-settings.

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Sat Aug 25, 2007 3:57 pm

lukep wrote: kAinStein, you are cheating :twisted:
You can't not know that a GOMS model will favor an action-based tool when compared to a tool-based one!
That's true. That's why I mentioned it. Depending on the model, the factors can be different. Additionally, it's not weighted.
That is even why Raskin designed the Macintosh UI this way. Your comparisons are of course right, but they don't show the whole picture.
It wasn't my intention to show the whole picture, and I think I mentioned that.
Sheer efficiency measurements should not be used without ergonomics in mind.
Right. That's why I at least compared the economy factor regarding mouse movement. It would take much more effort to analyse everything. I will come back to it.
A tool-based interface is the most ergonomic if and only if there is a small number of modal tools that can be reused over and over.

For example, in sculpt mode we are in a tool-based sub-context, and it is inarguable that an in-view palette of icons would be the more ergonomic interface. Doing it right is, however, hard.

For the other modes, which are action-based, using icon palettes does not make sense, I think, except for the few pure commands that don't rely on pre-settings.
Not necessarily.

GDP_Sabrina
Posts: 0
Joined: Sat Aug 04, 2007 9:50 pm

Post by GDP_Sabrina » Sat Aug 25, 2007 8:17 pm

Sorry, folks, for being away for some days and leaving that post uncommented.

Lately I have studied some (video) tutorials and played around with Blender for a few nights.

Here is my summary of the "successes":
a) I created a "hair example" by following the excellent video tutorial step by step in Blender
b) I worked through some mesh modelling tutorials using a gallery of keystrokes I picked up from the tutorials
c) I created materials, lamps and objects, positioned them, created some subsurfaces and played around with these objects, installed and used YafRay, and played around with some rendering options and the sky options

My problem with Blender is still that there are so many keystrokes, buttons and panels which are not very self-explanatory and which don't reveal their meaning or the workflow between them. For example, in Edit mode ("F9 Editing") there is a button "Smooth" in the Mesh Tools panel and a button "Set Smooth" in the Link and Materials panel. It is absolutely not self-explanatory what these buttons mean or when they should be used differently. Following the dice tutorial, I managed to create a smoothing effect, but IMHO many of these wonderful functions which are used only occasionally are not intuitively usable. Giving these buttons such similar names makes it even harder, IMHO.

I think Blender has a lot of wonderful features, but my summary of my experience with the interface is that Blender has no goal- or workflow-oriented interface, but rather a technically oriented collection of buttons and combo boxes, which makes it hard for occasional users to find out what to click. Pros will surely be able to reach everything in milliseconds, but as long as you don't know the (hidden) workflow of the buttons, you are stuck.

Please don't take my post as offensive!
As an occasional user, I find it rather hard to achieve results - especially without those tutorials at hand.

Regards
Sabrina

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Mon Aug 27, 2007 5:38 pm

GDP_Sabrina wrote: Sorry, folks, for being away for some days and leaving that post uncommented.
Same here.
Lately I have studied some (video) tutorials and played around with Blender for a few nights.

Here is my summary of the "successes":
a) I created a "hair example" by following the excellent video tutorial step by step in Blender
b) I worked through some mesh modelling tutorials using a gallery of keystrokes I picked up from the tutorials
c) I created materials, lamps and objects, positioned them, created some subsurfaces and played around with these objects, installed and used YafRay, and played around with some rendering options and the sky options
That's already a lot for such a short period of time - honestly. I told you you'd get it quite fast.
My problem with Blender is still that there are so many keystrokes, buttons and panels which are not very self-explanatory and which don't reveal their meaning or the workflow between them.
Most of the keystrokes are self-explanatory: "R" rotate, "S" scale, "G" grab (a suboptimal name for it), "E" extrude, "A" select all/deselect all, "SHIFT-A" add, "U" undo, "M" mirror, "B" block select, "D" draw mode, "L" select linked, "P" play (game engine), "I" insert keyframe, "F" (in edit mode) create a face, "H" hide (the ALT modifier inverts the effect quite consistently throughout the program - unhide in this case), "K" knife, "SHIFT-D" duplicate, "SHIFT-S" snapping, "N" numerical input, "T" texture space (I don't like this one...), "C" center view on cursor, "V" vertex paint, "1" to "0" for layers 1 to 10, "ESC", etc. They are not that hard to learn if you know the names. Using them becomes automatic and extremely accurate at some point. I think it was lukep who told you that the gallery results need practice, and that is true.

But there are also keystrokes that can't explain themselves because the obvious key is already in use. The chosen key is often not that far away: "W" is the specials menu, because "S" is already taken. Once learned, they are used just like the others, though learning them can be troublesome.
For example, in Edit mode ("F9 Editing") there is a button "Smooth" in the Mesh Tools panel and a button "Set Smooth" in the Link and Materials panel. It is absolutely not self-explanatory what these buttons mean or when they should be used differently.
OK, let's see: "Smooth" smooths your selection by averaging edge angles - that means it changes your mesh. It's the opposite of "Noise", in a way. Imagine you have a plane subdivided multiple times, like a terrain mesh for example, and that mesh has a lot of spikes you want to get rid of. "Smooth" averages the angles of each edge so you get a smoother mesh. "Smooth" is not used that often.

"set smooth" sets the shading of your selection to smooth (soft) shading. It does not change your geometry - only the lighting. So you are right: "smooth shading" or "soft shading" would probably be a better name than "set smooth" as it actually doesn't smooth your mesh. The opposite would be "solid shading" or "flat shading". It's surely not a lucky name, I agree. On the other hand these buttons are on the link & materials panel while the others are on the mesh tools panel - and they do what they are supposed to do: They set the shading to smooth or to solid.
Following the dice tutorial, I managed to create a smoothing effect, but IMHO many of these wonderful functions which are used only occasionally are not intuitively usable. Giving these buttons such similar names makes it even harder, IMHO.
That is true, and it has caused some discussion about naming conventions. Most of the users, and certainly also the developers, are aware of it, I think. There are some names that would be better changed. But you should also be aware that someone whose job is to create 3D art is supposed to know basic terms and techniques. This includes shading techniques.

But to make you feel more comfortable: I'm also quite unhappy with how the editing window has turned out. It has become somewhat too crowded.
I think Blender has a lot of wonderful features, but my summary of my experience with the interface is that Blender has no goal- or workflow-oriented interface, but rather a technically oriented collection of buttons and combo boxes, which makes it hard for occasional users to find out what to click. Pros will surely be able to reach everything in milliseconds, but as long as you don't know the (hidden) workflow of the buttons, you are stuck.
That's not really true, though it might seem so to you. But I agree that it's hard to get through it all if you have different habits.
Please don't take my post as offensive!
As an occasional user, I find it rather hard to achieve results - especially without those tutorials at hand.
Your post wasn't in any way offensive. None of your posts have been offensive in any way!

GDP_Sabrina
Posts: 0
Joined: Sat Aug 04, 2007 9:50 pm

Post by GDP_Sabrina » Mon Aug 27, 2007 10:50 pm

Hello kAinStein,

I just watched the three amazing tutorials at Blenderunderground.com:
http://blenderunderground.com/2007/08/1 ... 3-is-live/

Woah !!!

Now I am starting to understand what you guys are always talking about. This interface is highly adaptable - that's amazing (though I still miss icons attached to menu items, since I just don't like hovering over all those tooltips to understand what the buttons are supposed to do).

Two questions:
a) Would it be possible (with reasonable effort) to add icons to the menu items - maybe customizing my menus using my own icon themes, just like setting up my Blender interface using colour themes?
b) Another question is whether I could create my own panels by adding menu items (with my icons), selection boxes, buttons etc. and rearrange them - maybe my panels could derive from the standard Blender panels, so that in the 3D view I would place only 3D-relevant buttons!?


Regards
Sabrina

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Wed Aug 29, 2007 12:46 am

GDP_Sabrina wrote: Hello kAinStein,

I just watched the three amazing tutorials at Blenderunderground.com:
http://blenderunderground.com/2007/08/1 ... 3-is-live/

Woah !!!

Now I am starting to understand what you guys are always talking about. This interface is highly adaptable - that's amazing (though I still miss icons attached to menu items, since I just don't like hovering over all those tooltips to understand what the buttons are supposed to do).
I understand that.
Two questions:
a) Would it be possible (with reasonable effort) to add icons to the menu items - maybe customizing my menus using my own icon themes, just like setting up my Blender interface using colour themes?
Well, yes and no:
Yes, you can have custom icon themes through your own icon map, and menu items can have icons assigned (some already have icons, if you look through the menus). So the basis for it is there to some degree. But:
No, at the moment not many icons are assigned, which is a hardcoded thing, and the icon map is quite crowded. So first a better and more dynamic approach must be made, I would guess. It's certainly desirable, though (this would lead to what I proposed - and to icon-only tool palettes that way).
b) Another question is whether I could create my own panels by adding menu items (with my icons), selection boxes, buttons etc. and rearrange them - maybe my panels could derive from the standard Blender panels, so that in the 3D view I would place only 3D-relevant buttons!?
Not at the moment. But let's see what the UI and event system rewrite brings; I don't know what is planned. What you ask for is exactly something I would also like to have, and it would be consistent with regard to UI customization.

GDP_Sabrina
Posts: 0
Joined: Sat Aug 04, 2007 9:50 pm

Post by GDP_Sabrina » Wed Aug 29, 2007 1:12 am

Hello kAinStein,

thank you very much for this information!

Do you expect this UI rewrite to be done for the release of version 2.50, or sometime later?

Is there a group working on this, so that I could ask them about progress?

Regards
Sabrina

kAinStein
Posts: 63
Joined: Wed Oct 16, 2002 3:08 pm

Post by kAinStein » Wed Aug 29, 2007 11:44 am

GDP_Sabrina wrote: Hello kAinStein,

thank you very much for this information!

Do you expect this UI rewrite to be done for the release of version 2.50, or sometime later?
From what I have read so far, this is planned for 2.50:
http://www.blender.org/development/curr ... since-244/
But I think it will take some time. 2.45 is already running late, the Blender Conference is near, Peach is on its way too, and there will probably be some major refactors. Hope for it this year, but don't actually expect it.
Is there a group working on this, so that I could ask them about progress?
I think so. As LetterRip and, I think, joeri have mentioned: it's a priority for Ton, and he is or will be on it (as soon as he can).

You could ask on the developers' IRC channel:
Server: irc.freenode.net
Channel: #blendercoders

_nnx
Posts: 0
Joined: Fri May 01, 2009 7:47 am

Wait.. what?

Post by _nnx » Fri May 01, 2009 8:03 am

Halt, halt, halt.
It seems this discussion has degenerated into a meaningless flame war, and it doesn't have to be this way. Let's take a look at the scenario.

We have a current group that very much likes and prefers Blender's shortcut-key-driven interface and workflow. The minimalistic approach to visuals provides an uncluttered view and keeps the focus on optimizing the user's workflow.

We have a large group of supporters for a more visual interface, who are more used to visually based interfaces and mouse-driven interaction with direct feedback. The traditional approach is widespread, and many users feel more comfortable and are more immediately productive with it.

And here is where we are forgetting one major thing:
Why not have both? Should it not be entirely possible to assemble a group of users who would like a visual interface and add this as an alternate interface choice? Even a plug-in approach should be feasible, with an interface that simply calls the existing key commands in macro fashion (or directly calls the underlying routines) - see the sketch below.
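To make the plug-in idea concrete, here is a purely hypothetical sketch (none of these names are a real Blender API - it only shows the macro-style dispatch I mean):

    # Hypothetical visual front-end: buttons that replay the existing
    # key-driven commands instead of reimplementing anything.
    def send_keys(*keys):
        # Stand-in for whatever mechanism would feed events to Blender.
        print("replaying: " + " -> ".join(keys))

    toolbar = {
        "Grab":     lambda: send_keys("G"),
        "Extrude":  lambda: send_keys("E"),
        "Add Cube": lambda: send_keys("SHIFT+A", "Mesh", "Cube"),
    }

    def on_button_click(label):
        toolbar[label]()  # one click = one existing command sequence

    on_button_click("Add Cube")  # -> replaying: SHIFT+A -> Mesh -> Cube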

In open-source development, variety is the spice of life. A clear focus is of course imperative, but if there is large enough demand for a feature and a dedicated workforce willing to implement it, why stand in their way? This simply leads to more options for the end user and a broader user experience.
It also improves accessibility for those who have trouble with keyboard shortcuts but are fine with using a mouse.

I suggest a gathering of all those interested in implementing the visual interface, for a discussion of implementation options and the creation of a roadmap. Even if it has been decided that it will never be part of the core program, I'm sure it may well become one of the most popular plug-ins or offshoots ever created for Blender.

Limiting user choice seems a bit pointless, and trying to talk an engaged and devoted developer out of something he or she really wishes to develop seems counter-productive for a community project; after all, it doesn't _have_ to be in the core implementation or enabled by default.

(How many Mac users turn on right-click as the first thing they do when they get their new OS X install set up? It's obviously a deliberate design decision not to enable it, for simplicity of mouse use and maximization of keyboard-modifier awareness, but it speaks for itself that thousands of users greatly appreciate the option of being able to turn it on.)

I for one would be more than willing to contribute. So why don't we formally organize development on this? I would love to hear constructive input as well as the voices of interested parties. I'd be willing to set up initial hosting for an alternate-interface development project for Blender. I believe that just as Blender was able to innovate with its shortcut-key-based interface, it has great opportunities in trying to create a dynamic visual/mouse-based interface with an alternate but equally optimized workflow.

~_nnx

-=KEY=- point in favour of an extended visual interface here would be accessibility. A disability that leaves mouse movement fine but makes keyboard usage or button combinations difficult should not act as a barrier against using Blender rather than any other creative 3D application. In its current state this presents a HUGE barrier for that group.

Nikprodanov
Posts: 0
Joined: Fri Feb 13, 2009 8:54 pm

Post by Nikprodanov » Fri May 01, 2009 12:53 pm

_nnx, you are posting too late - this discussion ended almost two years ago! The problems with the interface will disappear in the next version (2.50), which will probably be released in October this year. Some of the features are:

- Custom hotkeys
- Custom toolbars and panels
- Every parameter can be animated

You can read more about it Here and Here.

Post Reply