Comprehensive Character Animation Proposal

Animation tools, character animation, non linear animation

Moderators: jesterKing, stiv

harkyman
Posts: 278
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Comprehensive Character Animation Proposal

Post by harkyman » Wed Dec 01, 2004 6:33 pm

This has been sent to bf-committers mailing list and posted on elYsiun, but I know some people hang out here and rarely there, so I'll post it here, too.

With work about to begin on reconstructing Blender's character animation, I have been working on an end-user character animation proposal. I've ploughed through a lot of the existing character animation code, and have a decent idea of what goes on under the hood. I believe the proposal is quite realistic, as it mostly makes use of what is already available in Blender at the structural level. This document can be found in .html format at: http://www.harkyman.com/animprop/caproposal.html

Comprehensive Blender Character Animation Proposal

I have been digging through character animation systems for different applications for a couple of weeks in order to help come up with some proposals and suggestions for the rework of Blender's character animation system. One thing that impressed me was the foresight of the original coders. Blender's philosophy is one of datablocks, with the GUI being a way of linking and visualizing those blocks. From my evaluations of other animation packages, it seems to me that the datablocks are almost all there. We just need some help with linking and visualization.

I've done a few drafts of this already going point by point on the problems of the current system and how others solve those problems, but it keeps coming out as a hodge-podge. What I'll do instead is just describe what I think would be a superior character animation workflow to implement in Blender.

1. Rigging

Ideally, Blender should include a couple of pre-made biped rigs (and maybe some quadrupeds, if anyone has them) of varying complexity. So to begin animating, a user goes to Toolbox->Add->Armatures->Rigs->BiPed No Muscles (Ill. 1). Blender adds the armature at the cursor location. The user then enters Edit Mode to adjust the armature to fit their mesh (or better yet, adjusts it parametrically).

Image

2. Animation Workflow

The user can begin to place keyframes for their poses. No Pose Mode. The user also calls up the animation timeline, which is a simple timeline like the audio track trick people use, except that it shows markers on the frames that hold keys for the current object. A keyboard command in the 3D window lets users jump forward and backward through keyed frames for the easy adjustment of existing keys.
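The jump-between-keys command only needs the set of keyed frames for the current object. A minimal sketch in Python (the data layout and names are my own, not Blender's):

```python
# Hypothetical sketch of the jump-to-key command: given the keyed frames for
# the current object, hop to the next or previous one. Illustrative only.

def jump_key(keyed_frames, current, direction):
    """Return the nearest keyed frame after (direction=+1) or before
    (direction=-1) the current frame, or current if there is none."""
    if direction > 0:
        later = [f for f in sorted(keyed_frames) if f > current]
        return later[0] if later else current
    earlier = [f for f in sorted(keyed_frames) if f < current]
    return earlier[-1] if earlier else current

keys = {1, 10, 25, 40}
forward = jump_key(keys, 12, +1)   # next keyed frame after 12
back = jump_key(keys, 12, -1)      # previous keyed frame before 12
```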

If no Action is targeted in the Actions window, a new action is created to hold these keyframes, and this action is added as a new strip to the NLA chain for the armature.

In the current workflow, NLA is an afterthought. From looking at other animation systems, I think that it should be the bedrock. In the current armature evaluation code (do_all_actions, specifically), there are a lot of if-then traps for different states of NLA vs. Action-only animation. There should be no Action-only animation. Animation data is pulled strictly from the NLA. If the user doesn't want to know about NLA, they don't have to: the Action that is created when they begin animating is automatically appended to NLA. They never have to see it, and Blender never has to worry about evaluating non-NLA Actions. There are some other advantages, too, but I'll explain those as they become relevant later.
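To make the idea concrete, here is a toy sketch of what a strictly NLA-driven evaluator could look like for a single channel. Everything here (the strip dictionaries, the mode names) is hypothetical; it only illustrates that replace and add strips can be folded in one pass, with no separate Action-only code path:

```python
# Toy sketch of NLA-only evaluation: every Action lives in a strip, and the
# final value is built by folding the strip list bottom-up. Names and data
# layout are illustrative, not Blender's actual internals.

def evaluate_nla(strips, rest_value=0.0):
    """'replace' blends toward the strip's value by its influence;
    'add' layers the strip's value on top."""
    value = rest_value
    for strip in strips:
        if strip.get("mute"):
            continue
        infl = strip.get("influence", 1.0)
        if strip["mode"] == "replace":
            value = value * (1.0 - infl) + strip["value"] * infl
        elif strip["mode"] == "add":
            value += strip["value"] * infl
    return value

# A base strip fully replaces the rest value; a half-influence
# layer adds on top of it.
strips = [
    {"mode": "replace", "value": 10.0, "influence": 1.0},
    {"mode": "add", "value": 4.0, "influence": 0.5},
]
result = evaluate_nla(strips)
```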

Back to the fact that there is no Pose Mode: how do you move the whole armature object as a single piece? We would require a special bone. Call it the BaseBone or something. Moving that bone translates (or rotates) the armature as a whole. One of the other side effects of not having a specific Pose Mode is that any object can be keyframed into an Action (or, to put it another way, any set of IPOs can be saved as an Action). The only difference between bone IPOs/poses and IPOs for mundane objects is that keys for bones go into Actions by default, whereas keys for other objects do not.

Image

So to the basic user, so far nothing has really changed, except the elimination of two steps: creating your own rig from scratch, and entering/exiting pose mode. Much simpler from the user's standpoint.
But for the power animator, this is where things get good:

3. Using NLA for animation layers

The Character Animation Toolkit (awesome demos - check them out - easy reg. required and worth it - thanks Tom) uses layers for powerful animation, which look a lot like what Blender's NLA could be, with a few modifications.

So, I have my first Action created, just by keyframing the armature, and it's been entered as the first strip in the NLA window. I, the advanced animator, pull up the NLA, and add a new strip. Adding this new strip (note, I'm not appending an already made Action) creates a new Action, which my subsequent keyframes will drop into. As I add my keyframes to this new strip, I see how they blend and interact with those of the first strip, because Blender is only evaluating character animation based on the full NLA, not just on the currently selected Action. Of course, if you want, you can still pull in Actions that were created elsewhere.

Image

Here's where we start to see some power from NLA. Currently, you can adjust only BlendIn and BlendOut for each strip, which is evaluated within do_all_actions. But now, with the new CA system, each strip has its own blending IPO. Bringing up the properties palette for an NLA strip not only gives you the original parameters, but some new ones as well, explained here:

Name: You can name or change the name of the referenced action from right here.

Mode: Replace or Add, as per the current system.

Mute: Prevents NLA system from evaluating this strip. In the illustration, this control appears directly on the strip's name panel, as a slashed circle. In the example, the LegMove strip is Muted, and so has no effect on the final animation solution.

Solo: Shows only this strip's animation, ignoring others. Shown as the star icon in the illustration. In the picture below, of the NLA-baking strip, you can see that the last strip has been set to solo, so only it will be evaluated.

Color: Lets you choose a color for the strip. Your armature appears in whatever is the color (or blended colors) of the current strip. (An all-blue strip with an IPO value of 1 would show a blue armature, whereas the same strip with an IPO value of .5 over a red strip would look purple.) It allows you to see at a glance which strips are affecting your animation.
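The purple example above is just per-strip blending applied to RGB. A toy sketch, with made-up data structures:

```python
# Minimal sketch of the strip-color idea: each strip tints the armature by
# its blending-IPO value, so a blue strip at 0.5 over a red strip reads as
# purple. Purely illustrative; not Blender code.

def blend_strip_colors(strips):
    """Blend strip colors bottom-up, each strip mixing in by its IPO value."""
    color = (0.0, 0.0, 0.0)
    for strip in strips:
        t = strip["ipo"]
        color = tuple(c * (1.0 - t) + s * t
                      for c, s in zip(color, strip["color"]))
    return color

red = (1.0, 0.0, 0.0)
blue = (0.0, 0.0, 1.0)
# Red strip at full influence, blue strip at 0.5 on top -> purple.
armature_color = blend_strip_colors([
    {"color": red, "ipo": 1.0},
    {"color": blue, "ipo": 0.5},
])
```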

MatchMove: A toggle button and a drop down menu, listing all available bones. When it is activated, Blender transforms all keys in this strip so that the initial position of the indicated bone matches the position in time of the indicated bone in the previous strip. This allows you to, for example, keyframe a moving backflip that lands many units away from the BaseBone, then follow it up with a keyframed Action of the character sitting, but MatchMoving them to the characters right foot, so that keyframes of the sitting action are transformed to begin in the ending position of the backflip. This is extremely useful for chaining together and building a library of Actions that do not have to all start and end in the same location relative to the BaseBone.
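In other words, MatchMove computes one offset (the bone's final position in the previous strip minus its initial position in this strip) and applies it to every key in the strip. A rough sketch with invented data structures:

```python
# Sketch of the MatchMove transform: shift every key in a strip so that the
# chosen bone's first position lines up with where that bone ended in the
# previous strip. The data layout here is hypothetical, not Blender's.

def match_move(prev_strip_keys, strip_keys, bone):
    """Translate all of strip_keys so bone's first key matches its last
    key in prev_strip_keys."""
    end = prev_strip_keys[bone][-1]      # bone's final position, previous strip
    start = strip_keys[bone][0]          # bone's initial position, this strip
    offset = tuple(e - s for e, s in zip(end, start))
    return {
        b: [tuple(k + o for k, o in zip(key, offset)) for key in keys]
        for b, keys in strip_keys.items()
    }

# The backflip ends with the right foot at (4, 0, 0); the sitting action was
# keyed with the foot at the origin, so all its keys shift by +4 in x.
backflip = {"foot.R": [(0, 0, 0), (2, 1, 0), (4, 0, 0)]}
sitting = {"foot.R": [(0, 0, 0), (0, 0, 1)]}
moved = match_move(backflip, sitting, "foot.R")
```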

Additionally, it would be cool to allow each bone to have its own strip IPO, meaning that you could use just the influence of the Head's IK solver from a certain strip, ignoring the rest, if you so chose. In that case, the Head's IK solver alone would appear in the color of that strip. In fact, that's what is going on in the illustration above: IPOs for the lower body bones are set to 0, so they do not affect the lower body. You can see this at a glance by comparing the color of the armature bones to the colors of the strips.
This way of using the NLA system would be extremely powerful, and the addition of optional colorization would make it much more accessible to users.

The final tool for inclusion in NLA is a Bake tool. If the user is happy with character animation being created in the NLA, he (or she) can Bake it into a new single strip Action, which is automatically set to IPO 1 across the board, and to Replace mode. Constraints could optionally be baked into straight IPO data, or be left live, at the user's discretion. The user can also decide if he wants to retain the underlying strips (which won't be evaluated anyway and therefore won't cost any time, because of the presence of a top-level strip in replace mode with an IPO value of 1) or remove them from NLA. Once animation is finalized, this can be a big timesaver, eliminating a ton of on-the-fly calculations and replacing them with a single set of IPO transforms.
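Conceptually, baking is sampling the composited NLA result at each frame and storing the samples as flat keys in one replace-mode strip at IPO 1. A toy single-channel sketch (the evaluator here is a stand-in for whatever do_all_actions becomes; all names are invented):

```python
# Sketch of the Bake tool: sample the full NLA result per frame and keep it
# as one flat replace-mode strip, so nothing underneath needs evaluating.

def bake_nla(evaluate, frame_range):
    """Return a single strip holding the evaluated result for each frame."""
    keys = {frame: evaluate(frame) for frame in frame_range}
    return {"mode": "replace", "ipo": 1.0, "keys": keys}

# Toy composite: a base curve plus an additive layer that fades in.
def composite(frame):
    base = frame * 2.0
    layer = min(frame / 10.0, 1.0) * 5.0
    return base + layer

baked = bake_nla(composite, range(0, 3))
```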

Image

4. Keyframing Tools

In the first section, I said that the user keyframes the character animation. Here are some additions and enhancements to the current keyframing tools.

First, Influence IPOs for all constraints should be set to Auto Key as a default.

Second, reducing the Influence of a bone's IK solver constraint to 0 should release the bones from the IK solution and allow FK keyframing to take over. How exactly does this take place? I'm not sure. Alias MotionBuilder lets you set up two full solutions, one IK and one FK, then see them superimposed and blend between the two. Here's a thought...

When a user wants to switch between IK and FK, they don't really want to destroy the current IK solution. What they really want to do is rotate the IK chain from the selected bone's base to the solver, on a local axis defined by the line between the selected bone's base and that same solver. So here's how you do it: you don't need to generate true FK keyframes. You also don't use the IK solver's influence to do it. Each bone in an IK chain has a button next to its name in the Edit Buttons, called FK Move. If that button is clicked, rotations on that bone also move the IK target, as though it were the (non-IK) child of the bone. You can download a simple example .blend file showing three armatures here: http://www.harkyman.com/IKFKDemo.blend The first is freeform IK, the second is IK with the IK target as the child of the shoulder bone, the third is IK with the IK target as the child of the forearm bone. So, when doing an "FK Move" rotation on an IK chain, you're generating rotation keys for the actual bone you have chosen, plus rotation and translation keys for the IK target bone.
So that's how you simulate FK motion while maintaining your IK solution, which is what most people want to do anyway. The FK Move button is nothing more than a keyframing tool, and not a true mode.
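Here is the geometry of FK Move reduced to 2D: rotating the bone writes a rotation key for the bone and carries the IK target with it, as though the target were the bone's child. All names and the returned key layout are illustrative:

```python
import math

# Sketch of the "FK Move" idea in 2D: a rotation on a bone in an IK chain
# also rotates the IK target about that bone's base. Not Blender code.

def rotate_about(point, pivot, angle):
    """Rotate a 2D point about a pivot by angle (radians)."""
    s, c = math.sin(angle), math.cos(angle)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + x * c - y * s, pivot[1] + x * s + y * c)

def fk_move(bone_base, ik_target, angle):
    """FK Move keys: a rotation key for the bone plus a matching
    translation of the IK target."""
    new_target = rotate_about(ik_target, bone_base, angle)
    return {"bone_rot_key": angle, "ik_target_key": new_target}

# Rotating the shoulder 90 degrees carries the IK target with it.
keys = fk_move(bone_base=(0.0, 0.0), ik_target=(2.0, 0.0), angle=math.pi / 2)
```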

Third, the introduction of Driven Keys. Since that name's already taken, let's call them Action Sliders. You create a small action, say, the keyframed opening of a hand. First frame is closed (state 1), the second is opened (state 2). Bring up the NLA screen, select the Action strip you just made (remember, all Actions initially get appended to NLA), perform the appropriate Make Slider key command, and the Action disappears from the NLA.

Image

Where'd it go? Take a look at the Action Sliders window. There is now a Slider there, going from 0.0 to 1.0, set to auto key, that controls the pose of the keyed part of the armature. Keys generated with this tool are applied as IPO keyframes within the current Action, not as live slider data, so there is no confusion in the NLA. If you want to change positioning that is already set, you can use the previously mentioned key commands to bebop to the frame with a key on it and reset it to the value you prefer.

Image

If you think about what I proposed earlier, you will notice that anything that can be keyframed can be added to the NLA, not just object motions. Materials. Light settings. Anything with an IPO. That NLA strip can then be made into an Action Slider which can be used to dynamically set keys throughout an animation. You are also not limited to a two-state toggle - you could have a full battery of linear character animation put into your Action Slider, which would proceed on your character over the 0.0 to 1.0 range. Obviously, RVKs could be used with this system as well.
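The underlying mapping is simple: the slider's 0.0-1.0 value interpolates every keyed channel between the Action's two states, and the result is written back as an ordinary key. A minimal sketch with a hypothetical two-finger hand:

```python
# Sketch of an Action Slider: a 0.0-1.0 control interpolating between the
# two keyed states of a small Action (closed hand at 0.0, open at 1.0).
# Channel names and values are made up for illustration.

def slider_pose(closed, opened, t):
    """Linearly interpolate each channel between the two states."""
    return {ch: closed[ch] * (1.0 - t) + opened[ch] * t for ch in closed}

closed = {"finger1": 0.0, "finger2": 0.0}   # state 1: hand closed
opened = {"finger1": 90.0, "finger2": 80.0} # state 2: hand open

# Moving the slider to 0.25 with auto-key on writes a quarter-open pose
# as plain IPO keyframes in the current Action.
key = slider_pose(closed, opened, 0.25)
```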

Fourth, the RVK creation interface. There isn't one, really. When you make them they show up as lines in an IPO window, then show up again to be named in the NLA screen. Honestly, anything would be better than this.

5. Visualization Tools

First, onion skinning. When the user turns on the Onion Skinning button in the object's animation buttons, they get two new sliders, Proceed and Trail. These set the number of frames forward and backward in the timeline to show ghosts of the current object. This allows you to see your motion's flow at a glance, without actually going forward or backward in time. An immense timesaver.
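The two sliders just define a window of ghost frames around the current frame. A sketch (the scene range and all names are my assumptions):

```python
# Sketch of the Proceed/Trail sliders: list the frames to ghost around the
# current frame, clamped to the scene range. Illustrative only.

def ghost_frames(current, trail, proceed, first=1, last=250):
    """Frames to ghost: 'trail' behind and 'proceed' ahead of current,
    skipping the current frame itself."""
    frames = range(current - trail, current + proceed + 1)
    return [f for f in frames if first <= f <= last and f != current]

# Near the start of the scene, the trail is clamped at frame 1.
ghosts = ghost_frames(current=2, trail=3, proceed=2)
```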

Image

Second, animation paths. Turning on Show Path for either an object or a specific bone draws the entire animation path within the 3D window. Other packages have this because it is extremely useful for character (and normal) animation. There is no reason not to have it.

Image

6. Wish List

Up to this point, Blender's underlying animation structure is mostly in place to do all of this. The building blocks are there, and I've just proposed a new way of looking at, linking, and using some of them. Now, though, I have to address a couple of the drool-worthy things I saw in other packages.

First, footprints. CAT, which I mentioned before, uses these. Go to their website and watch the video. It's about 5 minutes long, and amazing. I believe that CAT uses procedural motion for its walkcycles, as opposed to keyframed, allowing it to alter its walk on the fly. I'm not sure if this is even implementable in Blender's current structure, as Blender doesn't really know the difference between a foot, a hand, and a tailbone. For those of you not inclined to watch the video, here's what happens: with a walkcycle defined, you link your skeleton to an animated empty (keyframed/path-following/etc.). CAT then moves the skeleton to the position of the empty and calculates and displays the locations of everywhere along its animated path that the character's feet will fall. You can grab the footprint locations and move them. If you alter the path or keyframed animation of the empty object, you see the footprints rearrange in real time. In-freaking-credible, and a serious tool for animators.

Second, rotation constraints. From what I've read this is not easy to do, especially with IK solvers. The best implementations I've seen allow the user to apply graphical widgets to joints, specifying what kind of constraint it is: conical, ball, etc. The user then sets the limits graphically.

Third, rag doll physics. This would only come after the physics engine would somehow become directly applicable to the static side of Blender (please!). If you're working on recoding bones and armatures, keep in mind that we greedy animators will want the physics to affect our skeletons as well someday. Create and bake a dynamically generated action from the physics and add it to your NLA timeline!

Conclusion

Those are my thoughts on the future of Blender's character animation tools. As I said before, many of these suggestions are just new ways of visualizing and using the structures that are already present in Blender. The tools and workflow I have described would bring Blender on a par with many of the current commercially available animation packages. Hopefully, these analyses and suggestions will inspire the developers to do as good a job on Blender's character animation tools as they have done on the renderer, mesh modeller and game engine in recent months. Thanks for reading!

Roland Hess
harkyman

You can give me feedback on this by emailing me: me at harkyman dot com, or by replying in this thread.

dgebel
Posts: 70
Joined: Mon Mar 17, 2003 7:20 pm
Location: Ontario, Canada

Post by dgebel » Thu Dec 02, 2004 10:59 pm

Wow, looks impressive. On first glance it looks like there's some really powerful but relatively simple-to-use tools in your design. I think I might actually be able to use those NLA tools if they were interfaced like that :)
The final tool for inclusion in NLA is a Bake tool. If the user is happy with character animation being created in the NLA, he (or she) can Bake it into a new single strip Action, which is automatically set to IPO 1 across the board, and to Replace mode. Constraints could optionally be baked into straight IPO data, or be left live, at the user's discretion. The user can also decide if he wants to retain the underlying strips (which won't be evaluated anyway and therefore won't cost any time, because of the presence of a top-level strip in replace mode with an IPO value of 1) or remove them from NLA. Once animation is finalized, this can be a big timesaver, eliminating a ton of on-the-fly calculations and replacing them with a single set of IPO transforms.
One comment about the "baking" process. If I am understanding it, it sounds basically like a "merge layers" type of command. I know I've seen it elsewhere, so I assume it's an industry term. It's rather jargony, though.

In keeping with KISS (for the user, that is!), what about calling it "Merge Action Strips" to create one new strip and delete the others? If you know what Bake means, you should know what Merge Strips means, but the reverse doesn't hold! Maybe "Group and Lock Action Strips" sounds like it would perform the "top-level strip in replace mode" version you mention. "Keep Constraints/Merge Constraints into IPO" would be further subdivisions on the menu.

And if they can't be unlocked, I don't think there would be any point to having them still there, would there? Dead objects are just confusing and clutter up the screen!

bertram
Posts: 164
Joined: Wed Oct 16, 2002 12:03 am

Post by bertram » Thu Dec 02, 2004 11:11 pm

How about "meta"ing actions like in the Sequence Editor instead of baking them? This would keep things non-destructive and therefore reversible.

ZanQdo
Posts: 207
Joined: Sun Apr 11, 2004 4:57 am

Post by ZanQdo » Fri Dec 03, 2004 12:50 am

Brilliant, did you mean you can have NLA strips for RVKs or material IPOs or whatever? I mean, I could create a library of RVK actions and then place them in the NLA window?

Example:
I have a rigged character, so I create individual actions for the armature as always (walk cycles, runs, etc.), but can I also create RVK actions to make a library of phonemes, facial expressions, muscle bulges, etc. and then place, scale and duplicate them as NLA strips, so I can blend between phonemes just by overlapping RVK action strips the way I blend armature actions? The same goes for everything else (materials, lights, camera settings, etc.).

If that's what you mean, wow, this is going to be great.

Another thing: I don't get your idea about driven keys. Your action sliders are a very good idea, but they aren't driven keys. I'm sure I just misunderstood you, so please correct me if I'm wrong.

Cyberdigitus
Posts: 65
Joined: Tue Oct 29, 2002 3:27 pm
Location: Belgium

Post by Cyberdigitus » Fri Dec 03, 2004 4:13 pm

Sounds interesting, but why limit NLA to character animation? It would be much more powerful if you could have any object/hierarchy animation in clips, along with camera switching, for instance.

anyway, i'm not familiar with Blender's animation system at all yet...

Predefined skeletons or rigs would be nice, but to have these you'd need some very extensive and low-level bone, hierarchy, constraint, IK, etc. systems, and full Python control over those. Such skeletons could then be created through scripts. Better to extend the building blocks so riggers can create great rigs and specialized rigs for the job at hand than to ship some predefined one that may not cover everything; there is no such thing as a generic rig.

harkyman
Posts: 278
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Sat Dec 04, 2004 12:22 am

My proposal states (I thought quite clearly) that NLA would be available for anything with an IPO. That means, anything you can animate in Blender, you can put into an Action Strip in the NLA.

ZanQdo
Posts: 207
Joined: Sun Apr 11, 2004 4:57 am

Post by ZanQdo » Sat Dec 04, 2004 4:25 am

Great :D So much POWER! :shock:

I still don't understand your driven keys proposal, driven keys should be your "action sliders" but triggered by other things, is that what you mean?

Your proposal is perfect :)

cfolchh
Posts: 7
Joined: Mon Apr 28, 2003 10:09 pm

Post by cfolchh » Sat Dec 04, 2004 1:07 pm

:shock: Wow.
You describe a lot of useful tools. In the thread on NLA you spoke about a "pinned bone"; I think this would be a very important feature. I can't find it in your explanation; is there another way to do the same thing?

One thing that I don't understand is why "pose mode" is not useful. You say that:
If no Action is targeted in the Actions window, a new action is created to hold these keyframes, and this action is added as a new strip to the NLA chain for this armature
Actually, you can do this already. If you go into pose mode, you can add keyframes and afterwards convert them to an action (with the C key). Is that the same as what you mean?

This is the most thorough analysis of character animation in Blender that I have ever seen. It is very encouraging; I hope that these changes are feasible :D

harkyman
Posts: 278
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Sat Dec 04, 2004 3:16 pm

I know that you can do the Action strip thing in a way already. Also, the suggestion about no Pose Mode is in the hope that bones will be recoded to be treated internally more like normal objects: if that were the case, you wouldn't need a special mode to handle them, and they would be accessible for animation without an extra step while you were animating other objects. Really, they're both more workflow enhancements than anything else.

joeri
Posts: 2243
Joined: Fri Jan 10, 2003 6:41 pm
Contact:

Post by joeri » Thu Dec 16, 2004 3:53 pm

Good constructive work I think.
Nicely avoiding the skinning problems ;) (I think that's what the posemode is for)

Some (minor) additions:
-I'd change "Rig" to skeleton (collection of bones)
-On included skeletons use state-of-the-art systems:
example: on the NoMusc Biped, use a reverse foot lock.
-For onion skinning: I'd go for past and future colors. A colorband can help visualize the speed.
-Same thing in "show path".
-Although Baking sounds fine to me it might make sense to have a "convert actions to IPO" option.
Last edited by joeri on Thu Dec 16, 2004 6:43 pm, edited 1 time in total.

harkyman
Posts: 278
Joined: Fri Oct 18, 2002 2:47 pm
Location: Pennsylvania, USA
Contact:

Post by harkyman » Thu Dec 16, 2004 6:24 pm

Skinning problems? Blender has no skinning problems! Personally, I've avoided skinning high-poly models for this very reason. I try to keep my vert count as low as I can and use subsurfs. It makes skinning much easier. My attempts at high-poly skinning have been a complete disaster.

Sure. Call it a skeleton.

My main human rig that I used for this sample has a weird foot rig that allows you to do good things with it - not sure if it's a reverse foot lock, but it works. The bones that generate the motion are hidden for visual simplicity, though. This sort of thing would be hammered out after coders decide to include some nice default rigs.

Totally agree about onion skinning enhancements - colorband, plus range of frames, and frequency of skinned image should be included. I was just mentioning the feature.

Sounds good to me. Right click on the object listing in NLA to bake the whole object's NLA data into IPOs, or just right click on a specific Action strip to convert only itself into IPOs, leaving the other strips "live".

Monkeyboi
Posts: 561
Joined: Tue Nov 26, 2002 1:24 pm
Location: Copenhagen, Denmark
Contact:

Post by Monkeyboi » Fri Dec 17, 2004 1:02 am

One of the most missed animation features IMO is the ability to combine morph targets with bone deformations.

Check below to see how powerful this would be.

http://www.projectmessiah.com/x2/messia ... e_main.htm

Note that messiah's bones give you more control in stretching and moving joints, which would also be nice to have.

matt_e
Posts: 898
Joined: Mon Oct 14, 2002 4:32 am
Location: Sydney, Australia
Contact:

Post by matt_e » Fri Dec 17, 2004 2:48 am

And an even cooler thing in Messiah is the ability to not only link bones to blend between morph targets, but also all sorts of other things, such as blending materials and textures, notably displacement maps. Check out what Taron did with blending different Zbrush generated displacement maps in Messiah. Nothing short of amazing:

> http://www.cgtalk.com/showthread.php?t=187770

http://206.145.80.239/zbc/showthread.php?t=022061

http://206.145.80.239/zbc/showthread.php?t=22300

(Note how by shifting around displacement maps, he can give the impression of skin sliding over the character's neck)

joeri
Posts: 2243
Joined: Fri Jan 10, 2003 6:41 pm
Contact:

Post by joeri » Fri Dec 17, 2004 1:26 pm

broken wrote:stuff
Indeed very interesting.
Using driven keys to blend displacement maps. (Thanks for the tip)

Shows how important it is to have "everything animated".
By this I mean that all the values that can be changed by the user should also be changeable by expression, IPO or driven key (the value of one channel is mapped through an IPO onto another channel).
In my opinion it's a blessing if the channel animations are relative to other channel animations.

Monkeyboi wrote:stuff
I didn't think about the implications of vertex weight painting and relative vertex keys.
But ideal for facial animation would be to have a face mesh connected to a skeleton and do the face in relative vertex keys. Now if the relative vertex key sliders ( :) ) could be connected to extra bones (with driven keys), that would help a lot in getting the right facial expressions.

An improvement to relative vertex keys could be:
- In 3d view display the object of selected key (in the same blue?)

- Or have different objects for the relative keys (same polycount) and have a "boss" object that one could "connect" the objects to.
This way it would be easier to alter the individual expressions. And you could name each object after the expression it's supposed to have.

- Sliders that show the amount of relativeness used.
This would be a nice display feature for the IPO window anyway: show all channels as a slider on a given time. Or on a relative scale (0-100).

- Key blending: vertices move in a straight line from one key to the other; to avoid this one can add more keys. But if it's a slider connected to a key, you can only assign the value that key counts for. With blending keys, the slider can "animate" keys before it's used as a relative key.


Just some ideas. I'm alas too stupid to implement them.

Monkeyboi
Posts: 561
Joined: Tue Nov 26, 2002 1:24 pm
Location: Copenhagen, Denmark
Contact:

Post by Monkeyboi » Fri Dec 17, 2004 3:30 pm

Joeri: I did a proposal on improving RVKs quite a while back, which might still be of interest.

http://www.shadeless.dk/ui/morphtargets.htm

broken wrote: And an even cooler thing in Messiah is the ability to not only link bones to blend between morph targets, but also all sorts of other things, such as blending materials and textures, notably displacement maps. Check out what Taron did with blending different Zbrush generated displacement maps in Messiah. Nothing short of amazing:

Right, the ability to link any parameter and add math, just like you demoed in Shake (and which is also possible in Maya using Driven Keys, as Joeri showed during the conference), would make things very flexible in terms of animation. Something tells me it's the kind of thing that would require a massive rewrite, though.
