We aim to develop an online collaborative framework that would allow deaf individuals to author intelligible signs using a dedicated authoring interface controlling the animation of a 3D avatar. This tool would enable deaf individuals from different linguistic communities to create their own animations in order to illustrate new concepts, invent new signs and populate dictionaries. Eventually, it might be an alternative to video recording, enabling deaf individuals to express themselves anonymously on the Internet in their primary language. Such a tool would also put sign language studies back into the hands of the deaf by allowing them to populate large corpora of animation data -- potentially a very valuable research material for the advancement of Sign Language linguistics.
We chose Blender as our development platform for fast prototyping and evaluation of innovative User Interfaces and Interaction Paradigms. In this paper, we present the results achieved so far, which mainly concern the design of a User Interface assisted by novel input devices. In particular, we show how a Leap Motion and a Kinect-like device can be used together to capture hand trajectories (position and orientation) and facial animation, respectively. We also show how the Blender GUI and API allowed us to develop our software quickly.
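To illustrate one step such a pipeline involves, the sketch below resamples timestamped palm positions, as a Leap Motion-style device might report them, onto animation frames at a fixed frame rate. The function name and data layout are our own illustrative assumptions, not the actual implementation:

```python
from bisect import bisect_left

def resample_trajectory(samples, fps=24):
    """Resample timestamped (t, (x, y, z)) palm samples onto animation
    frames at `fps`, linearly interpolating between device readings.

    `samples` must be sorted by time; returns a list of
    (frame_number, (x, y, z)) keyframes covering the capture interval.
    """
    if not samples:
        return []
    times = [t for t, _ in samples]
    t0, t1 = times[0], times[-1]
    n_frames = int((t1 - t0) * fps) + 1
    keyframes = []
    for f in range(n_frames):
        t = t0 + f / fps
        i = bisect_left(times, t)
        if i == 0:
            pos = samples[0][1]          # before first sample: clamp
        elif i >= len(samples):
            pos = samples[-1][1]         # after last sample: clamp
        else:
            (ta, pa), (tb, pb) = samples[i - 1], samples[i]
            # guard the division, since device timestamps can repeat
            w = (t - ta) / (tb - ta) if tb > ta else 0.0
            pos = tuple(a + w * (b - a) for a, b in zip(pa, pb))
        keyframes.append((f, pos))
    return keyframes
```

Inside Blender, each resulting (frame, position) pair could then be applied to the avatar's hand controller, e.g. by setting its `location` and calling `keyframe_insert(data_path="location", frame=f)` through the `bpy` API.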