There are many projects and attempts to bring Sign Language into cyberspace. Most have really missed the mark because of a fundamental misunderstanding of Sign Language itself.
Most notably, the points missed were:
- ASL, BSL, or what have you are each their own language, with their own grammar and syntax.
- Translating sign is still translation and not native support.
- A library of gestures would be no more useful than a soundboard with every English word on it.
- Facial expressions and mouth shapes are integral to ASL, not secondary.
- Sign Language is a 3D language and needs a 3D interface.
HF seems to have so many of the raw ingredients that have been missing for so long that I feel a glint of long-lost hope at the thought of seeing this implemented.
With faceshift and a pair of gloves, you could include Sign as part of HF in a way that has never been matched before. Tracking of every finger and of palm orientation would be needed to make ASL clear and understandable, and real-time feedback, such as physical gloves, may be required to achieve that. Yes, I know there are attempts to use the Leap Motion or Kinect to turn sign language into text. Those attempts certainly have a future, but a distant one, and they are, again, translation, not native support for the language.
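To make the ask concrete, here is a rough sketch of the kind of per-frame data that native sign support would need to capture and stream alongside the existing avatar data. This is purely an illustration in TypeScript; none of these type or field names come from HF's actual API, and the joint counts are simplified.

```typescript
// Hypothetical per-frame capture of the data ASL needs.
// None of these names come from HF's API; this only sketches
// what would have to be tracked and streamed per avatar.

type Quaternion = { x: number; y: number; z: number; w: number };
type Vec3 = { x: number; y: number; z: number };

interface FingerPose {
  // Rotation of each joint from knuckle to tip, so handshapes
  // (fist, flat hand, bent fingers, etc.) come through exactly
  // rather than being snapped to a canned gesture.
  jointRotations: [Quaternion, Quaternion, Quaternion];
}

interface HandPose {
  palmPosition: Vec3;           // where the hand sits relative to the body
  palmOrientation: Quaternion;  // palm facing in/out/up/down changes meaning
  fingers: {
    thumb: FingerPose;
    index: FingerPose;
    middle: FingerPose;
    ring: FingerPose;
    pinky: FingerPose;
  };
}

interface SignFrame {
  timestamp: number;   // ms; signing is as time-sensitive as speech
  leftHand: HandPose;
  rightHand: HandPose;
  // Facial expression and mouth shape are grammatical, not decoration,
  // so they ride along in the same frame (e.g. blendshape weights).
  faceBlendshapes: Record<string, number>;
}
```

The point of the sketch is that it streams raw pose data, not gesture IDs; anything that reduces signing to a lookup table runs straight into the soundboard problem above.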
Method of capturing the data aside, would this be something HF would be able to include in its interface? Your avatar already moves with you and has your facial expressions… just add hands with finger-level detail and there'd be a whole new way of communicating in cyberspace.