Does the future of HF include Deaf people?


#1

There are many projects and attempts to bring Sign Language into cyberspace. Most have missed the mark because of a fundamental misunderstanding of Sign Language itself.

Most notably, the points missed were:

  • ASL, BSL, or what have you are each their own language, with their own grammar and syntax.
  • Translating sign is still translation and not native support.
  • A library of gestures would be no more useful than a soundboard with every English word on it.
  • Facial expressions and mouth shapes are integral to ASL, not secondary.
  • Sign Language is a 3D language and needs a 3D interface.

HF seems to have so many of the raw ingredients that have been missing for so long that I feel a glint of long-lost hope of seeing this implemented.

With Faceshift and a pair of gloves, you could include Sign as part of HF in a way that has never been matched before. Tracking of every finger, along with palm orientation, would be required to make ASL clear and understandable, and real-time tracking hardware such as physical gloves may be required to achieve that. Yes, I know there are attempts to use the Leap Motion or Kinect to turn sign language into text. Those attempts certainly have a future, but a distant one, and they are, again, translation, not native support for the language.

Method of capturing the data aside, would this be something that HF would be able to include in its interface? Your avatar already moves with you and has your facial expressions; just add finger-level detail to the hands and there'd be a whole new way of communicating in cyberspace.
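
To make the data requirement concrete, here's a minimal sketch of the per-frame information a glove or camera would need to stream so that sign reads clearly: every finger joint, palm position and orientation, and the facial channel alongside. All of the type names are purely illustrative assumptions, not any real HF API.

```typescript
// Hypothetical data shapes, not High Fidelity's actual API.

interface Vec3 { x: number; y: number; z: number; }

// Quaternions avoid gimbal lock as palms rotate through signing space.
interface Quaternion { x: number; y: number; z: number; w: number; }

// Three joint rotations per finger (proximal, middle, distal).
interface FingerPose {
  joints: [Quaternion, Quaternion, Quaternion];
}

interface HandPose {
  palmPosition: Vec3;          // where the hand sits in signing space
  palmOrientation: Quaternion; // palm-in vs. palm-out can change meaning
  thumb: FingerPose;
  index: FingerPose;
  middle: FingerPose;
  ring: FingerPose;
  pinky: FingerPose;
}

// One tracked frame: both hands plus facial blendshape weights like those
// a system such as Faceshift produces, since expression is grammatical in ASL.
interface SignFrame {
  timestampMs: number;
  left: HandPose;
  right: HandPose;
  faceBlendshapes: Record<string, number>; // e.g. { browRaise: 0.8 }
}
```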


#2

I think if the technology exists, why not?


#3

For the sake of full disclosure (and because it's REALLY cool), I'd like to share one of the "visions for the future" the company I work for has.


#4

https://www.youtube.com/watch?v=fmpP73-SHPQ The Babel Fish: a cautionary tale lol


#5

@philip might be able to elucidate a bit more, but the short answer is: yes. The ASL-to-text translation problem sounds like a tricky one, so that may be best left to experts in the field, but we will expose device input to allow for middleware like a translation tool. We haven't yet begun to experiment with gloves, but it is our intention to do so. In other words, Deaf-to-Deaf interaction should be no problem, assuming each person is equipped with Faceshift, a 3D camera, and gloves.
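
To illustrate the middleware idea, here is a hedged sketch: the platform publishes raw device frames, and a third-party sign-recognition tool subscribes to them. Every name here (InputBus, HandFrame, Recognizer, attachTranslator) is a made-up illustration of the pattern, not High Fidelity's real interface.

```typescript
type Handler<T> = (event: T) => void;

interface HandFrame {
  timestampMs: number;
  jointRotations: Float32Array; // flattened joints from gloves or a camera
}

// Platform side: publishes raw device frames to any registered middleware.
class InputBus {
  private handlers: Handler<HandFrame>[] = [];

  subscribe(handler: Handler<HandFrame>): void {
    this.handlers.push(handler);
  }

  publish(frame: HandFrame): void {
    for (const h of this.handlers) h(frame);
  }
}

// Middleware side: a recognizer, written by experts in the field, that turns
// a stream of frames into text without the platform having to understand ASL.
interface Recognizer {
  feed(frame: HandFrame): string | null; // returns a gloss when confident
}

function attachTranslator(bus: InputBus, recognizer: Recognizer): void {
  bus.subscribe((frame) => {
    const gloss = recognizer.feed(frame);
    if (gloss !== null) {
      console.log(`recognized: ${gloss}`);
    }
  });
}
```

The point of the split is that the platform only has to expose device input; the hard translation work lives entirely in the middleware.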


#6

I'm adding this to this forum since it's directly related to the Deaf; however, it has some possible applications for HF in general.

Instead of using gloves to detect the position of the fingers, Google Gesture uses bands on the forearms to detect them through muscle contractions!

I'm not sure if this is "weeks away" or barely in the concept phase, but if it works, it might also benefit HF more broadly: a non-obstructing device could measure exactly how you are holding your hand and then reproduce the finger positions in HF. :smile:
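
As a toy sketch of how such a band might feed an avatar rig (all of this is assumption; real EMG decoding needs per-user calibration and a trained model, and the linear weights here are placeholders), forearm sensor readings could be mapped to per-finger flexion estimates:

```typescript
const FINGERS = ["thumb", "index", "middle", "ring", "pinky"] as const;
type Finger = (typeof FINGERS)[number];

// One reading from N muscle sensors around the forearm, each value 0..1.
type EmgSample = number[];

// Hypothetical calibration: one weight per EMG channel, per finger.
type Calibration = Record<Finger, number[]>;

function estimateFlexion(
  sample: EmgSample,
  cal: Calibration,
): Record<Finger, number> {
  const out = {} as Record<Finger, number>;
  for (const finger of FINGERS) {
    const weights = cal[finger];
    let flexion = 0;
    for (let i = 0; i < sample.length; i++) {
      flexion += weights[i] * sample[i];
    }
    // Clamp to 0 (open) .. 1 (fully curled) for the avatar's hand rig.
    out[finger] = Math.min(1, Math.max(0, flexion));
  }
  return out;
}
```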


#7

Sadly, Google Gesture is only a student concept project at this point. Here’s the original video: http://vimeo.com/98134714


#8

I suspected that, mthome, but thanks for confirming. Perfection of machine translation would kinda conflict with my day job anyways :open_mouth: