Animating your avatar's face with Faceshift & Asus Xtion Pro Live


#1

Notes on Faceshift with the Asus Xtion Pro Live (Macintosh)

Note: this is a first draft and may be updated very shortly to address any minor errors

Faceshift allows a 3D sensor to be used to animate your avatar’s face in a surprisingly natural way. It’s a motion capture system that analyzes your facial movements and stores them as a mixture of basic expressions, head orientation, and eye targeting. These can then be used to animate the facial expressions of your avatar in High Fidelity.
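
To make that concrete, here is a minimal sketch (in Python, my own illustration rather than Faceshift's actual data model) of the kind of per-frame record such a tracker produces; the field names and angle conventions are assumptions:

```python
# Hypothetical per-frame tracking record: blendshape weights, head orientation
# and eye gaze. Names and conventions are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class TrackingFrame:
    # blendshape name -> weight in [0, 1], e.g. {"jawOpen": 0.7}
    blendshapes: Dict[str, float] = field(default_factory=dict)
    # head orientation as (pitch, yaw, roll) in degrees
    head_rotation: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    # eye gaze as (pitch, yaw) in degrees, one pair per eye
    left_eye_gaze: Tuple[float, float] = (0.0, 0.0)
    right_eye_gaze: Tuple[float, float] = (0.0, 0.0)

# Example: a half-open jaw with the head turned slightly to one side.
frame = TrackingFrame(blendshapes={"jawOpen": 0.5, "smileLeft": 0.3},
                      head_rotation=(2.0, -10.0, 0.0))
print(frame.blendshapes["jawOpen"])
```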

Here are some notes on the installation and setup of the software. These are provided purely on the basis of end-user experience: I am simply an alpha user with no additional connection to High Fidelity itself. You will find an extensive (though currently - Apr 2014 - slightly outdated) set of instructions on Faceshift’s web site at
http://support.faceshift.com/support/solutions/articles/121867-

…and these notes don’t attempt to replace or duplicate them.

Also see these Wiki links posted by Ryan.

1. Supported sensors

Faceshift supports the PrimeSense Carmine 1.09 (no longer easily available), the Asus Xtion Pro Live, and the Microsoft Kinect for Xbox 360 (with unofficial drivers).

Of the available sensors, the Asus unit provides the best results and is easily acquired (eg via Amazon). That said, it is the only unit I have used, so I can't comment on the others. I have also only used the system on Macintosh OS X.

2. Software Installation

You’ll need the High Fidelity version of the Faceshift software, and an accompanying activation code. I obtained these from Emily, who also supplied some useful notes on getting started (thanks, Emily!).

I plugged in the Xtion before starting the installation process. You certainly need it plugged in before launching the software itself. On first plugging it in you should see the scanner illuminate red briefly.

You’ll see pictures of the Xtion mounted on top of a monitor, but mine did not come with any hardware to allow this. Instead, I placed mine on the desk under the monitor, looking up at me. My personal view is that the sensor needs to be looking directly at you from immediately above or below the screen, and probably about arm’s length away (ie roughly the distance your monitor should be). The Tracking section of the software includes an indicator that shows how far your head is from the sensor and whether it is tracking accurately. If the indicator sits too close to either end of the distance scale, move the sensor a little.
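
For what it's worth, the indicator is essentially telling you whether your head falls inside a comfortable band of distances; here is a toy sketch of that idea, with thresholds that are purely my own guesses:

```python
# Toy version of the distance indicator: classify head distance into bands.
# The near/far limits are illustrative guesses, not Faceshift's actual values.
def distance_feedback(head_distance_m: float,
                      near_limit_m: float = 0.5,
                      far_limit_m: float = 1.0) -> str:
    if head_distance_m < near_limit_m:
        return "too close - move the sensor (or yourself) back a little"
    if head_distance_m > far_limit_m:
        return "too far - move the sensor (or yourself) closer"
    return "in range - tracking should be reliable"

print(distance_feedback(0.75))  # -> "in range - tracking should be reliable"
```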

You will be asked to select a driver. There are two drivers offered on startup, via a popup menu. You want the first of them (the default). The “1.09” version doesn’t work with the Asus (I presume it would work with the PrimeSense however). You’ll need to select this every time you launch Faceshift (why?).

Following installation, you will be asked to enter the activation code. Having done so, click the Activate button. Any problems with this are outside the scope of this document. If all is well, you should now see a continuous red light from the scanner window whenever the software is launched.

The unit has both an optical camera and what appears to be an IR 3D scanner. As a result of the latter, the ambient lighting levels do not seem to influence performance unduly.

3. Training notes

Faceshift needs to register a fairly extensive set of basic expressions with which to animate an avatar; it also interpolates between these to follow changing expressions smoothly. The key elements of these expressions are the mouth, eyes and eyebrows. To capture them, you “train” the software by selecting an expression from a list, making it, capturing and storing it, and then going on to the next.
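
For intuition, the usual way such a set of captured expressions is combined is a linear blendshape mix: the current face is the neutral face plus a weighted sum of each expression's offsets from neutral. A toy sketch of that idea (my own illustration, not Faceshift's code):

```python
# Linear blendshape mixing: neutral + sum(weight * (expression - neutral)).
from typing import Dict, List

def blend(neutral: List[float],
          expressions: Dict[str, List[float]],
          weights: Dict[str, float]) -> List[float]:
    """Blend vertex positions from a neutral face and weighted expressions."""
    result = list(neutral)
    for name, weight in weights.items():
        target = expressions[name]
        for i, (n, t) in enumerate(zip(neutral, target)):
            result[i] += weight * (t - n)
    return result

# Toy 1-D "mesh" with three vertices and two trained expressions.
neutral = [0.0, 0.0, 0.0]
expressions = {"smile": [0.0, 1.0, 0.0], "jaw_open": [0.0, 0.0, 2.0]}
print(blend(neutral, expressions, {"smile": 0.5, "jaw_open": 0.25}))
# -> [0.0, 0.5, 0.5]
```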

You’ll find that the system is remarkably tolerant of things you might have about your face, such as eyeglasses. However, when training the software, do make sure that you are wearing any devices you expect to be wearing in-world that might obscure your eyes, eyebrows or mouth - especially glasses and headsets. If you train the system without a headset and then wear one to go in-world, for example, you’ll find the accuracy is significantly compromised.

The fundamental setup and use of the system is covered effectively in Faceshift’s Support documentation, although it currently seems a little out of date: the latest version of the software appears simpler than the documented one. Start here:
http://support.faceshift.com/support/solutions/articles/121867-

…and work through the setup, training and tracking phases in order. Allow a good few minutes for the training process, as there are quite a few expressions to capture. For each expression, you’re asked to assume it and then capture it: start with your head central and stationary, then move it from side to side a few times. An idealised mesh head shows you the expression and movement required.

4. Operation notes

Once you have set everything up, you will run the Faceshift software in Tracking mode and keep the application open in this mode while you’re using Interface.

An important Faceshift setting to note is in the Network section of the left-hand settings panel. You will need to check the Network Streaming box. If you don’t, the motion data is never transferred to animate your avatar: in other words, it simply doesn’t work.
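
If you're curious what that checkbox actually does: with it enabled, Faceshift streams the live tracking data over a local TCP connection that Interface reads. Here is a minimal sketch of a client that just confirms data is flowing; the host and port are assumptions on my part (33433 is, I believe, the usual default), and parsing the binary stream format is beyond the scope of these notes:

```python
# Connect to a Faceshift-style network stream and confirm data arrives.
# Host/port are assumed values; the payload format is not parsed here.
import socket

FACESHIFT_HOST = "127.0.0.1"   # Faceshift running on the same machine
FACESHIFT_PORT = 33433         # assumed default streaming port

with socket.create_connection((FACESHIFT_HOST, FACESHIFT_PORT), timeout=5) as sock:
    # Read a few raw packets just to check that tracking data is being sent.
    for _ in range(5):
        data = sock.recv(4096)
        if not data:
            break
        print(f"received {len(data)} bytes of tracking data")
```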

It’s quite likely that you’ll find your default head position is not quite upright. I would suggest going in-world to see what your natural position is when working there. Then, while holding that position, click the “Orient Head” button in the Tracking window in Faceshift. This sets your current actual head position as the “upright, straight-ahead” default for your avatar.
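
Conceptually, “Orient Head” just records your current pose as a reference and reports everything afterwards relative to it. A simplified sketch of that idea (my own illustration, not Faceshift's implementation; a real version would use quaternions rather than raw angles):

```python
# Calibrate a reference head pose and report later poses relative to it.
from typing import Tuple

Euler = Tuple[float, float, float]   # (pitch, yaw, roll) in degrees

class HeadCalibration:
    def __init__(self) -> None:
        # the pose treated as "upright, straight ahead"
        self.reference: Euler = (0.0, 0.0, 0.0)

    def orient_head(self, current: Euler) -> None:
        """Record the current pose as the new reference ('Orient Head')."""
        self.reference = current

    def relative(self, current: Euler) -> Euler:
        """Return the head pose relative to the calibrated reference."""
        pitch, yaw, roll = (c - r for c, r in zip(current, self.reference))
        return (pitch, yaw, roll)

cal = HeadCalibration()
cal.orient_head((4.0, -7.0, 1.0))        # your natural working posture
print(cal.relative((4.0, -7.0, 1.0)))    # -> (0.0, 0.0, 0.0): reads as upright
print(cal.relative((4.0, 13.0, 1.0)))    # -> head turned 20 degrees to one side
```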

The Faceshift software not only tracks your expressions, but also your head position and distance from the sensor. You can use these in-world by setting up Interface to allow head turning to determine the direction you are looking in (currently horizontal only) and leaning (forward, backward or sideways) to control motion. The latter is fairly amusing - it looks a little as if you are ice-skating.
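
A rough sketch of that mapping, with made-up thresholds and a simple dead zone so ordinary fidgeting doesn't send you skating off (again my own illustration, not Interface's actual code):

```python
# Map head yaw to look direction and lean to movement, with a dead zone.
def head_to_controls(yaw_degrees: float, lean_forward_cm: float,
                     lean_side_cm: float, dead_zone_cm: float = 3.0):
    controls = {"look_yaw": yaw_degrees}   # horizontal look follows head yaw
    if lean_forward_cm > dead_zone_cm:
        controls["move"] = "forward"
    elif lean_forward_cm < -dead_zone_cm:
        controls["move"] = "backward"
    elif abs(lean_side_cm) > dead_zone_cm:
        controls["move"] = "right" if lean_side_cm > 0 else "left"
    else:
        controls["move"] = "none"
    return controls

print(head_to_controls(yaw_degrees=15.0, lean_forward_cm=5.0, lean_side_cm=0.0))
# -> {'look_yaw': 15.0, 'move': 'forward'}
```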

I don’t know about you, but as the co-host of a weekly TV show on design and designers in virtual worlds (Designing Worlds), there’s hardly a day when I don’t wish I could do more facial expressions in Second Life and virtual worlds like it. Here in High Fidelity, we suddenly can, and it’s a major advance. Enjoy!


#2

That’s very interesting - thanks for that. I was looking at getting the Kinect working with Faceshift on the Mac, but it looked like a world of hurt (https://github.com/avin2/SensorKinect). The Kinect only works if you’re far enough away for it to see you; sat at a desk, I need to sit so far back that I can’t reach the keyboard lol. The PrimeSense seems to work closer. Once the Windows version of High Fidelity is out I’ll be back in my comfort zone and give your tutorial a go.

I spoke too soon - I have the Kinect working on the Mac with the Faceshift trial version. I need to try it with the High Fidelity version.


#3

The Asus is entirely capable of doing more than the head, I believe. However, Faceshift is really at its best doing head-only, as it’s looking for facial detail. If it were interested in doing hands too, that would be cool - my impression is that this is entirely within the capabilities of the Asus device. I wonder if we can use LeapMotion for that?


#4

I guess this won’t work if you have an OculusVR on :-p. Looking forward to trying it out.


#5

Once everything is set up, can it be removed to use for other things (ie actually using it for Xbox and other stuff around the house), or would you have to start at square one each time you set it up for Faceshift?