It is a JavaScript implementation of constrained local models fitted by regularized landmark mean-shift, as described in Jason M. Saragih's paper "Deformable Model Fitting by Regularized Landmark Mean-Shift".
It tracks a face and outputs the coordinate positions of the face model as an array, following the numbering of the model below:
The script can be tested here:
For those who want to experiment with this script and develop further applications (for driving avatar emotions, for example; note that it will not work in VR, since the HMD covers the user's face), the files can be downloaded here:
http://transmissiongate.com/emotiondetector.rar or: https://github.com/auduno/clmtrackr
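As a starting point, here is a minimal usage sketch based on the setup shown in the clmtrackr README. The `clm.tracker` calls need a webcam-backed `<video>` element and the library loaded in a browser, so they are shown as comments; the `boundingBox` helper below them is a hypothetical example (not part of clmtrackr) showing one way to consume the landmark array the tracker outputs:

```javascript
// Browser setup (assumes clmtrackr is loaded via <script> and the page
// contains a <video id="videoel"> element showing the webcam stream):
//
//   var ctrack = new clm.tracker();
//   ctrack.init();
//   ctrack.start(document.getElementById('videoel'));
//
//   // Each frame, ctrack.getCurrentPosition() returns false until a face
//   // is found, then an array of [x, y] pairs, one per numbered landmark.

// Hypothetical helper: bounding box of a landmark array,
// e.g. for cropping the detected face region out of the video frame.
function boundingBox(positions) {
  var xs = positions.map(function (p) { return p[0]; });
  var ys = positions.map(function (p) { return p[1]; });
  var minX = Math.min.apply(null, xs);
  var minY = Math.min.apply(null, ys);
  return {
    x: minX,
    y: minY,
    width: Math.max.apply(null, xs) - minX,
    height: Math.max.apply(null, ys) - minY
  };
}
```

A polling loop with `requestAnimationFrame` is the usual way to read the positions each frame and feed them to such a helper.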