EmoVoice - a possible approach for avatar emotions


#1

I’ve found this set of tools, which allows you to build a real-time emotion recognizer based on acoustic properties of speech. I’ve tested it and it runs well on Windows. It could be used to drive the avatar’s emotions.

You can download the tools from:

https://github.com/hcmlab/emovoice
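As a rough sketch of how the recognizer’s output could drive an avatar: SSI pipelines can stream events over a local socket, so one option is to listen for predicted emotion labels and map them to avatar expressions. Everything below is an assumption for illustration — the port, the plain-text label format, and the label set (taken from Emo-DB) all depend on how the pipeline is configured.

```python
import socket

# Hypothetical mapping from recognized emotion labels to avatar
# expressions; the real label set depends on the trained model
# (Emo-DB, for example, covers anger, boredom, disgust, fear,
# happiness, sadness, and neutral).
EXPRESSIONS = {
    "anger": "frown",
    "happiness": "smile",
    "sadness": "sad",
    "fear": "wide_eyes",
    "neutral": "idle",
}

def map_emotion(label: str) -> str:
    """Return the avatar expression for a recognized emotion label,
    falling back to 'idle' for anything unrecognized."""
    return EXPRESSIONS.get(label.strip().lower(), "idle")

def listen(host: str = "127.0.0.1", port: int = 1234) -> None:
    """Receive emotion labels over UDP and print the matching expression.

    Assumes the pipeline sends each predicted label as a plain-text
    UDP datagram to this port; adjust the parsing to your setup.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(1024)
        label = data.decode("utf-8", errors="ignore")
        print(f"{label} -> {map_emotion(label)}")
```

Calling listen() would then block and translate incoming labels into expression names that the avatar animation layer could consume.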

Platform

Windows

Dependencies

Visual C++ Redistributable for Visual Studio 2015
https://www.microsoft.com/en-us/download/details.aspx?id=52685
Python 3.x (https://www.python.org/downloads/)

Quick Guide

Run do_all.cmd

Documentation

https://rawgit.com/hcmlab/emovoice/master/docs/index.html

Credits

SSI -- Social Signal Interpretation Framework 
http://openssi.net

LIBSVM -- A Library for Support Vector Machines 
https://www.csie.ntu.edu.tw/~cjlin/libsvm/

LIBLINEAR -- A Library for Large Linear Classification
https://www.csie.ntu.edu.tw/~cjlin/liblinear/

openSMILE -- The Munich Versatile and Fast Open-Source Audio Feature Extractor 
http://audeering.com/technology/opensmile/

Emo-DB -- Berlin Database of Emotional Speech 
http://emodb.bilderbar.info/start.html

License

The framework is released under the LGPL (see LICENSE). Please note the custom license files for the plug-ins (see LICENSE.*).

Author

Johannes Wagner, Lab for Human Centered Multimedia, 2017