Urgent Help Needed - Lip-syncing a character to a voice-over .wav


#1

Hi all,

I could really use some help. I am on a deadline, trying to figure out how to make a custom character I built in Fuse and Mixamo talk inside High Fidelity, as an animated .FBX synced to a voice-over .wav file.

Unfortunately I have found very little documentation about this.

I posted 9 days ago already but got no response, and I am getting desperate now.

Now I am trying to do my lip sync in iClone, since it has some automated lip-syncing features. When I import my Fuse/Mixamo FBX character, it comes in with the facial blendshapes, but they are not the phoneme shapes like EE, O, etc. Instead it has mouth open, jaw down, and so on, which somehow have to be converted to the phoneme shapes. I was hoping to find an iClone Character Profile file that maps the Mixamo blendshapes to iClone phonemes. Does anyone have any experience or suggestions for this, or an iClone profile for Mixamo?

I somehow need to get my Fuse Mixamo character to talk today.

Now I am even considering other character-generation packages I don't have experience with, like Daz or Poser, if their phonemes are set up to be more plug-and-play with a voice-over .wav file. However, I would prefer to use Mixamo, knowing that the .FBX characters it makes convert easily to a High Fidelity Avatar .FST file.

Thanks in Advance.


#2

You can't.
We used to support Faceshift, then it got bought by Apple and buried in the iPhone X.
Now we don't.
sighs

@Menithal has a method to trigger them manually, if that's any use?


#3

I’m not familiar with iClone or how it does lip syncing, and I’m not sure how you are doing the recording. However, here are some suggestions.

  1. If you route the pre-recorded voice .wav file through the microphone, the avatar will perform some amount of lip syncing. Admittedly, this is not as high quality as an offline tool will provide, but it’s something.
  2. There is procedural control over the blendshapes from our JavaScript API; an example is the scripts/developer/facialExpression.js app. You can use the H, J, K, L, V, B, M and N keys to change the expression of your avatar while talking.
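For reference, suggestion 2 could be sketched as a small client script. This is only a rough sketch: the blendshape names and the MyAvatar.setBlendshape / Controller.keyPressEvent calls are my assumptions about the scripting API, so check them against scripts/developer/facialExpression.js in your build.

```javascript
// Hypothetical sketch: drive Faceshift-style blendshapes from keyboard input
// inside High Fidelity's Interface script engine. The shape names and weights
// below are illustrative guesses, not the values facialExpression.js uses.

// Map keys to expression "poses": each pose is a set of blendshape weights.
var EXPRESSIONS = {
    H: { "JawOpen": 0.6 },                              // open mouth ("ah")
    J: { "MouthSmile_L": 0.8, "MouthSmile_R": 0.8 },    // smile
    K: { "BrowsU_L": 1.0, "BrowsU_R": 1.0 },            // raised brows
    L: { "MouthFrown_L": 0.7, "MouthFrown_R": 0.7 }     // frown
};

// Pure helper: given a key, return the blendshape weights to apply (or null).
function expressionForKey(key) {
    return EXPRESSIONS.hasOwnProperty(key) ? EXPRESSIONS[key] : null;
}

// Wire up to Interface only when its globals exist, so the mapping above can
// also be inspected outside of High Fidelity.
if (typeof Controller !== "undefined" && typeof MyAvatar !== "undefined") {
    Controller.keyPressEvent.connect(function (event) {
        var pose = expressionForKey(event.text.toUpperCase());
        if (pose) {
            Object.keys(pose).forEach(function (name) {
                MyAvatar.setBlendshape(name, pose[name]);
            });
        }
    });
}
```

Loaded as a running script in Interface, pressing one of the mapped keys would push the corresponding weights onto the avatar while the microphone-driven lip sync handles the mouth.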

Hope this helps,
Tony


#4

High Fidelity uses Faceshift conventions, which is a different approach than phonemes. I have loosely documented the different shapes here:


under the Blendshapes Reference, where I describe the shapes and point out examples.

Unfortunately the tech is sort of “taken” since Apple acquired it, and not many use it now other than Apple’s own tech.

This is because in the past, you could actually animate the avatars using a web camera, as demonstrated here:

Unfortunately that feature has not existed for years now, but the leftovers are still there, with the blendshapes still being Faceshift-specific.

If you /really/ need to force it, you can do a few things:

A. Actually create the phoneme shape keys in the models, then feed the voice-over through the microphone so it plays back using the automated generation. Extra emotions can be controlled with scripts, as @hyperlogic pointed out.

B. Bind the phonemes to shape keys that are not driven by the engine when talking via the microphone (LipsUpperClose, LipsLowerClose, LipsUpperUp, and other micro-expressions, as described in the forum thread), then trigger them via keyboard commands.
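Option B could also be automated instead of keyboard-triggered: pre-author a cue list against the voice-over timeline and schedule it with Script.setTimeout when you start the .wav. A minimal sketch, assuming Script.setTimeout and MyAvatar.setBlendshape behave as in the scripting API I remember (the cue times and weights are made up for illustration):

```javascript
// Hypothetical cue list: (time, blendshape, weight) pairs authored against
// the voice-over timeline. LipsUpperClose / LipsLowerClose are from the
// blendshapes reference mentioned above.
var CUES = [
    { timeMs:   0, shape: "LipsUpperClose", weight: 0.0 },
    { timeMs: 120, shape: "LipsUpperClose", weight: 0.9 },  // "m/b/p" closure
    { timeMs: 240, shape: "LipsLowerClose", weight: 0.9 },
    { timeMs: 400, shape: "LipsUpperClose", weight: 0.0 }
];

// Pure helper: cues sorted by time, without mutating the input, so playback
// order is deterministic even if the list was authored out of order.
function sortedCues(cues) {
    return cues.slice().sort(function (a, b) { return a.timeMs - b.timeMs; });
}

// Inside Interface, schedule each cue relative to pressing play on the .wav.
if (typeof Script !== "undefined" && typeof MyAvatar !== "undefined") {
    sortedCues(CUES).forEach(function (cue) {
        Script.setTimeout(function () {
            MyAvatar.setBlendshape(cue.shape, cue.weight);
        }, cue.timeMs);
    });
}
```

For a several-minute speech you would generate the cue list offline (e.g. from whatever phoneme timing your lip-sync tool exports) rather than hand-author it.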


#5

I wonder if FaceRig has an API. That would be cool.


#6

Thank you very much for your replies.

I need a Character Profile for iClone to map my Fuse/Mixamo mouth blendshapes to the phonemes that iClone uses, for instance matching phonemes like EE, O, U to Mixamo blendshapes such as open jaw. Does anyone have a profile, a template, or an idea of how to do this?
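In the absence of a ready-made profile, the mapping being asked for is essentially a table like the following. The shape names and weights here are guesses for illustration, not a real iClone profile; iClone's own profile format would still have to be authored from a table like this.

```javascript
// Illustrative phoneme-to-blendshape table: each phoneme/viseme is
// approximated as a weighted mix of the generic mouth shapes a Fuse/Mixamo
// export actually ships with. All names and weights below are hypothetical.
var PHONEME_MIX = {
    "AA": { "jawOpen": 0.8, "mouthOpen": 0.6 },    // "ah": wide open
    "EE": { "mouthSmile": 0.6, "mouthOpen": 0.2 }, // "ee": spread lips
    "O":  { "jawOpen": 0.5, "mouthO": 0.8 },       // "oh": rounded
    "M":  { "mouthClose": 1.0 }                    // "m/b/p": lips shut
};

// Resolve a phoneme into concrete blendshape weights (empty if unknown).
function weightsFor(phoneme) {
    return PHONEME_MIX[phoneme] || {};
}
```

Whether it lives in an iClone profile or in a High Fidelity script, the same table drives the conversion, so it only needs to be authored once.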

I did try using an iClone character, and it worked great with the auto lip-syncing, but it created a pretty heavy FBX, heavier than the ones I made in Mixamo. Plus, Mixamo has an easy route for creating avatars, but I haven't seen anyone talking about going from iClone to a High Fidelity avatar, so I am assuming it doesn't package things the same way Mixamo does, which works with HF.

Well, I will keep working on it, trying to find the best way to have a lip-synced character deliver a several-minute speech.

Thanks in Advance for the help!