Rigged Facial animation rather than blendshapes


#1

@ozan

I stumbled across this avatar built in Blender using a facial rig rather than blendshapes. It's free to mess with non-commercially.

http://blenderartists.org/forum/showthread.php?204814-Flick-Free-Character-Rig-(Blender-2-55)

It would be very cool if it worked in HiFi.

http://dl.dropbox.com/u/1250100/Flick_v.01.5.blend (the Blender file)


#2

Yes. Very cool. Currently, we only have support for blendshape facial animation. It’s a good, high-performance way to render facial expressions.

Ultimately, we may want to support a bone-based facial rig that we can drive in real time. This tool would be very cool for that. I’ll share it with the team.

In the meantime, you can use Flick’s facial rig to pose the shapes you need for the blendshape set. Then copy the mesh and repeat for all the other blendshapes.

So, in other words:

  • Open Flick in Blender
  • Pose her face, e.g. jaw open.
  • Copy or export the mesh and name it jaw open.
  • Repeat for all the other blendshapes.
  • Build the blendshape set in your app of choice. I use Maya.
  • Map the shapes in an FST (see the example after this list).
  • Load up in Interface and drive the avatar with a 3D or 2D camera capture system.
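
For reference, the FST mapping step looks roughly like the snippet below. This is only a sketch: the left-hand names are the standard Faceshift blendshape names, the right-hand names are whatever you called your exported shapes (jaw_open and friends are made up here), and the final number is a multiplier.

    name = flick
    type = body+head
    filename = flick.fbx
    texdir = textures
    bs = JawOpen = jaw_open = 1
    bs = EyeBlink_L = blink_left = 1
    bs = EyeBlink_R = blink_right = 1
    bs = MouthSmile_L = smile_left = 1
    bs = MouthSmile_R = smile_right = 1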

Ozan


#3

With my Faceshift script I could move all the face bones of my avatar in HiFi.
Is something different meant here?

https://alphas.highfidelity.io/t/what-i-have-learned-today/4597/26
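
Roughly, the bone-posing part looks like this. It is a simplified sketch, not the exact script: it assumes the skeleton really has a joint named "jaw" (check what your rig calls it) and that MyAvatar.setJointRotation takes a joint name and a quaternion.

    // Rough sketch: open and close the avatar's jaw joint from a script.
    // "jaw" is an assumed joint name - your rig may use something else.
    var elapsed = 0;

    Script.update.connect(function (deltaTime) {
        elapsed += deltaTime;
        // Swing the jaw pitch between 0 and 20 degrees with a sine wave.
        var openAngle = 10 * (1 + Math.sin(elapsed * 2));
        MyAvatar.setJointRotation("jaw", Quat.fromPitchYawRollDegrees(openAngle, 0, 0));
    });

    Script.scriptEnding.connect(function () {
        // Give the joint back to the default animation system on exit.
        MyAvatar.clearJointData("jaw");
    });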



#4

It’s really cool that you’re posing the face bones. I have thought about that, but the problem with using FaceShift, I think, is that it’s still designed to trigger morphs/blendshapes based on the 3D camera data. I’m not sure of the details of the new DDE approach yet either, but unless either of them can be made to work by posing the face joints, the solution Ozan has proposed will still be the most suitable.
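
For contrast, the blendshape-driven path that FaceShift (and presumably DDE) feeds into looks conceptually like this from a script. This is only a sketch: it assumes a MyAvatar.setBlendshape(name, value)-style call and the standard Faceshift shape names, and the coefficient values are made up.

    // Sketch: drive blendshapes (not bones) from 0..1 coefficients,
    // the way a camera-capture pipeline would, once per frame.
    function applyFaceCoefficients(coefficients) {
        for (var shapeName in coefficients) {
            MyAvatar.setBlendshape(shapeName, coefficients[shapeName]);
        }
    }

    // One example frame of capture data (made-up values):
    applyFaceCoefficients({
        JawOpen: 0.4,
        MouthSmile_L: 0.7,
        MouthSmile_R: 0.7,
        EyeBlink_L: 0.0,
        EyeBlink_R: 0.0
    });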


#5

I thought about this a bit more.
I like the idea of using bones for animations, but I’m starting to see the major downsides to it:

The main issue would be accommodating variance for cross-compatibility: bone animations work by either translations or rotations.

But unlike with limbs, we need much finer control over these bones.

Each avatar would have a different facial bone layout, resulting in different required rotations or translations for each custom avatar. It’s easier with limbs, as those extremities tend to be proportioned similarly across most avatars, but with faces you get different shapes even among humans.

This is why most games use blendshapes rather than bones to animate faces, while the animation industry tends to use face bones (since they can afford to fine-tune each and every animation, use animation constraints, and don’t have to worry about cross-compatibility as much): with blendshapes you just deform the mesh into specific shapes instead of worrying about playing back animations.
It also means the artist has more control over what happens to the mesh when it is rendered in real time.
With bones, by contrast, you may explode a head shape, as @summer4me…’s post https://alphas.highfidelity.io/t/what-i-have-learned-today/4597/36 has wonderfully demonstrated.

I mean, yes, you can do bone animations right now for your own custom avatar! But I don’t think it should be the standard method of doing things. It definitely should be possible to do, though!

Custom face animations would only work for the avatar they were created for, using the same topology and weights. As soon as you change that topology or have a different shape, you are going to have issues and will have to create an entirely new animation to accommodate that change. This is tolerable for differently shaped bodies, but on faces it is much, much more noticeable.

So if facial animation is done via bones, the animation script would have to be tunable to accommodate each and every avatar shape (see the sketch below). Failing that, HiFi would need customizable constraints for each and every bone, or we risk head-explosion syndrome when switching between avatars that use another animation (that is, after we solve the orientation issue between exporters).
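
To make "tunable per avatar" concrete, here is a rough sketch of the kind of per-avatar table such a script would need. Every joint name and limit below is made up; the point is only that each avatar needs its own mapping and clamping, which is exactly the maintenance burden I mean.

    // Sketch: per-avatar tuning for bone-driven facial animation.
    // Each avatar needs its own joint names and rotation limits.
    var FACE_TUNING = {
        "flick": { jawJoint: "jaw",     maxJawPitchDegrees: 22 },
        "robot": { jawJoint: "JawBone", maxJawPitchDegrees: 35 }
    };

    // Clamp a 0..1 "jaw open" amount into whatever this avatar's rig can take.
    function setJawOpen(avatarKey, amount) {
        var tuning = FACE_TUNING[avatarKey];
        if (!tuning) {
            return; // unknown avatar: safer to do nothing than to explode the head
        }
        var clamped = Math.max(0, Math.min(1, amount));
        var pitch = clamped * tuning.maxJawPitchDegrees;
        MyAvatar.setJointRotation(tuning.jawJoint, Quat.fromPitchYawRollDegrees(pitch, 0, 0));
    }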

Instead, I think we should focus on getting existing features up to date and polished, such as actual blendshape animation support for both FBX animations and scripting.


#6

Yes @Menithal, I see it the same way. Still, I have learned a lot from working with the bone system, and I have the feeling I need to learn more about it. I think it should be possible to read in the specific bone system and have the script adapt the pose automatically. No clue whether I will ever understand enough to do it. The problem is that at the moment we have to fight on several fronts: if something goes wrong, it is never clear whether it is an error in my script or an error in the core.