First steps in character animation


#1

I’ve been having some fun with animations today - I’ve got a rough fbx pipeline established for producing HiFy-compatible fbx animations and have written a script that animates my avi. I was very happy with the fast progress until…

I found I cannot access the avi translation data contained in the fbx, so my avi is wiggling around on the spot with his hips fixed. It looks very odd, like he’s being dangled in mid-air.

I’d planned to get the avi’s (global) position and add the Hips translation data from the fbx before setting the avi’s new position. It’s a bit of a kludge doing it that way, and will lead to foot sliding etc. depending on different avi proportions, but it works OK and is way simpler than other options (e.g. matching footfalls with ground collisions and calculating appropriate offsets).
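
For what it’s worth, here’s a minimal sketch of the kludge I have in mind. The hipsTranslations array is a stand-in for data I’d have to pre-extract from the fbx myself (since it can’t be read via the API yet), and I’m assuming MyAvatar.position, Vec3.sum and Script.update behave as they do in the published example scripts:

```javascript
// Hypothetical: per-frame Hips translations pre-extracted from the fbx,
// expressed as offsets (in metres) from the first frame.
var hipsTranslations = [
    { x: 0.00, y: 0.00, z: 0.00 },
    { x: 0.02, y: 0.01, z: 0.05 },
    { x: 0.04, y: 0.00, z: 0.11 }
    // ...one entry per animation frame
];

var FRAME_RATE = 30;                    // frames per second in the exported animation
var startPosition = MyAvatar.position;  // avi's global position when playback starts
var elapsedTime = 0;

Script.update.connect(function (deltaTime) {
    elapsedTime += deltaTime;
    var frame = Math.floor(elapsedTime * FRAME_RATE) % hipsTranslations.length;
    // Add this frame's Hips offset to the avi's starting position.
    MyAvatar.position = Vec3.sum(startPosition, hipsTranslations[frame]);
});
```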
Digging through the source code, I see the AnimationCache class gives access to AnimationFrames, but an AnimationFrame (the FBXAnimationFrame class) only contains rotation data - no translations.

My Question: Is there a way to access the fbx file’s translation data via js? If so, how?

Note, the translation data I exported is on the Hips joint only. I believe there are ways to export and use translation data on all joints, but things get pretty complicated that way. That said, it would ultimately lead to far higher fidelity animation…

  • Dave

#2

Hi @davedub - have a look at these example scripts http://s3-us-west-1.amazonaws.com/highfidelity-public/scripts/proceduralAnimationAPI.js and http://highfidelity-public.s3-us-west-1.amazonaws.com/scripts/bot_procedural.js for some examples of controlling animations. Something to note: if you are using your own custom AV FBX, it will have unique bone names, so the animations will need to be mapped to those bones. I use Mixamo.com for all my animations on custom avatars.
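
Roughly, the mapping I mean looks something like this - the joint names below are invented for illustration, so check your own FBX for the real ones:

```javascript
// Hypothetical mapping from the animation's joint names to a custom avatar's
// bone names - the real names depend on how your FBX was rigged and exported.
var jointNameMap = {
    "Hips": "mixamorig_Hips",
    "Spine": "mixamorig_Spine",
    "LeftArm": "mixamorig_LeftArm"
    // ...and so on for every animated joint
};

// Look up the custom avatar's name for a joint before applying a rotation.
function setMappedJointData(animationJointName, rotation) {
    var avatarJointName = jointNameMap[animationJointName] || animationJointName;
    MyAvatar.setJointData(avatarJointName, rotation);
}
```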


#3

@chris - thanks for your reply - procedural animation looks like it’s going to be a lot of fun; I can’t wait to start playing with that!

However, it doesn’t really offer a solution to my current problem. To better explain the issue, I’ve put a YouTube movie up - I hope it’s OK to do so; if not, let me know and I’ll take it down straight away. To be on the safe side, the movie is unlisted.

Movie here: http://www.youtube.com/watch?v=iiQfqBNp0oM (You may need to pause it to read the text)

  • Dave

#4

@davedub can you send me that model?


#5

@davedub http://www.youtube.com/watch?v=-dwCXMc2V0U
Thought you might be interested in this rig. It’s the Blender rigify one but with facial bones added. I wonder if those could be:
1) used to directly animate the face, i.e. bypass the blendshape process and hook the bones directly to Faceshift
2) used to automate the blendshape creation process for Faceshift
3) Beer


#6

Hi @chris,

The one playing in HiFy is this one:

http://s3-us-west-1.amazonaws.com/highfidelity-public/animations/gangnam_style_5.fbx

The one playing in my old version of MotionBuilder is this one (same as above, just converted to fbx2006 format using Autodesk’s free FBX converter):

http://davedub.co.uk/downloads/hf/gangnam_style_5.fbx

Is there any news on how to read hips translation data from the fbx using the js API?

Cheers,

  • Dave

#7

@Judas - That looks great - will have a go at importing some animations using that rig, see if we can get some gurning going :wink:

I’m not too familiar with Blender yet - can you send me a copy of the rig? It’s easiest for me if you can send it in fbx format, if at all possible…

Cheers,

  • Dave

#8

I exported it a few ways: just the rig as ASCII and binary,
the same again with the control gubbins,
and just the Blender file with it all in there.

It turned out a bit fussy to install - you have to swap out the contents of the rigify folder with the ones from GitHub https://github.com/pitchipoy/rigify
This rig is really way beyond me (the Second Life one I can just about handle lol)


#9

@Judas - cool, thanks for those. I’ve just got back to Thailand, have got some other work I have to clear, but will be on this within a day or two. Will post results back here asap.

  • Dave

#10

@Judas - problem with the download - the zip file’s empty :wink: Could you re-upload? The other option is to email directly: dave at davedub dot co dot uk…

Cheers!

  • Dave

#11

https://dl.dropboxusercontent.com/u/10483952/pitchpoyrig.zip
Whoops - blames, um, the government.

Second time’s the charm @davedub


#12

@Judas - you can blame the government for everything. After all, most of the world’s ills do seem to be their fault these days!

I’ve had a good look at that rig. Initial thoughts:

  1. Love the number of spine joints, would make for very natural movement.
  2. Love the extra breast bones, would make for some very interesting movement ;-).
  3. Not sure about the facial bones though - I don’t yet know enough about blendshapes to compare, but the blendshapes used for facial tracking already do seem to be doing a good job - do you think there are advantages to using facial joints over blendshapes?
  4. The hand hierarchy is a bit odd, as the thumb joints are parented off the base index finger bone. I’d need to have a look at what datagloves etc expect to find - will post back here once I’ve researched a bit more…

I also really need to know what’s going to happen regarding the lack of hips translation data available when scripting using animation / mocap data. Would you be able to help nudge @chris for an answer on that one?

Have got some RL work to clear up tomorrow, but will be back on this asap.

  • Dave

#13

Re advantages: when we make the blendshapes for Faceshift we have to make 42 different ones, which is kinda time consuming for each head you make. You can’t change the mesh afterwards to make it a different face without breaking the created blendshapes.
What I was speculating was whether facial joints could be created, animations played into them (smile, frown or whatever), and the process made to kinda generate and spit out those 42 blendshapes. It might be over-complicating things.

I really don’t know enough about the subtleties of rigging to know if it’s good or not. But whatever we pick, I think it’s important that it’s free for everyone to use any way they like.
At the meeting last night it was mentioned that Mixamo might be able to generate facial anims.
Also, in the last day or two a ragdoll animation has been added to the viewer. It’s early in development but really exciting.
Then I found this https://www.youtube.com/watch?v=jEQoQ5DzPMI which I’m messing with to see if it might pull through to HF.


#14

OK, am back on this. RL distractions cleared, I have time on my side again - back to the fun stuff!

Standard HiFy rig

I’ve been having a good play around with rigify and the advanced pitchpoyrig rig. The pitchpoyrig looks really good - lots of spine bones, roll bones throughout, etc. I don’t think the unusual rigify thing of parenting the thumb joint to the IndexFinger will be a problem, but it will need testing. I got a standard rigify rig up and running yesterday, and intend to tackle skinning a pitchpoyrig one to see if I can animate it in HiFy. If so, I’ll put it up publicly somewhere so it can be tested (particularly with animation-controlling input devices - Faceshift, datagloves, Hydras).

If anyone has any info on expected joint names for input devices, please let me know asap and I’ll make sure any required naming conventions are followed.

Once thoroughly tested, tweaked and trusted, I guess we’ll be able to accept the pitchpoyrig as the standard rig for HiFy, job done :slight_smile:

However, we’ll need to fully test importing animation data before that happens, and that requires being able to access (via the JS API) the Hips translation data from the FBX, which, unless I’ve missed something, isn’t currently possible…

Procedural animation and DSP

Procedural animation is new tech to me, and I’m very much interested in getting something up and running. I did some experiments the other night towards the conversion of existing mocap data to procedural animation data (in this case, a short series of sine waves and associated phase values). Obviously, much of the nuance in the mocap data would be lost, but the tradeoff in file size and blending capabilities makes it very much worth further investigation, particularly for walking, idle, sitting type animations. I don’t think the technology will prove suitable for longer, more complex sequences like dances.
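
To give a rough idea of what I mean by “a short series of sine waves and associated phase values”, here’s a minimal sketch. The joints, amplitudes, frequencies and phases are invented for illustration, and I’m assuming MyAvatar.setJointData and Quat.fromPitchYawRollDegrees work as they do in the procedural animation example scripts:

```javascript
// Hypothetical procedural walk data: each joint's pitch is driven by a single
// sine wave described by amplitude (degrees), frequency (Hz) and phase (radians).
var proceduralJoints = [
    { name: "Hips",       amplitude: 4.0,  frequency: 1.0, phase: 0.0 },
    { name: "LeftUpLeg",  amplitude: 25.0, frequency: 1.0, phase: 0.0 },
    { name: "RightUpLeg", amplitude: 25.0, frequency: 1.0, phase: Math.PI }
];

var cumulativeTime = 0;

Script.update.connect(function (deltaTime) {
    cumulativeTime += deltaTime;
    for (var i = 0; i < proceduralJoints.length; i++) {
        var j = proceduralJoints[i];
        // Evaluate this joint's sine wave at the current time.
        var pitch = j.amplitude * Math.sin(2 * Math.PI * j.frequency * cumulativeTime + j.phase);
        MyAvatar.setJointData(j.name, Quat.fromPitchYawRollDegrees(pitch, 0, 0));
    }
});
```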

I found a very easy-to-use DSP (digital signal processing) JavaScript library the other night, and got it to extract some fundamental frequencies from some random animation data using FFT. It all works nicely and could be used as a basis for uploading and converting animation files using JS in Interface.
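
The frequency extraction itself boils down to something like the following - a naive DFT written out longhand rather than the library’s FFT call, just to show the principle (the sample data here is faked):

```javascript
// Hypothetical input: one joint's pitch angle sampled at 30 frames per second.
var SAMPLE_RATE = 30;
var samples = [];
for (var n = 0; n < 90; n++) {  // fake three seconds of a 1 Hz, 20 degree sway
    samples.push(20 * Math.sin(2 * Math.PI * 1.0 * n / SAMPLE_RATE));
}

// Naive DFT: measure how strongly each candidate frequency is present,
// then keep the strongest one as the fundamental.
function findFundamental(samples, sampleRate) {
    var N = samples.length;
    var bestFrequency = 0;
    var bestMagnitude = 0;
    for (var k = 1; k < N / 2; k++) {
        var re = 0, im = 0;
        for (var n = 0; n < N; n++) {
            var angle = 2 * Math.PI * k * n / N;
            re += samples[n] * Math.cos(angle);
            im -= samples[n] * Math.sin(angle);
        }
        var magnitude = Math.sqrt(re * re + im * im);
        if (magnitude > bestMagnitude) {
            bestMagnitude = magnitude;
            bestFrequency = k * sampleRate / N;   // bin index -> Hz
        }
    }
    return bestFrequency;
}

print("Fundamental: " + findFundamental(samples, SAMPLE_RATE) + " Hz");
```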

But again, production of a fully working prototype is not currently possible, because there is no way of accessing the FBX translation data via the JS API - it’s very hard to walk without swaying one’s hips!

Missing animation translation data in JS

Basically, everything I’m trying to do is hampered by the inaccessibility of Hips FBX translation data, as described in posts above and demonstrated here: http://www.youtube.com/watch?v=iiQfqBNp0oM

As a result, I’m going to have a look at fixing it myself. I’ve had a good look through the source code for Interface and I think I can do it. I will put it up as a worklist suggestion and bid on it if it still looks like it’s within my capabilities after I’ve dug into the detail a bit further.

Quick shout to @Andrzej - I see you wrote most of the existing animation code in Interface. If there is any reason why access to Hips translation data in the JS API has not been implemented, please could you let me know asap so I don’t waste any time on it!

Ragdoll

@Judas - you mentioned ragdoll animation as a new feature in another post. I enabled the

Developer -> Avatar Options -> Collide As Ragdoll

setting briefly last night, but couldn’t see any differences when I collided with voxels. Could you point me in the right direction to see where we’re at with that? I’m particularly interested in how it blends / replaces other sources of animation…

Cheers!

  • Dave

#15

@davedub As far as I can tell, the ragdoll seems to work with avatar self-collisions, so if you have Hydras you can slap your own body about. Or a friend, if you’re so inclined. Oh god, I just thought of a poor joke about ragdoll.

Ragdoll is a fairly convincing technology but risks drifting into Frankie Uncanny Valli.

I’ll get my coat.


#16

We just haven’t gotten around to it yet. The plan is to support animating translations in all joints in the near future.


#17

Hey @Judas - ah, I don’t have Hydras as yet, and still have trouble connecting - will check it out next time I get a connection to someone’s domain.

On another note, have you created an avatar using the pitchipoy rig? I only ask because, although I’ve worked through the tutorial and have got a (very badly!) rigged and skinned model up and running, I don’t have a copy of Faceshift yet - so I don’t think there’s any way for me to convert to fst and test in HiFy…

I really need an avi made with pitchipoy so I can try and animate it - if you have one, could I get a copy? If not, could I be really cheeky and send my pitchipoy-rigged Blender file over to you and ask you very nicely to do the export and conversion to fst? Would be very much appreciated!

Cheers!

  • Dave

#18

@Andrzej, thanks for the reply - it’s reassuring to hear it’s in the pipeline. I know how irritating it is to be pressed on these things, but do we have any idea of the timescale?

I only ask so I can plan my projects accordingly…

  • Dave

#19

No, I’m afraid I can’t give you a specific time estimate just yet.


#20

I have been playing around with the JS animation stuff over the last few days, finding my way around the API. I have come up with this:

In making the animator, I’ve come across a few things I think are worth mentioning:

  • Translation data written to the hips seems to be overridden by ground following when walking on voxels. There’s a need for animation priorities (or layers) to avoid these sorts of issues.
  • The camera is tied to the avi’s position in third-person camera mode, so animation-driven translations cause the camera to jog about. A solution would be to write custom camera following into the script, but it would be better if we could untie the camera’s location from small animation-driven translations whilst still following the avi.
  • A few joints didn’t respond when rotation data was applied; in particular, the first spine joint and the toe joints. The first spine joint is used to add sashaying, so it’s a great loss! I don’t know if this is a problem with the model or the animation system.
  • The capability to blend between different animation layers over a specified timespan would open up a world of possibilities (see the sketch after this list for the sort of thing I mean).
  • I’m obviously conscious of resource usage - are there any known script memory (or any other) limits?
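
To illustrate the blending point above, here’s a minimal sketch of the sort of thing I mean. The two “layers” are just hard-coded poses here, and I’m assuming Quat.mix is available for interpolating between rotations (any slerp would do):

```javascript
// Two hypothetical animation layers, each a set of joint rotations.
// In practice these would come from two running animations rather than fixed poses.
var idlePose = { "LeftArm": Quat.fromPitchYawRollDegrees(5, 0, 0) };
var wavePose = { "LeftArm": Quat.fromPitchYawRollDegrees(60, 0, 30) };

var BLEND_DURATION = 0.5;   // seconds to cross-fade from idle to wave
var blendTime = 0;

Script.update.connect(function (deltaTime) {
    blendTime = Math.min(blendTime + deltaTime, BLEND_DURATION);
    var alpha = blendTime / BLEND_DURATION;     // 0 = all idle, 1 = all wave
    for (var jointName in idlePose) {
        // Assumes Quat.mix interpolates between two rotations.
        var blended = Quat.mix(idlePose[jointName], wavePose[jointName], alpha);
        MyAvatar.setJointData(jointName, blended);
    }
});
```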

Comments / suggestions / advice on how to proceed most welcome!

  • Dave