Description of avatar movement/position functions



On the HF blog I found a good map of the system architecture.
I am interested to know more about the functions and the system architecture inside the Avatar Mixer (and also the corresponding structures in the client) in order to understand how the movement of a group of avatars present in the same space is handled.

  • Are xyz coordinates used, or a velocity vector (with the direction of movement)?
  • Is there a time tag for each avatar movement? What is its precision in seconds? What is the sample rate/frequency?
  • How is avatar position data transmitted to all avatar clients present in the room?
  • Which data is included besides position? Flags?
  • Are there any movement prediction algorithms? Can they be improved through setup?
  • How could a module for monitoring, displaying, and recording those variables be coded in the client?

Thank you!


This post is AFAIK, so if anyone knows a bit more than me, feel free to correct me.

A lot of the Avatar-related functions connected to the Avatar Mixer can be accessed through the AvatarList interface (getAvatarIdentifiers and getAvatar). You can use it to track the number of users connected, and to track where they are in the domain. You can find an example in this avatar tracker I made a good while ago.

The Avatar object has basically most of the data MyAvatar contains, which includes position, rotation, and velocity. The data also includes current joint positions and blendshape coefficients, available through another interface. The PAL script should also give you a good overview of the available data. The Record.js script would be an example of recording these motions and then playing them back through a domain-script-initiated Agent.
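As a concrete illustration of the AvatarList calls mentioned above, here is a minimal polling sketch. getAvatarIdentifiers and getAvatar are the interface calls named in this post; the exact property names on the returned object (position, velocity, displayName) are my assumptions and should be checked against the API reference. The stub at the top only exists so the sketch runs outside the Interface client, where AvatarList is a provided global.

```javascript
// Stub standing in for the Interface-provided AvatarList global,
// so this sketch can run anywhere. Inside a client script, delete it.
var AvatarList = AvatarList || {
    getAvatarIdentifiers: function () { return ["avatar-uuid-1"]; },
    getAvatar: function (id) {
        return {
            sessionUUID: id,
            displayName: "TestAvatar",
            position: { x: 1, y: 0, z: -2 },  // metres; Y is up
            velocity: { x: 0, y: 0, z: 0.5 }  // metres per second (assumed field)
        };
    }
};

function snapshotAvatars() {
    // One sample per connected avatar: who they are, where, how fast.
    return AvatarList.getAvatarIdentifiers().map(function (id) {
        var a = AvatarList.getAvatar(id);
        return {
            id: id,
            name: a.displayName,
            position: a.position,
            velocity: a.velocity
        };
    });
}

console.log(JSON.stringify(snapshotAvatars(), null, 2));
```

In a real client script you would call snapshotAvatars() from a timer or update callback rather than once.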

The avatar data is updated depending on available client bandwidth (outbound) and your distance to the other avatars (inbound), so it is possible that you miss data from specific avatars unless they are close to you.

Mostly, everyone sends the data from their client to the avatar mixer, and it decides whether you will receive others or others will receive you (also handling blacklists and the effects of the bubble).

There are, however, no ready-made movement prediction algorithms available for us to use, so you'd have to come up with your own.
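As a starting point for rolling your own, here is a minimal constant-velocity dead-reckoning sketch: extrapolate the last known position along the last known velocity. The function name and structure are mine, not an HF API.

```javascript
// Constant-velocity dead reckoning: given the last known position
// and velocity of an avatar, extrapolate where it should be after
// dt seconds. Vectors follow HF's {x, y, z} convention, Y up.
function predictPosition(lastPosition, velocity, dt) {
    return {
        x: lastPosition.x + velocity.x * dt,
        y: lastPosition.y + velocity.y * dt,
        z: lastPosition.z + velocity.z * dt
    };
}

// Example: an avatar last seen at the origin, walking 1.5 m/s along Z,
// predicted 200 ms ahead (z should come out near 0.3 m).
var predicted = predictPosition(
    { x: 0, y: 0, z: 0 },
    { x: 0, y: 0, z: 1.5 },
    0.2
);
console.log(predicted);
```

A fancier version would blend the prediction back toward the real position when the next mixer update arrives, to avoid visible snapping.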

Do note that the coordinate system in High Fidelity is XZY, where Y is up. Remembering this will save you a bit of headache :slight_smile:


Hm… I have ideas, though I am not a geek :sweat: . That's why I first try to set up a "test installation" to follow/watch the existing performance and behaviour of nearby avatar groups. (By avatar group I mean avatars trying to walk together in the same direction.)

I need to analyze and visualize the existing performance. I hope it is possible to define some (statistical) performance parameters to show how well a group synchronises.
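One candidate statistic, as a sketch of my own (nothing built into HF): the RMS distance of the group members from their centroid at each sample instant. A tight group walking together keeps this spread roughly constant; lag and desynchronisation show up as the spread growing or oscillating.

```javascript
// Group spread metric: RMS distance of each avatar position from the
// group centroid. positions is an array of {x, y, z} samples taken at
// the same instant, one per avatar.
function groupSpread(positions) {
    var n = positions.length;
    var mean = { x: 0, y: 0, z: 0 };
    positions.forEach(function (p) {
        mean.x += p.x / n;
        mean.y += p.y / n;
        mean.z += p.z / n;
    });
    var varSum = 0;
    positions.forEach(function (p) {
        var dx = p.x - mean.x, dy = p.y - mean.y, dz = p.z - mean.z;
        varSum += dx * dx + dy * dy + dz * dz;
    });
    return Math.sqrt(varSum / n);
}

// Two avatars 1 m apart along X: each sits 0.5 m from the centroid.
console.log(groupSpread([{ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }])); // 0.5
```

Logging this value once per sample tick gives a single time series per group that is easy to plot and compare across test runs.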

Simulated walking avatars (with long-distance lag included) and recorded sessions to play back would help to first understand what is happening.

With this knowledge and the evaluation tools above, I suppose the next step would be to incrementally "alter" the walking-position algorithms, evaluate, and keep the ones with good results.


Is display of data included?


It would be nice to get an update of the current (old) system architecture in order to cover the parts inside the blocks (Avatar Mixer). Unfortunately I know nada about the HF implementation :frowning:


The only thing it shows is a circle below all avatars in the domain.


Thank you for all the advice!

Do you have any idea whether the avatar position is sent from the client with its own time tag (local system time, UTC, or whatever)?


I think that sort of data isn't available, as the Avatar Mixer sends the sync info, but it would need a bit more experimenting.
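If a sender-side time tag really isn't exposed, one workaround for the test setup is to tag each sample with the local clock at the moment it is read. A sketch (my own convention, not an HF field): Date.now() is UTC-based milliseconds since the epoch, so two test stations with NTP-synced clocks can compare tags afterwards, with precision bounded at roughly a millisecond plus clock skew.

```javascript
// Tag an avatar sample with the local receive time. Note this is the
// time the client read the data, not the time the sender produced it.
function stampSample(avatarId, position) {
    return {
        id: avatarId,
        position: position,
        localTimeMs: Date.now()  // UTC milliseconds since the epoch
    };
}

var sample = stampSample("avatar-uuid-1", { x: 0, y: 0, z: 0 });
console.log(sample.id, sample.localTimeMs);
```

The receive-time tag still lets you measure relative jitter and inter-station lag, even without the true send time.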


I can imagine a test setup to find out the way avatar sync works:
Test station A ("Stockholm"), with avatar A.
Avatar X ("Sydney"), "visiting" client A.
Test station X.

Test case TC01
Stations A and X start avatars A and X at UTC 19:00:00.00000 (the highest decimal precision permitted by the system/client).

The avatar mixing is done on a server M in "London". I suppose the desynchronisation appears on the Sydney-to-London and Stockholm-to-London legs respectively, but we evaluate the behaviour at A or X.

A and X are automatically started at the test time, e.g. UTC 19:00:00.0000, to walk along the X axis, 1 meter to the side of A.

The test module records the data and presents it visually, e.g. an X-Z track plot showing the positions of A and X as a pair (two dots with a line between).
To be continued by learning by doing…
Question: can the test module, including the GUI, be coded as a script?
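The recording half of such a test module could in principle be a plain script. A minimal sketch of collecting timestamped X-Z samples for the two avatars (all names here are mine; in a real client script the sampling would hang off a timer or update callback, and the GUI/plotting side would need the client's overlay/UI facilities, which I won't guess at here):

```javascript
// Collect timestamped X-Z samples for a pair of avatars into a track,
// then dump it as JSON for plotting offline. Only X and Z are kept,
// since the walk happens in the ground plane (Y is up in HF).
var track = [];

function sampleTrack(posA, posX) {
    track.push({
        t: Date.now(),                    // local receive time, ms
        a: { x: posA.x, z: posA.z },
        b: { x: posX.x, z: posX.z }
    });
}

// Two simulated samples: the avatars start 1 m apart along X,
// then both step forward along Z, slightly out of sync.
sampleTrack({ x: 0, y: 0, z: 0 },   { x: 1, y: 0, z: 0 });
sampleTrack({ x: 0, y: 0, z: 0.5 }, { x: 1, y: 0, z: 0.45 });
console.log(JSON.stringify(track));
```

The dumped JSON is exactly the "binome" track described above: each entry is one dot pair, ready for an X-Z plot with a line between the two points.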

M is placed on the same machine as A in order to access internal variables.

GUI example: