On the HF blog I found a good map of the system architecture.
I would like to learn more about the functions and internal architecture of the Avatar Mixer (and the corresponding structures in the client), in order to understand how the movement of a group of avatars present in the same space is handled.
- Are absolute xyz coordinates transmitted, or a velocity vector (speed plus direction of movement)?
- Is each avatar movement timestamped? If so, with what precision (milliseconds?), and at what sample rate/frequency?
- How is avatar position data transmitted to all avatar clients present in the room?
- What data is included besides position? Flags?
- Are there any movement prediction algorithms? Can they be tuned through settings?
- How could a module for monitoring, displaying, and recording these variables be coded in the client?
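For context on the prediction question: I don't know what the Avatar Mixer actually implements, but a common technique in networked virtual worlds is dead reckoning, where each client extrapolates a remote avatar's position from its last known position and velocity until the next update arrives. A minimal sketch of the idea (all names are my own, not from the HF codebase):

```python
class AvatarState:
    """Last known state of a remote avatar, as received from the mixer."""
    def __init__(self, position, velocity, timestamp):
        self.position = position    # (x, y, z) in metres
        self.velocity = velocity    # (vx, vy, vz) in metres/second
        self.timestamp = timestamp  # seconds, on the client clock

def predict_position(state, now):
    """Dead reckoning: extrapolate linearly from the last update."""
    dt = now - state.timestamp
    return tuple(p + v * dt for p, v in zip(state.position, state.velocity))

# Example: avatar last seen at the origin moving 2 m/s along x,
# predicted 0.25 s after the update.
state = AvatarState((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), timestamp=10.0)
print(predict_position(state, now=10.25))  # → (0.5, 0.0, 0.0)
```

If the mixer only sends positions (no velocity), the client could estimate velocity from the last two samples, which is part of why I'm asking what the wire format contains.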
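And for the last question, here is a rough sketch of what I have in mind for the monitoring/recording side: a client-side module that samples each avatar's state and appends timestamped rows to a CSV file for later analysis. This is illustrative Python only, with hypothetical names; the real client would use its own scripting API and data types:

```python
import csv
import time

class AvatarRecorder:
    """Records avatar position samples as timestamped CSV rows."""
    FIELDS = ["timestamp", "avatar_id", "x", "y", "z"]

    def __init__(self, path):
        self._file = open(path, "w", newline="")
        self._writer = csv.DictWriter(self._file, fieldnames=self.FIELDS)
        self._writer.writeheader()

    def sample(self, avatar_id, position, timestamp=None):
        """Record one (timestamp, id, position) sample, millisecond precision."""
        t = time.time() if timestamp is None else timestamp
        x, y, z = position
        self._writer.writerow({"timestamp": round(t, 3),
                               "avatar_id": avatar_id,
                               "x": x, "y": y, "z": z})

    def close(self):
        self._file.close()

# Usage: record two fake samples for later display/analysis.
rec = AvatarRecorder("avatar_log.csv")
rec.sample("avatar-1", (1.0, 0.0, 2.5), timestamp=100.0)
rec.sample("avatar-2", (3.2, 1.1, 0.0), timestamp=100.1)
rec.close()
```

Knowing where in the client these variables live (and at what rate they are updated) would tell me where such a module could hook in.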