Hi, I’m new to HiFi and would like to use it for a research project I’m working on.
For my project, I need to:
- Send “social cues” (head direction, location, whether the user is speaking) for each user/avatar to an external server that analyzes the social interaction in real time (see the sketch after this list).
- Manipulate how avatars are rendered separately for each participant, based on information from an external server. For example, “fading out” (reducing opacity, blurring, or other visual manipulations) avatar X for participant Y but not for the other participants, or making everyone see the rest of the avatars as looking directly at them. Similar ideas would apply to the audio: participant X could hear participant Y but not participant Z, while participants Y and Z hear everybody.
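
To make the first task concrete, here is roughly the kind of Interface (client) script I'm imagining. The endpoint URL, the send interval, and the loudness threshold are placeholders of mine, and the property names (`MyAvatar.headOrientation`, `MyAvatar.audioLoudness`, etc.) are just what I gathered from skimming the Avatar API docs, so please correct me if I'm reading them wrong:

```javascript
// Periodically push my avatar's "social cues" to an external analysis server.
var ANALYSIS_ENDPOINT = "http://example-research-server.local/cues"; // placeholder URL
var SEND_INTERVAL_MS = 200;
var SPEAKING_LOUDNESS_THRESHOLD = 100; // made-up value, would need calibration

Script.setInterval(function () {
    var cues = {
        sessionUUID: MyAvatar.sessionUUID,
        position: MyAvatar.position,               // {x, y, z} in world coordinates
        headOrientation: MyAvatar.headOrientation, // quaternion {x, y, z, w}
        isSpeaking: MyAvatar.audioLoudness > SPEAKING_LOUDNESS_THRESHOLD
    };

    var req = new XMLHttpRequest();
    req.open("POST", ANALYSIS_ENDPOINT, true); // async, so the script loop isn't blocked
    req.setRequestHeader("Content-Type", "application/json");
    req.send(JSON.stringify(cues));
}, SEND_INTERVAL_MS);
```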
I’m not sure about the details yet, but from reading the scripting docs the first task seems achievable with HiFi. However, I don’t know whether the second one is feasible at all.
Does anyone have any experience with something similar?
Is it possible to manipulate other users’ avatars (hiding, muting, or more involved rendering manipulations like changing head direction or blurring) from client-side scripts?
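
For instance, on the audio side the Users API seems to expose `Users.personalMute()` and `Users.ignore()`, so I’m picturing something like this running on each participant’s client. The polling endpoint and the JSON format are completely made up by me, and I have no idea yet whether per-avatar rendering tweaks (opacity, gaze direction, blur) can be driven the same way:

```javascript
// Poll the external server for per-participant rules and apply them locally.
var RULES_ENDPOINT = "http://example-research-server.local/rules"; // placeholder URL
var POLL_INTERVAL_MS = 1000;

Script.setInterval(function () {
    var req = new XMLHttpRequest();
    req.open("GET", RULES_ENDPOINT + "?me=" + MyAvatar.sessionUUID, true);
    req.onreadystatechange = function () {
        if (req.readyState !== 4 || req.status !== 200) { // 4 === DONE
            return;
        }
        // Made-up response format:
        // { "mute": ["<sessionUUID>", ...], "hide": ["<sessionUUID>", ...] }
        var rules = JSON.parse(req.responseText);
        rules.mute.forEach(function (sessionID) {
            Users.personalMute(sessionID, true); // mute this avatar for me only
        });
        rules.hide.forEach(function (sessionID) {
            Users.ignore(sessionID, true);       // hide (and mute) this avatar for me only
        });
    };
    req.send();
}, POLL_INTERVAL_MS);
```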
I will certainly spend the next few days trying to find out for myself, but any pointers from experienced users would help a lot!