Audio attenuation groups


In some of the meetings we’ve had lately in HiFi with many avatars present, I’ve had problems hearing people. Without constantly walking around to be near the person I want to hear, I end up just hearing whichever conversation happens to be closest to me.

When individual avatar volume controls are implemented it will be better, but even then it will be a chore to set up and keep updated with a large group.

What I would like is something I’ll call attenuation groups. I think this would be do-able in HiFi, and I would not be surprised if someone else has already thought about something similar.

The purpose of it all would be to automatically set the volume levels of all the nearby avatars (sounds too?) for you. As you looked around, the conversations you could hear would change in volume.

When you looked at the main stage, you would hear the audio source or what the speaker there was saying; if you looked at the people near you, you would hear them.

It would work something like this:

You go to a place with many avatars talking. You would then initiate an action (run a .js script / click a button / make a gesture, etc.).

This would build a list of every nearby avatar’s look-at direction, and then find the avatar nearest to a ray traced from each avatar’s position along that direction.
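A rough sketch of that look-at step in plain .js. The `position` and `lookDir` fields are my own made-up names (in HiFi you would pull them from the avatar API), and `lookDir` is assumed to be a normalized direction vector:

```javascript
// Distance from point p to the ray (origin, dir); points behind the
// ray origin are treated as infinitely far away.
function distanceToRay(origin, dir, p) {
    var vx = p.x - origin.x, vy = p.y - origin.y, vz = p.z - origin.z;
    var t = vx * dir.x + vy * dir.y + vz * dir.z;   // projection onto the ray
    if (t < 0) { return Infinity; }                 // behind the looker
    var cx = origin.x + t * dir.x, cy = origin.y + t * dir.y, cz = origin.z + t * dir.z;
    return Math.sqrt(Math.pow(p.x - cx, 2) + Math.pow(p.y - cy, 2) + Math.pow(p.z - cz, 2));
}

// For each avatar, find the index of the avatar nearest its look-at ray,
// i.e. the avatar it appears to be looking at.
function buildLookedAtList(avatars) {
    return avatars.map(function (a, i) {
        var best = -1, bestDist = Infinity;
        avatars.forEach(function (b, j) {
            if (i === j) { return; }
            var d = distanceToRay(a.position, a.lookDir, b.position);
            if (d < bestDist) { bestDist = d; best = j; }
        });
        return best;
    });
}
```

So `buildLookedAtList` returns one index per avatar: who that avatar seems to be looking at.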

This avatar-looked-at list would then be divided into groups starting with the main group:

a). The group would be all the avatars looking at the avatar that had the most people looking at it.

b). All of the avatars in this group would be removed from the avatar-looked-at list.

a). and b). would be repeated until either the list was empty or a threshold was reached.
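Steps a) and b) could be sketched like this, where `lookedAt[i]` is the index avatar i is looking at (as built in the previous step). One assumption on my part: the focus avatar itself is put into its own group, which seems natural since it is part of that conversation.

```javascript
// Greedily peel off the group around the most-looked-at avatar until the
// list is empty or a settable group-count cap is reached.
function buildGroups(lookedAt, maxGroups) {
    var remaining = lookedAt.map(function (_, i) { return i; });
    var groups = [];
    while (remaining.length > 0 && groups.length < maxGroups) {
        // Count how many remaining avatars look at each target.
        var counts = {};
        remaining.forEach(function (i) {
            counts[lookedAt[i]] = (counts[lookedAt[i]] || 0) + 1;
        });
        var focus = remaining[0], best = -1;
        Object.keys(counts).forEach(function (t) {
            if (counts[t] > best) { best = counts[t]; focus = Number(t); }
        });
        // a) the group is everyone looking at the focus avatar (plus the focus itself);
        var group = remaining.filter(function (i) {
            return lookedAt[i] === focus || i === focus;
        });
        groups.push(group);
        // b) remove them from the list and repeat.
        remaining = remaining.filter(function (i) { return group.indexOf(i) === -1; });
    }
    return groups;
}
```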

Next, I think you might have to add a filter to combine groups that are really the same:

Combine groups whose positions were within a settable distance threshold.
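The merge filter might look like this, comparing group centroids against the threshold. Names and the centroid-distance choice are my own assumptions; group members here are avatar objects with a `position`:

```javascript
// Average position of a group's members.
function centroid(group) {
    var c = { x: 0, y: 0, z: 0 };
    group.forEach(function (a) { c.x += a.position.x; c.y += a.position.y; c.z += a.position.z; });
    return { x: c.x / group.length, y: c.y / group.length, z: c.z / group.length };
}

// Repeatedly merge any two groups whose centroids are closer than
// `threshold` (metres), until no more merges apply.
function mergeNearbyGroups(groups, threshold) {
    var merged = groups.slice();
    var changed = true;
    while (changed) {
        changed = false;
        outer:
        for (var i = 0; i < merged.length; i++) {
            for (var j = i + 1; j < merged.length; j++) {
                var a = centroid(merged[i]), b = centroid(merged[j]);
                var d = Math.sqrt(Math.pow(a.x - b.x, 2) + Math.pow(a.y - b.y, 2) + Math.pow(a.z - b.z, 2));
                if (d < threshold) {
                    merged[i] = merged[i].concat(merged[j]);
                    merged.splice(j, 1);
                    changed = true;
                    break outer;   // centroids moved; restart the scan
                }
            }
        }
    }
    return merged;
}
```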

And then remove from your group all the avatars that you, or the group you belonged to, have opted out of.

These groups, or something like them, could then be used to set your volume for all the avatars present. You might still have to adjust an individual avatar’s voice volume if it is too loud or soft, but even that adjustment might be shareable across the group you are listening to.
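Turning group membership into per-avatar gains could be as simple as this sketch: full volume for your own group, a settable reduced gain for everyone else, with an optional per-avatar trim on top. The 1.0/0.15 values and the `trims` map are made-up knobs, not anything from HiFi:

```javascript
// groups: arrays of avatar indices; myIndex: the listener's own index;
// outOfGroupGain: gain applied to avatars outside the listener's group;
// trims: optional per-avatar volume multipliers (e.g. { 7: 0.5 }).
function computeGains(groups, myIndex, outOfGroupGain, trims) {
    var myGroup = groups.filter(function (g) { return g.indexOf(myIndex) !== -1; })[0] || [];
    var gains = {};
    groups.forEach(function (g) {
        g.forEach(function (i) {
            if (i === myIndex) { return; }   // no gain entry for yourself
            var base = (myGroup.indexOf(i) !== -1) ? 1.0 : outOfGroupGain;
            gains[i] = base * (trims && trims[i] !== undefined ? trims[i] : 1.0);
        });
    });
    return gains;
}
```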

The process would need to be repeated every second or so, or whenever your avatar moved or rotated more than a settable threshold.
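The refresh trigger itself is trivial to sketch; all the threshold values here are made-up defaults (times in seconds, distances in metres, yaw in radians):

```javascript
// Decide whether to recompute the groups: enough time has passed, or the
// listener moved or turned past the settable thresholds.
function shouldRecompute(last, now, opts) {
    var dt = now.time - last.time;
    var dx = now.position.x - last.position.x;
    var dy = now.position.y - last.position.y;
    var dz = now.position.z - last.position.z;
    var moved = Math.sqrt(dx * dx + dy * dy + dz * dz);
    var turned = Math.abs(now.yaw - last.yaw);
    return dt >= opts.interval || moved >= opts.moveThreshold || turned >= opts.turnThreshold;
}
```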

Anyway, sorry for the long post, just a thought. I don’t think everything needed for this is exposed to .js right now, and it might be better done at the C++ level anyway.


Yes, the audio listenability problem seems to get a lot of user attention but no dev traction. I discussed it in the link below. BTW, @Philip mentioned what you suggested. The problem with directionality-based boosting is that, because of HMDs, where your avatar looks becomes exactly where you look physically, so you will end up with a neck ache from using your head as a shotgun mic. I believe attending to the core problem, that a near field is not implemented, will fix many of these issues.


I am aware of what Philip said before. The difference is the groups. Fading in and out of group conversations, I think, would be much better and would not give you a neck ache. And near field would not solve it at all: you would still have to either adjust the volumes yourself or actually move away from what you don’t want to hear.


When I made the ‘moments’ vid the other day, I walked between groups talking. I was quite surprised how well the normal attenuation took care of attenuating the distant conversations. That, plus the good true-phase effects, made it easy for me to focus on a group conversation. So now I am curious about the situation you are running into. Is this when a bunch of people are very close to each other?