Steam Audio support


I would like to see Steam Audio support in High Fidelity. Also, you don't have to use Steamworks to use this; it is totally independent from it (official page). Go here first (code); not much info here (not restricted to Steam, post from a Valve employee).

Steam Audio delivers a full-featured audio solution that integrates environment and listener simulation. HRTF significantly improves immersion in VR; physics-based sound propagation completes aural immersion by consistently recreating how sound interacts with the virtual environment.


The question would be: why?

HiFi already has audio attenuation, HRTF, ambisonics, and “physics simulation” (really just the ability to manipulate audio streams, not true physics) baked into the audio mixer, from well before Steam rolled out their audio solution.

Considering we have had tests with 120+ users and the audio still worked at a nice quality, that speaks volumes about it being low bandwidth.

I'm not sure HiFi is willing to put more time into refactoring it to get Steam Audio integrated instead, since they have it pretty well down already. What exactly does Steam Audio bring to the table that HiFi doesn't already have, aside from raytraced audio? They already have their hands full getting the graphics engine up to par with game engines.

Then architecturally:

If someone is willing to do it, they will have a bitch of a time refactoring the audio mixer to work the way the SDK requires: the HiFi audio mixer streams audio from the server independently per client, not per audio source, so audio sources get mixed in rather than just played back as files by the client. This even allows you to play back local files to others, on top of compressing all audio to a really low bandwidth.

So in the end the only audio streamed to the user is a single mixed stream. The downside is that to allow for raytraced simulation, you'd need to either split the streams up to be per source, simulate the environment on the server side, or migrate all audio playback to be local.
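To make the architectural point concrete, here is a minimal sketch of server-side per-client mixing: many sources, one outgoing stream per client. All names and structure here are my own illustration, not High Fidelity's actual mixer code.

```python
# Sketch: a server-side mixer combines many sources into ONE stream
# per client, so the client never sees the individual sources.
# Illustrative only; not High Fidelity's actual code.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def attenuation(d, rolloff=1.0):
    """Simple inverse-distance falloff (a stand-in for the real curve)."""
    return 1.0 / (1.0 + rolloff * d)

def mix_for_client(client_pos, sources, frame_len=4):
    """Mix every source's frame into a single mono frame for one client."""
    mixed = [0.0] * frame_len
    for src in sources:
        gain = attenuation(distance(client_pos, src["pos"]))
        for i, sample in enumerate(src["frame"]):
            mixed[i] += gain * sample
    # Clamp to the valid sample range after summing.
    return [max(-1.0, min(1.0, s)) for s in mixed]
```

Because only the output of `mix_for_client` goes over the wire, any per-source processing (like raytracing each source's path) has no per-source stream to attach to on the client side.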

That’s a lot of work just to get Steam Audio in; it is not a drop-in solution.

So: what does it really bring to the table, and how would it be better?



Yep. As I said above in my post, this basically covers all the subjects posted in the announcement:

I don’t think High Fidelity will implement it any time soon, because it would mean throwing away everything they have done for the audio mixer: they’ve worked on it since High Fidelity’s inception, and prior to Steam Audio it was the best in the business.

To do occlusion-based audio, your client must know where the audio is coming from and must be aware of all the geometry between you and the source. The same goes for the physics calculations on it: all the other physics-based effects likewise require the client to know the position of the audio source.
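A rough illustration of why occlusion needs both pieces of information: with the source position and some geometry in hand, even the crudest version is a ray test from listener to source. Everything below (the slab test, the damping factor) is a hypothetical sketch, not Steam Audio's or HiFi's algorithm.

```python
# Sketch: occlusion needs the source position AND the geometry.
# One ray is cast from listener to source; any box it crosses
# occludes (and thus dampens) the sound. Illustrative only.

def ray_hits_aabb(origin, target, box_min, box_max):
    """Slab test: does the segment origin->target pass through the box?"""
    tmin, tmax = 0.0, 1.0
    for o, t, lo, hi in zip(origin, target, box_min, box_max):
        d = t - o
        if abs(d) < 1e-12:
            if o < lo or o > hi:          # parallel and outside the slab
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:
                return False
    return True

def occlusion_gain(listener, source, boxes, damp=0.3):
    """Full gain if line of sight is clear, damped gain otherwise."""
    for box_min, box_max in boxes:
        if ray_hits_aabb(listener, source, box_min, box_max):
            return damp
    return 1.0
```

Without the source position, there is no ray to cast; without the geometry, there is nothing to cast it against. That is exactly the information HiFi's mixed-stream clients don't have.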

All of the above also adds some latency, unless it is pre-computed into a scene. Unfortunately, pre-computation won’t work for High Fidelity, since scenes can change at the drop of a hat, so there would be added latency, which they try to avoid. This is why audio reverb zones are used to ‘pre-simulate’ this stuff. Post-processing of audio, including physics-based audio simulation, would have to be done client side.

Also, the current clients don’t know the sources of the audio. Really: the audio mixer creates a per-client stream into which all the audio is mixed. The mixer uses simple math to define the volume, direction, and attenuation of each source, rendering it into stereo. This is how you can have the Zaru-like attenuation zones where someone standing on stage is louder than the people in the audience.
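The “simple math” could look something like this: distance attenuation plus equal-power panning from the source's azimuth relative to the listener. This is my own illustration of the technique, not the mixer's actual code, and it works in a flat 2D plane for brevity.

```python
import math

# Sketch: placing a mono source in stereo with "simple math":
# distance attenuation + equal-power panning from the azimuth of
# the source relative to the listener's facing. Positions are 2D
# (x, z) for brevity. Illustrative only.

def stereo_gains(listener_pos, listener_yaw, source_pos, rolloff=1.0):
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dz)
    gain = 1.0 / (1.0 + rolloff * dist)          # distance attenuation
    azimuth = math.atan2(dx, dz) - listener_yaw  # angle to the source
    # Equal-power pan: sin(azimuth) in [-1, 1], left to right.
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    angle = (pan + 1.0) * math.pi / 4.0          # 0 .. pi/2
    return gain * math.cos(angle), gain * math.sin(angle)
```

Note that none of this needs the scene geometry, which is exactly why it can run cheaply on the server and produce a single stereo stream per client.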

And it’s a lot of work, not an easy thing to do.


This is really better than what you are using, and it does not require Steamworks or Steam; you can use it without Steam.