I had to re-read this a few times, but I think I can point you in the right direction.
In a way, sound emission is already a thing in High Fidelity, as noted in the API. My own avatar uses it to play footsteps when my feet actually hit the ground rather than on a timer, plus a VERY subtle heartbeat that plays all the time and can only be heard at VERY, VERY close range.
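For the curious, the footstep trigger boils down to edge detection: fire once when the foot crosses from "in the air" to "on the ground", never while it stays down. The threshold value and the idea of polling joint height are my assumptions for illustration; in High Fidelity the sound itself would then be played with something along the lines of `Audio.playSound(SoundCache.getSound(url), { position: ..., volume: ... })`.

```javascript
// Hypothetical contact-based footstep trigger (not actual HiFi API code;
// only the decision logic is shown). GROUND_THRESHOLD is an assumption.
var GROUND_THRESHOLD = 0.05; // metres above ground still counted as "contact"
var wasDown = false;

// Returns true exactly once per landing: when the foot crosses from
// above the threshold to below it. Repeated calls while the foot stays
// down return false, so the sound doesn't machine-gun.
function footJustLanded(footHeight) {
    var isDown = footHeight < GROUND_THRESHOLD;
    var landed = isDown && !wasDown;
    wasDown = isDown;
    return landed;
}
```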
That being said, after really thinking it over, I can see the usefulness of a sound entity type whose properties are based on the already existing sound player, which can be found in the marketplace. The item in question meets about half the requirements you specified (minus MID support, but more on that later), though that's mostly because the later requirements aren't in place yet.
Part of the reason propagation factor isn't fully supported is that it's a domain setting, which applies its own propagation factor to all sound mixing. On one hand, this helps universalize sound, so my footsteps sound and play the same as my own voice would, since both come from points very near each other. On the other hand, it would be nice to specify sound like a light: audio stays near consistent within a given radius, begins to fade past it, and is fully diminished outside the full radius. To an extent, this could be done with the current API, but if I remember correctly, tinkering with sound position and rotation (which you'd need to do to mimic such an effect) causes the sound emitter to restart due to a bug. I think it's been slated for a future update, or at least the team is aware of it, but I haven't tinkered with it much since.
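To make the light-style falloff concrete, here's a minimal sketch of the attenuation curve I mean. The function and parameter names (`innerRadius`, `outerRadius`) are mine, not anything in the HiFi API; the point is just the shape: flat, then a linear fade, then silence.

```javascript
// Light-style volume falloff: full volume inside innerRadius, linear
// fade between innerRadius and outerRadius, silent beyond outerRadius.
// Illustrative only; these are not real HiFi entity properties.
function attenuate(baseVolume, distance, innerRadius, outerRadius) {
    if (distance <= innerRadius) {
        return baseVolume; // near-consistent zone
    }
    if (distance >= outerRadius) {
        return 0; // fully diminished
    }
    // Linear falloff across the band between the two radii.
    var t = (distance - innerRadius) / (outerRadius - innerRadius);
    return baseVolume * (1 - t);
}
```

A script mimicking this today would have to recompute this per listener and feed it back into the injector's volume, which is exactly where the restart bug gets in the way.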
Not to mention, I’ve actually had this as a feature request for a while now (psst, devs, look up “Feature Request: ability to find perceived audio volume”).
Occlusion factor would be interesting, and I think clever use of zones could even make it work, but I’m pretty sure there are easier ways of automating it so a user doesn’t need to play “I want to be a map maker.”
Anyway, going back to the entity idea: I think the bigger issue is that the current way of specifying data in an object is fine for programmers but confusing as heck for non-coders, which is why things like the currently in-place sound player aren’t seeing much use. Part of the challenge would be making the Create app GUI flexible enough to allow custom parameter specifications, but having something like that in the long run would make what you want possible without reinventing the wheel, while adding flexibility for programmers and creators in other tools.
This is where I want to bring up MIDI again. While I’m not quite sure how useful it would be to pass a MID file as an audio file source, I could totally see cases where you specify the MIDI device you’re trying to mirror on a virtual incarnation, with the two matching each other’s actions. This has been demonstrated a lot lately (passing MIDI from a VR user’s virtual keyboard to the synth of another user in another location, sending MIDI from an instrument to the virtual one to trigger an action, etc.). Having the above improvements would make this far easier, since the user would only have to deal with a simple GUI rather than a complicated “time to bootcamp myself through JSON.” The key is to make this universal, so that no matter the create app or system, the expectation of where to look is the same.
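The nice thing about mirroring a device rather than playing a MID file is that the payload being relayed is tiny. A channel voice message is just three bytes per the MIDI spec (status byte, data, data), so a relay only needs to pack and unpack those. A rough sketch, with the pack/unpack helper names being my own invention:

```javascript
// Pack a MIDI note-on message into its standard three bytes:
// status (0x90 | channel), note number, velocity. This byte layout
// follows the MIDI 1.0 spec; the helper names are illustrative.
function packNoteOn(channel, note, velocity) {
    return [0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F];
}

// Unpack a received three-byte message back into its fields so the
// virtual instrument can mirror the action.
function unpack(bytes) {
    return {
        command: bytes[0] & 0xF0, // 0x90 = note on, 0x80 = note off
        channel: bytes[0] & 0x0F,
        note: bytes[1],
        velocity: bytes[2]
    };
}
```

Everything above the byte level (which entity receives it, what action it triggers) is exactly the part a GUI should expose instead of raw JSON.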
Regarding audio streaming, reverb, and whatnot, I completely agree. In fact, most of the audio options now listed in the server’s settings aren’t even correct. This was brought up ages ago, but I guess it’s worth bumping again, since the reverb effect is pretty well done when it works.
Regarding the playlist script… this one actually does exist, but where it’s used is very weird. In The Spot, the radio playing near the grill is actually a mono MP3 file over half an hour long (it used to be a 175 MB uncompressed WAV file). The irony is that the burgers nearby? Those actually use a playlist script that randomly shuffles between 3 different sound files… at least as of my last inspection. I just tried to find the scripts on GitHub, but alas, they aren’t there.
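Since I can't hand over the actual script, here's the gist of the shuffle behaviour it exhibits: pick one of the sound files at random each time, avoiding an immediate repeat. The file names below are placeholders, not the real assets from The Spot.

```javascript
// Sketch of a shuffle-style playlist: random pick each time, never the
// same track twice in a row. Track names are placeholders.
var tracks = ["sizzle1.wav", "sizzle2.wav", "sizzle3.wav"];
var lastIndex = -1;

function nextTrack() {
    var i;
    do {
        i = Math.floor(Math.random() * tracks.length);
    } while (tracks.length > 1 && i === lastIndex);
    lastIndex = i;
    return tracks[i];
}
```

In a real HiFi script, each returned file would then be fed to the sound-playing API when the previous one finishes.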
I guess that brings up another thing: a good chunk of what typically gets requested in High Fidelity already exists but isn’t broadcast very well. Things like the sound player, playlist player, spawn spacer, web shortcuts, marketplace item shortcuts, nametags, and even something as simple as chat are all available for free, but are held back by needing access to community knowledge. Sure, you could keep a backup file of an item that has what you need, but that gets old pretty fast and isn’t easy to pass from person to person. So while I’d love to hand you the playlist script, there isn’t an easy way of doing it (that, and I’m not sure whether High Fidelity wants it public, despite having other items publicly available on their GitHub).
So let’s TL;DR this entire thing:
- Most things are already in the API but do require a script to be written
- While a sound entity would be an interesting addition, improving the ability for creations to be customized without needing programming knowledge is a must
  - Yes, I’m even going to call JSON programming in this case
- MIDI support does exist, but the above listed improvements would help make it even better
  - Except MID file support. Honestly not sure how that could be used, unless you wanted local/server effects driven by MID file actions (light controls?) to be translatable to other objects as a form of always-running automation
- Propagation Factor would be awesome, and it would be even nicer if scripts could perceive the current audio levels post-PF
- Audio stuff in the server settings needs some love
- Most things that have been requested feature-wise do exist, but the ability to find them isn’t the best.
Overall, I agree that creator tool usability improvements are a must, be it for people trying to tinker with sound or any other metaverse aspect. Most of the issues I see here could be quelled if pre-existing scripts were easier to interface with, without needing to know what JSON is, and if there were a way to look up existing scripts so you’d know whether something already exists. This would effectively mean that custom entity types could, in theory, be made on the fly. Need a sound emitter? Just have a good sound emitter script on standby. Need a playlist emitter? Make like Glade and plug it in.