Two recent HiFi blog posts


The problem is… most of these issues were already solved within High Fidelity.

  • 3D Audio: While it requires beefy systems, this already works. In fact, roughly three years ago, a test was done where the audio server began identifying groups of users to reduce the requirements of operation, and during the convention I heard mention of attempting frequency adjustment based on distance (which I could see being used to create ‘channel averages’ for the subgroups, letting each act as a single, low-emission point).
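To make the grouping idea concrete, here’s a minimal sketch of what distance-based source clustering could look like. All names, thresholds, and the grid-bucket approach are my own invention, not High Fidelity’s actual audio mixer:

```python
import math

def cluster_sources(sources, listener, far_threshold=50.0, cell=20.0):
    """Keep nearby audio sources individual, but bucket distant ones into
    grid cells so each cluster can be mixed down and emitted as a single
    averaged point source (the 'channel average' idea)."""
    near, clusters = [], {}
    for pos in sources:
        if math.dist(pos, listener) < far_threshold:
            near.append(pos)  # close sources stay individual
        else:
            key = tuple(int(c // cell) for c in pos)  # spatial hash bucket
            clusters.setdefault(key, []).append(pos)
    # each far cluster collapses to its centroid, one emission point per group
    merged = [tuple(sum(c) / len(group) for c in zip(*group))
              for group in clusters.values()]
    return near, merged
```

Two sources 100m away in the same bucket would be mixed and positioned once, cutting the per-listener spatialization cost.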

  • Crowds: An attempt was made to resolve this with a ninja patch (aka the bad LOD animation trick), and it honestly could still be worked out if handled better (aka tweening between updates). The other thing, and I called this a year ago, is that the client needs to be able to handle it. The article talks about how ‘current game engines can handle hundreds at most’ but I think uh… that’s not very accurate.
    2,379 users online at its peak (plus 8,033 AI ships). This was a test to improve their own systems. Mind you, the values involved are smaller (velocity and angle per entity), but with all the shots being fired, this was pretty intense on both clients and servers. This tech demo also focused on real-time action, rather than the ‘secretly a turn-based system’ approach of the primary engine.
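The ‘tween between updates’ fix is a few lines of client-side interpolation. This is a generic sketch of the idea, not anything from High Fidelity’s codebase:

```python
def tween(prev, curr, alpha):
    """Linearly interpolate an entity's position between two server
    snapshots. alpha is 0.0 at the previous update and 1.0 at the latest,
    so the client can render smooth motion at full frame rate even when
    server updates arrive slowly or irregularly."""
    return tuple(p + (c - p) * alpha for p, c in zip(prev, curr))
```

Instead of snapping an avatar to each sparse update (the source of the bad LOD animation look), the client slides it between the last two known positions.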

  • Reputation: I honestly cannot say anything to this without sounding rude. That being said, I’m not sure revealing things like a credit score on any kind of publicly accessible database (I’m assuming it was talking about the blockchain again) is exactly a wise idea. Yes, you can encrypt the data, but even that’s a moot point with encryption schemes getting cracked faster and faster each time. Out of all the important systems, this one is the lowest priority for a Metaverse of any kind, because who’s to say the stored information used to validate a criterion is even reputable itself?

    The irony is that the article mentions that anonymous users having full power would result in an unstable situation. Welp, tried to warn ya.

  • Interconnected Spaces: The thing is… this was already on the roadmap. Everything described here was on the blueprint of the now officially destroyed maker map. That was supposed to be the end goal, and there was a feature request reminding High Fidelity about the very concept they proposed. Such sweet irony that even the roadmap for feature requests was terminated, taking with it the mention of an already forgotten concept.

  • Infinite Level of Detail: Is this a running theme? Again, this was already solved! The whole idea and logic behind the voxel octree system was supposed to solve this, and very much still can. Take the Interconnected Spaces feature above and have each smaller internal domain create a voxel octree for itself, sending the updates and info to its master server, which in turn sends them to its larger master, and you solve a large chunk of issues. Namely, users don’t have to download the models the voxel octree represents, and updates can be broken down based on distance. In essence, you would have the ability to render near-infinite distances without taxing the GPU too much, or even the network! Considering that the Oculus Quest will now stand as the floor requirement for VR, this would be a massive advantage to the platform if it were worked out. Rather than just using model LODs, having voxel LODs would be less network-intensive (making it WiFi-friendly, a known issue for High Fidelity) and graphically friendly (cubes are easy to render). This would only make the primary PC client even stronger.

    Again, this was already a concept. Was it just forgotten?
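The distance-based part of that scheme can be sketched in a few lines: pick how deep into the octree to descend per region, so far-away regions resolve to a handful of big cubes. The depth formula and constants here are illustrative assumptions, not the actual voxel system:

```python
import math

def lod_depth(distance, max_depth=10, base=32.0):
    """Choose an octree traversal depth for a region based on viewer
    distance: within `base` meters, use full resolution; each doubling of
    distance beyond that drops one level of detail, down to a single cube."""
    if distance <= base:
        return max_depth
    drop = int(math.log2(distance / base)) + 1
    return max(0, max_depth - drop)
```

Because only the visible octree levels need to be streamed, a far mountain costs a few coarse voxels of bandwidth rather than a full model download.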

  • Live Editing: This uh… isn’t this a thing NeosVR has? I mean, High Fidelity could easily add something similar if it had, say, an internal Git-like history for any entity set to be editable, tracking edits such as vertex changes and whatnot. That way, you could go back and forth through your changes while only requiring users to download one model and observe the latest edits. This would be very low intensity on both the server and clients. Also, I love how the idea of permissions came up. Oh, you mean the thing that shouldn’t exist but that you’re now realizing should have?

    Anyway, I guess you could do a session-ID thing: a list of users whose sessions have edit permissions. That way, a person who just made an object can pass edit permission along to another person, all without needing to know the user’s information.
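The session-list idea is simple enough to sketch. Everything here (class and method names, the owner-seeding rule) is my own invention for illustration:

```python
class EditPermissions:
    """Track which anonymous session IDs may edit an entity. Rights are
    keyed by session, not identity, so permission can be handed along
    without ever knowing who a user actually is."""

    def __init__(self, owner_session):
        # the creating session starts as the sole editor
        self.sessions = {owner_session}

    def grant(self, granter, grantee):
        # only an existing editor may hand out edit rights
        if granter in self.sessions:
            self.sessions.add(grantee)

    def can_edit(self, session):
        return session in self.sessions
```

The server just checks `can_edit()` before applying an incoming edit; no account lookup, no stored user data.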

  • Payment System: Uh… just saying, but wasn’t there a company who had this exact problem and sorta solved it already? I think it was called Linden Lab?

    Anyway, the good point raised is about blockchain-based currencies, and I guess that’s the sole reason you can’t buy HFC with USD (which honestly killed a lot of things for the platform). The only fix I could see is a currency that still uses the blockchain for transaction-log reasons but is otherwise independent from it. What I’m trying to get at is having the value be legal tender while still using the blockchain for, say, the PoP system. Validation through the blockchain is still very useful, and honestly is HiFi’s bread and butter, but the challenge now is how to make the currency available like every other virtual currency.
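What I mean by ‘use the blockchain for transaction-log reasons’ is essentially a hash-chained ledger: tamper-evident history without the currency’s value living on-chain. A toy sketch (the entry layout and function names are mine, and this is not how HFC actually worked):

```python
import hashlib
import json

def append_tx(chain, tx):
    """Append a transaction to a hash-chained log. Each entry stores the
    previous entry's hash, so editing any past transaction breaks every
    hash after it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(tx, sort_keys=True) + prev_hash
    chain.append({"tx": tx, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Walk the log and recompute every hash; any tampering returns False."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["tx"], sort_keys=True) + entry["prev"]
        if (entry["prev"] != prev or
                hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev = entry["hash"]
    return True
```

The audit trail stays cryptographically verifiable, while pricing and exchange can happen entirely off-chain, like any other virtual currency.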

Honestly, the whole article just feels like it’s admitting defeat while also explaining what was learned, like everything was just a five-year experiment. My only hope is that what was learned can be applied to whatever venture awaits the company’s next project. Here’s my additional five cents on the matter:

  • If you build it, you should have some tools available first: This was a constant argument for making publication tools available and easily accessible (cough cough, oven, cough). I feel this was the initial failure point, because imagine, if you will, that the oven could be used for more than just baking a model. What if the oven could also generate a voxel octree template the server could refer to for LOD purposes? Suddenly, baking models becomes even more useful, because you could build bigger worlds that LOD using High Fidelity’s built-in technology, without needing to create lower-detail models yourself. And live editing? How about something that laid the groundwork so editing could start right away when a model is brought in-world? That could have been a big selling point, both for using the tool and for being on the platform.

  • Consistency is key, options are required: I will forever state that the new control scheme is dumb. I can jump around all the various metaverses without having to learn much new, maybe a few small changes, and have a great time overall. The current scheme feels like it was designed by someone who hasn’t used VR, and while some have found it useful, that number is very small compared to everyone else. Honestly, I don’t know why the option to redefine the controls wasn’t implemented, or why the older scheme couldn’t be kept and made available. With each addition to the platform, it feels like adding options is an afterthought, or never thought about at all.

    Oh right, because adding options is hard. I almost forgot about that quote, followed by mine: adding the ability to turn off the loading screen, which pretty much everyone had already said they did not like, was done in 45 minutes, with thanks to Menithal for pointing out the new variables and functions involved.

  • First Impressions are everything: This was one of the key reasons I was always poking The Spot and performing bandwidth tests, along with submitting optimized files to reduce the weight. I haven’t tested the new landing point, nor do I have any interest unless someone is really curious, but when the move to TheSpot happened, some of the new users who did come in had issues with how big the worlds were.

    Throwing out the big comparison, since some users of other platforms do poke at High Fidelity out of curiosity: what people considered ‘large worlds’ in VRChat averaged around 250MB in size. The radio near the grill played music from a 169MB WAV file. Yes, a single, uncompressed wave file playing an entire playlist. I submitted an MP3 version weighing in at 15.6MB, but another file, the City Underpass ambisonic file, is another 32MB multi-channel wave file. We were already at 200MB in sound files alone before my analysis, and that isn’t even considering the models. I recall my estimate being 500MB+ in total for TheSpot, without any avatars involved.

    The other issue is that there aren’t really any tools to measure the cost of a domain. Sure, you can see the average bandwidth in real time, but that’s concurrent bandwidth, not the total size of a domain (models, sounds, etc.). This was brought up back when the Loading Screen was a thing: no tools were offered to operators to help estimate how long it might take users to load their domains.
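The kind of tool I mean is almost trivial to build. This is a hypothetical operator-side script (the extension list and layout are assumptions, since domain assets can live anywhere):

```python
import os

def domain_asset_report(root, exts=(".fbx", ".obj", ".wav", ".mp3", ".png", ".jpg")):
    """Walk a domain's asset directory and total the download weight by
    file type, giving the 'total size' number the platform never surfaced."""
    totals = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext in exts:
                size = os.path.getsize(os.path.join(dirpath, name))
                totals[ext] = totals.get(ext, 0) + size
    return totals, sum(totals.values())
```

One run of this against TheSpot’s content folder would have flagged a 169MB WAV file long before any user hit the loading screen.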

    On top of all that, teleporting, the most basic, simple function, was made into a two-handed operation. I’ve already heard stories of users whose trackpads no longer click down, meaning High Fidelity is literally no longer usable for them, or has become more confusing than it needed to be, with nothing indicating how to perform the action. I’m almost certain the new welcome area does not show how to use this new scheme, and relying on greeters to explain basic movement should be a hint about where better help should be provided. If a user can’t even leave the starting gate, they’ll just go elsewhere, and judging by the average population, that’s pretty much what happened.


Yeah, the article seemed quite hypothetical, and it makes the same biased statements about ‘what the metaverse is’ that we have been hearing for years. Obviously the ‘parts’ do not make the whole, as can be seen in Hifi’s current state.

If we gloss over the typical Valley adage of ‘make the world a better place’ with VR … what’s the game plan?

RE: Crowds in VR and audio… this is so counter-intuitive and, in my own experience, a very bad VR phenomenon. Now you can feel like people are talking over you, forced to filter out belligerent asshats in VR… An ‘alone in a crowd’ simulation… how much fun could that possibly be?

Posts like this reaffirm how far off the mark Hifi’s ideas about social gaming have become… The next blog post will be something like ‘Social VR projected to be a $50 billion industry in 2021’, or some other happiness Kool-Aid being drunk.

A successful VR platform needs to do three things, in my opinion:
1. Register, build, and manage users’ avatars and identity (avatar services)
2. Host user content: worlds, props, interactions (content services)
3. Curate user content (moderation services)

High Fidelity did many things, but none of the above points were implemented effectively.


To add to the audio concern, this was also a big problem on High Fidelity, but it’s honestly a problem in most other games that use voice chat as well (haha, insert Fallout 76 joke here). The issue was that no one really had proper microphone settings that picked up their voices at the right volumes, even when people used the same headsets. Again, as I argued in the past, audio perception is unique to each person: what one calls loud, another calls quiet, and where one says the music sounds fine, another will rip the Beats by Dre headphones off and assert their Audio-Technica dominance.

High Fidelity, and virtually every other VR platform with voice communication, especially those with desktop-mode options, has an issue where people’s volumes are just not right. Sure, you can adjust each user individually, but High Fidelity never offered a per-user preference system (again, anonymous systems took precedence). This meant you could have a user come in with a laptop microphone set to overdrive who became ear-bleedingly loud, or people whose volume was not quite high enough. This also became an issue for those who did take the time to set their levels correctly, because in conversation the loudest one wins, and they’d get muffled in the crowd.
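The missing piece is tiny: per-user gain that persists against a stable ID instead of a throwaway session. A sketch of what such a preference system could look like (class, method names, and the 0–2 gain range are all my assumptions):

```python
class VolumePrefs:
    """Remember a listener's per-user gain adjustments across sessions,
    keyed by a stable user ID rather than a session that vanishes on
    disconnect."""

    def __init__(self):
        self._gains = {}

    def set_gain(self, user_id, gain):
        # clamp to a sane range so one bad setting can't be ear-bleeding
        self._gains[user_id] = max(0.0, min(2.0, gain))

    def apply(self, user_id, sample):
        # unknown users pass through at unity gain
        return sample * self._gains.get(user_id, 1.0)
```

Turn the loud laptop user down once, and they stay turned down the next time they show up.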

High Fidelity’s team was blessed with a strong audio team, but at times it felt like that same team forgot what the average user is like and how much they honestly cared about their setup. I may be rocking an MXL 990 connected to an Alesis Mictube Solo tube preamp and a Behringer U-Phoria UMC204HD, set to just the right levels so it never peaks too high or sits too low, but that’s an honest exception to the normal userbase (and a bit overkill). You may have people using gaming headsets from 8 years ago that were meant for consoles, where mic quality wasn’t really important and the built-in compressor was terrible.

But the biggest irony of all is that for a platform where audio quality is king, volume problems came up constantly, yet with VRChat’s single volume slider, I can count the times it was an issue on one hand (leaving out obvious loud troll bombing, since both platforms had that).