List of requests after some time using HiFi


Okay, so after several months of using HiFi and getting an actual world into it, I’ve compiled a list of features I found lacking in Interface.

Some of these may actually have already been added and I may have just completely missed how to do them despite all my searching and wiki-reading. Apologies and thank yous if so. Also, some of these suggestions will become irrelevant if custom shader support is finally added for models.

  • High Fidelity already supports lightmaps, but does not support more than one UV map per mesh. This effectively makes lightmaps useless for anything besides a small prop. Please add the ability to have at least 2 UV maps per mesh, and make lightmaps use the second UV (this is what UE4 does by default for its lightmaps). Lightmaps by their very nature demand non-overlapping, non-intersecting UVs, but your diffuse texture doesn’t always meet that criteria (in fact, it almost never does for large props or terrain, which frequently use tiling textures), so we need two different UV maps: one for the albedo and one for the lightmap. An example: a large city whose buildings all use the same repeating/tiling brick texture, with a UV map that goes past the UV boundaries to create the tiling effect. To lightmap those buildings properly, you couldn’t possibly reuse the brick UVs, because that texture repeats and tiles, while lightmap textures have to fit entirely within the texture space boundaries. I should add that baked lighting helps performance dramatically; making it an actually viable option in HiFi for those who want to use it would go a long way towards maintaining good framerates in Interface.
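
A tiny sketch of why the brick UVs can’t double as lightmap UVs (plain math, no HiFi API; the data is made up to mirror the brick-wall example):

```javascript
// A lightmap UV set must fit entirely inside the [0,1] texture space;
// a tiling UV set (coordinates past 1.0) cannot be reused for it.
function fitsLightmapSpace(uvs) {
    return uvs.every(function (uv) {
        return uv.u >= 0 && uv.u <= 1 && uv.v >= 0 && uv.v <= 1;
    });
}

// A tiling brick wall repeats its texture 4 times across:
var tilingUVs = [{ u: 0, v: 0 }, { u: 4, v: 0 }, { u: 4, v: 1 }, { u: 0, v: 1 }];
// A dedicated, non-overlapping lightmap unwrap of the same wall:
var lightmapUVs = [{ u: 0.1, v: 0.1 }, { u: 0.9, v: 0.1 }, { u: 0.9, v: 0.4 }, { u: 0.1, v: 0.4 }];
```

The first set fails the check, which is exactly why a second UV channel is needed.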

  • Add the ability to animate the UVs of any texture, either by simple scrolling on the X or Y axis, or by using actual separate animation frame textures (like GIFs do). This is incredibly important for creating convincing-looking water.
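
The two modes could boil down to something like this (function names are hypothetical, not an existing HiFi API):

```javascript
// Mode 1: continuous scrolling of the UV offset over time.
function scrollOffset(speedU, speedV, elapsedSeconds) {
    // Wrap into [0,1) so the offset never grows unbounded.
    return {
        u: (speedU * elapsedSeconds) % 1,
        v: (speedV * elapsedSeconds) % 1
    };
}

// Mode 2: GIF-style flipbook, picking a frame texture by time.
function flipbookFrame(frameCount, fps, elapsedSeconds) {
    return Math.floor(elapsedSeconds * fps) % frameCount;
}
```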

  • Add a “draw distance” setting to Zone entities. While a player is inside the zone, the setting takes over and anything past the set draw distance is no longer rendered. Benefits of this feature are HUGE for frame rate and optimization in general.
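
The per-entity check such a setting implies is cheap; a minimal sketch (all names hypothetical, plain distance math, no HiFi API):

```javascript
// While the avatar is inside a zone, that zone's drawDistance overrides the
// global one; anything farther than it is simply not rendered.
function shouldRender(cameraPos, entityPos, drawDistance) {
    var dx = entityPos.x - cameraPos.x;
    var dy = entityPos.y - cameraPos.y;
    var dz = entityPos.z - cameraPos.z;
    // Compare squared distances to avoid a square root per entity.
    return (dx * dx + dy * dy + dz * dz) <= drawDistance * drawDistance;
}
```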

  • Add a checkbox to model entities’ Physics section, called “Camera Collision”, that blocks the user’s camera from passing through the entity’s collision model/hull. This is necessary to keep the camera from going outside of the building the user is in, which dramatically breaks immersion in third-person mode. It happens very frequently in a small room or building: instead of seeing your avatar, you just see the walls around them, forcing you to manually zoom in very close to actually see them. This is obviously a non-issue in VR mode, but I use HiFi exclusively in third-person non-VR mode, and so will many of your other users if HiFi is to be successful. I totally understand focusing entirely on VR first above all else, but making small concessions like this for non-VR users will make them stick around longer, long enough to buy things from the marketplace and bring in $$$ for you, so you should care about making them comfortable too.
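
One common way engines implement this is to raycast from the avatar towards the desired camera position and pull the camera in front of the first hit. A sketch of just the clamping step (the raycast itself and all names here are assumed, not an existing HiFi API):

```javascript
// desiredDistance: how far behind the avatar the camera wants to sit.
// obstacleDistance: distance to the first collision hull hit along that
// ray, or null if nothing is in the way.
// skin: small margin that keeps the camera just in front of the surface.
function clampCameraDistance(desiredDistance, obstacleDistance, skin) {
    if (obstacleDistance === null) {
        return desiredDistance;   // clear line of sight, no clamping
    }
    return Math.min(desiredDistance, Math.max(obstacleDistance - skin, 0));
}
```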

  • The ability to swap textures for a model entity on-the-fly via script. Also, an Opacity setting for entities that isn’t just On or Off, but 1-100%. This would allow, for example, a daytime skybox to transition into a nighttime one.
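
Assuming such a 1-100% opacity property existed, the day/night transition would just be a timed crossfade between two overlapping skyboxes. A sketch of the math (everything here is hypothetical):

```javascript
// Returns the opacity (0-100) for each skybox at a given point in the
// transition: the daytime one fades out while the nighttime one fades in.
function crossfadeOpacities(elapsedSeconds, transitionSeconds) {
    var t = Math.min(Math.max(elapsedSeconds / transitionSeconds, 0), 1);
    return {
        dayOpacity: Math.round((1 - t) * 100),   // 100 -> 0
        nightOpacity: Math.round(t * 100)        // 0 -> 100
    };
}
```

A script would run this every frame and write the two values into the entities’ opacity properties.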

  • A button in the default HUD that mutes or unmutes all web entities in the domain for the user. I use a web entity for the background music in my domain, but some people prefer to have their own music playing in iTunes or whatever; this HUD button would make it easy for them to instantly disable my music without having to hunt around for the web entity (I make mine very hard to find to maintain immersion). SL does this right by having a music note icon in the top right of the viewer. It’s easy as pie to disable media there, but not here.

  • Please make the ATP asset browser window more like Windows Explorer, with the ability to search, sort, change views, etc. Also add the ability to upload multiple files at a time; it’s currently a huge pain to upload a model with many textures to ATP, since you have to upload every texture one by one.


I totally second this…


:basketball_woman: I totally third this…


Bump. This is a list worth having.


A possible addition would be support for binary space partitioning, which could use zones to work out visibility. That way, we could say that if a user is in Zone A and is walking towards Zone B, while Zone C is to the right of Zone B, the user can’t possibly see Zone D, which is behind Zone C, so there is no need to render it.
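
In effect that is a per-zone potentially-visible-set: precompute, for each zone, which zones can possibly be seen from it, then skip the rest. A toy sketch using the A/B/C/D layout above (the table is hand-written and purely illustrative; a real system would derive it from the zone geometry):

```javascript
// For each zone, the list of zones that could be visible from inside it.
var potentiallyVisible = {
    A: ["A", "B", "C"],        // D is occluded behind C, so it is omitted
    B: ["A", "B", "C", "D"],
    C: ["A", "B", "C", "D"],
    D: ["B", "C", "D"]
};

function shouldRenderZone(currentZone, targetZone) {
    return potentiallyVisible[currentZone].indexOf(targetZone) !== -1;
}
```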

Of course, one of the bigger things to come with the system is the sparse voxel octree, which would make binary space partitioning even more powerful. Once that’s been finalized, it would be interesting to see how it works with a draw distance system, since we would be specifying not just the zone’s draw distance but also how fine to make the voxel octree (that is, how much area each voxel covers based on distance).

As for camera collision, the only issue I see is that, as VR catches on, this may not be as much of an issue. That said, I think something like it could already be built by putting the camera into entity mode and having it follow a local physical object; as that object performs collision checks, the camera would be bound to it and respond accordingly. This also means we can apply smoothing to avoid the traditional physics-engine habit of pushing things into each other. Either way, I am not downplaying the argument that a successful product must be usable both in VR and in desktop mode, be it in FPV or TPV.

A button for the HUD that mutes all web entities would be interesting, assuming that web entities remain their own channel. I’m not sure web entities even have a volume parameter, or whether one is in the works, but that would be the biggest first step towards this solution, with a muted boolean value being the easiest to address.

And the ability to swap textures on a model (especially avatars) via a script, along with opacity settings for entities, would be welcome as well.


Well, an occlusion system should be easy to do and is definitely a requirement in real-time applications, even more so in VR, where optimization is key for frame rate.
Having occluder and occludee attributes, plus a way to bake all of that, would be great. This is something found in most if not all 3D game engines…


If there was ANY way for a script to get access to the UV map of a model and muck with it, then we could write client scripts to do scrolling, rotating, flipping, etc. Give me a place to stand and I can write my own texture animation!
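
Scrolling, rotating and flipping are all just one 2D transform applied per UV coordinate, so script access to the UV map really would be enough. A sketch of such a helper (names and parameters are illustrative, not an existing API):

```javascript
// Applies flip, rotation (about the UV origin) and scroll offset to one UV
// coordinate, wrapping the result back into [0,1) for tiling textures.
function transformUV(uv, offset, angleRadians, flipU) {
    var c = Math.cos(angleRadians), s = Math.sin(angleRadians);
    var u = flipU ? 1 - uv.u : uv.u;
    return {
        u: (u * c - uv.v * s + offset.u) % 1,
        v: (u * s + uv.v * c + offset.v) % 1
    };
}
```

A client script would call this for every coordinate each frame, advancing `offset` or `angleRadians` over time to animate the texture.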