'Unable to Move' Bug


Continuing the discussion from Meetup today - April 8th @ 2pm PDT:

@chris, here is the photo you requested. Yes, it looks like the texture buffer size came very close to (or maybe briefly exceeded) the size of the graphics VRAM. This was on an ASUS G73 with an NVIDIA 460M. You can see in the video that everything was rendering OK, I just couldn't move around. I'm unclear how blowing out the texture memory in the video relates to the avatar no longer moving linearly (rotations and jumping still worked). Anyway, here it is:


So this appeared to be us hitting a texture limit?

Is this something that will be fixed, or is it a "ye cannae change the laws of physics" kind of deal?


I agree that it seems unrelated.

However, we have seen cases where, once GPU memory usage climbs significantly higher than the RAM available on the GPU, the driver will often go into a "virtual memory" mode and essentially swap between system RAM and GPU RAM. This swapping can block all GPU activity, and you end up with very low render and present rates. That isn't happening in your case, as evidenced by the 60 Hz present rate.
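To make the oversubscription scenario concrete, here is a back-of-envelope sketch (an illustration only, not High Fidelity's actual accounting) of how quickly mipmapped RGBA textures add up against a laptop GPU with, say, 1.5 GB of VRAM:

```javascript
// Back-of-envelope texture accounting -- an illustration of how VRAM
// fills up, not High Fidelity's actual bookkeeping.
function textureBytes(width, height, bytesPerPixel = 4, mipmapped = true) {
  // A full mip chain adds roughly 1/3 on top of the base level.
  let total = 0;
  let w = width, h = height;
  while (true) {
    total += w * h * bytesPerPixel;
    if (!mipmapped || (w === 1 && h === 1)) break;
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
  }
  return total;
}

// One mipmapped 4096x4096 RGBA8 texture is ~85 MB...
const one4k = textureBytes(4096, 4096); // 89,478,484 bytes
// ...so on a 1.5 GB card, fewer than twenty of them -- before counting
// framebuffers, meshes, and everything else -- pushes past the limit
// and into driver-managed swapping.
const vram = 1.5 * 1024 * 1024 * 1024;
const fits = Math.floor(vram / one4k); // 18
```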

So “unable to move” in this case is not likely related to GPU memory usage.


There are a couple of different "things to fix" here. Unlike game titles, where you can turn to the art department and say "use fewer textures or you're fired," we're in some sense at the mercy of all the users who create their own content.

One fix is to give content developers tools like renderStats.js so they can see how their content interacts with the system and how it will perform for end users. We're going to continue developing tools like this to help content developers optimize their content.

The other vector is for us to do more work on LOD and gracefully downgrade the experience when the content doesn't fit a particular playback environment. This is ultimately the area where I believe most people will see the biggest benefit. But of course, it has the side effect that some people get a degraded experience compared to what the content author originally intended.
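One way that kind of graceful downgrade can work is to drop the top mip levels of the most expensive textures until the scene fits the device's memory budget. A minimal sketch of the idea, with made-up names and a made-up policy (not High Fidelity's implementation):

```javascript
// Hypothetical sketch: degrade textures to fit a GPU memory budget by
// dropping their top (highest-resolution) mip levels, most expensive
// texture first. Assumes RGBA8 textures with full mip chains.
function mipChainBytes(w, h, bpp = 4) {
  let total = 0;
  while (true) {
    total += w * h * bpp;
    if (w === 1 && h === 1) break;
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
  }
  return total;
}

function degradeToFit(textures, budgetBytes) {
  // textures: array of {width, height}; each returned entry carries a
  // droppedLevels count meaning "skip this many top mips at load time".
  const out = textures.map(t => ({ ...t, droppedLevels: 0 }));
  const size = t => mipChainBytes(Math.max(1, t.width >> t.droppedLevels),
                                  Math.max(1, t.height >> t.droppedLevels));
  let total = out.reduce((sum, t) => sum + size(t), 0);
  while (total > budgetBytes) {
    // Degrade the texture that currently costs the most memory.
    const biggest = out.reduce((a, b) => (size(a) >= size(b) ? a : b));
    if (size(biggest) <= mipChainBytes(1, 1)) break; // nothing left to drop
    total -= size(biggest);
    biggest.droppedLevels += 1;
    total += size(biggest);
  }
  return out;
}
```

For example, two mipmapped 1024x1024 RGBA8 textures (~5.3 MB each) against an 8 MB budget would see one of them knocked down to 512x512.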

The final vector is to optimize how the system uses the GPU. We've got a couple of currently active tasks that will give us more headroom by using less GPU memory for existing scenes. This might sound like a good idea (and it is), but it doesn't solve the general problem, because content, like a gas, expands to fill the space available to it.
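As one example of the kind of headroom such optimizations can buy: moving textures from uncompressed RGBA8 (4 bytes per texel) to a block-compressed format like BC1/DXT1 (8 bytes per 4x4 block, i.e. 0.5 bytes per texel) is an 8x reduction. The scene sizes below are made up purely for illustration:

```javascript
// Back-of-envelope comparison of uncompressed vs. block-compressed
// texture memory. Scene contents are invented for illustration.
const RGBA8_BPT = 4;  // bytes per texel, uncompressed
const BC1_BPT = 0.5;  // bytes per texel, BC1/DXT1 (8 bytes per 4x4 block)

function sceneTextureBytes(textures, bytesPerTexel) {
  return textures.reduce((sum, t) => sum + t.width * t.height * bytesPerTexel, 0);
}

const scene = [
  { width: 2048, height: 2048 },
  { width: 2048, height: 2048 },
  { width: 1024, height: 1024 },
];
const uncompressed = sceneTextureBytes(scene, RGBA8_BPT); // 37,748,736 bytes
const compressed = sceneTextureBytes(scene, BC1_BPT);     //  4,718,592 bytes
```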


What I was trying to do is have my models share the same textures where possible, so the textures are not packed into each model; instead, multiple textures are shared across multiple models.
Is there any benefit to doing this with how HiFi is set up at the moment, or is it treating them all as unique anyway?
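The sharing described above is the standard engine pattern: a cache keyed by the texture's URL or content hash, so every model referencing the same image shares one GPU copy. A hypothetical sketch of the idea (class and method names are made up, not High Fidelity's API):

```javascript
// Illustrative texture deduplication: a cache keyed by URL or content
// hash, so models referencing the same image share one GPU upload.
// Hypothetical sketch, not High Fidelity's actual code.
class TextureCache {
  constructor() {
    this.byKey = new Map(); // key -> { texture, refCount }
    this.uploads = 0;       // how many real GPU uploads happened
  }
  acquire(key, loadFn) {
    let entry = this.byKey.get(key);
    if (!entry) {
      entry = { texture: loadFn(key), refCount: 0 };
      this.uploads += 1; // first reference pays the upload cost
      this.byKey.set(key, entry);
    }
    entry.refCount += 1;
    return entry.texture;
  }
  release(key) {
    const entry = this.byKey.get(key);
    if (!entry) return;
    entry.refCount -= 1;
    if (entry.refCount === 0) this.byKey.delete(key); // free GPU memory
  }
}
```

With a cache like this, ten models referencing the same brick texture cost one upload instead of ten; memory only comes back when the last referencing model releases it.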


Sharing textures is really needed. But I see High Fidelity as not stable and finished enough yet to try working with. As an example, in the assets uploader there's no directory we can upload the textures to.

But if textures are already a problem now, what is going to happen when we get terrain that can handle different textures? Maybe the number we can use will get limited to, say, 15 different ones per domain. Yes, very restrictive.


Yes, absolutely there is a benefit to doing it that way.