4762 has much higher overhead


Continuing the discussion from Random Picture Thread:

Another datapoint. With 4762, I can no longer experience the Music domain because I am unable to move more than once every 5 seconds there. Even lowering LOD to 0 doesn't help. I used to be able to navigate through Playa nicely and smoothly before, but now it is like walking through molasses. I also got a TDR there: http://nvidia.custhelp.com/app/answers/detail/a_id/3633. I am OK in my low-content 'conferencing' domain.

Looks like this build kicks me out of HF virtual worlds until I drop a couple thousand on a new computer or the slowdown gets addressed.


My sympathies. I had to build a new machine and could only afford the NVIDIA GTX 960 (a 970 is recommended), so I worried when I read your post, but both of those 'sites', Music and Playa, were OK for me, if slow to load, as I experienced before. I think the high requirements ('GTX 970 recommended') will again make for slow HiFi uptake, but I guess on the back of Oculus and Vive everybody will be forced to upgrade (eventually).


Can you provide more details about your setup?

Would you be willing to send us your hifi-log.txt from an attempt to go to the Music domain, so we can try to figure out why your system is experiencing problems?

We did make a handful of graphics changes between 4730 and 4762, but in our testing those changes all improved performance on both higher-end and lower-end graphics cards.

Any additional details you can provide would be helpful.


Will do soonest. I am presently trying out some workarounds where I override some of the 3D graphics settings using the NVIDIA Control Panel.



A few things to try, and that would be useful to report to us with performance problems:

  • Run renderStats.js
  • Report CPU and GPU memory sizes for textures.
  • Report whether the GPU texture transfer count is ever non-zero for any significant length of time (more than a few seconds once loading is complete)

Some of the changes introduced in 4762 involve computing texture mips on the CPU during texture load instead of on the GPU. If your system is CPU-memory constrained, this could have a negative effect on performance while the texture is loading, as it will consume more CPU memory. This change was intended to smooth out some of the frame stuttering seen while loading content-heavy domains, as generating the mips on the GPU is part of what causes those stutters. However, this effect should only consume more memory in the period between the time you load the texture and when it's transferred to the GPU. On the other hand, right now we load a texture into CPU memory when we get it from the entity server but only transfer it to the GPU if it enters the viewport, hence the suggestion of spinning a full 360.
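For illustration, the CPU-side mip generation described here can be sketched as a simple 2×2 box filter over a grayscale image. This is a toy sketch under that assumption, not Interface's actual texture code (which works with real GPU texture formats); the function names are invented.

```python
# Toy sketch of CPU-side mip generation with a 2x2 box filter.
# Assumes a square, power-of-two grayscale image stored as nested lists.

def downsample(img):
    """Average each 2x2 block to produce the next mip level."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

def build_mip_chain(img):
    """Return the full mip chain down to 1x1, base level first."""
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain
```

Each level halves both dimensions, so the whole chain adds roughly a third of the base image's footprint on top of it, and all of that sits in CPU RAM until the texture is transferred to the GPU.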

The PR build I linked is designed to improve performance in memory constrained situations. If you are close to the maximum allowed amount of texture memory (based on the total GPU memory reported by the card) then we start lowering the resolution of the rendered textures automatically.
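The automatic resolution-lowering idea can be sketched like this: track the approximate bytes used by each texture's mip chain, and while the total exceeds a budget derived from reported GPU memory, halve the largest resident texture. The names, byte math, and threshold policy here are invented for illustration and are not the PR's actual code.

```python
# Toy sketch: lower texture resolution when near a texture-memory budget.

def texture_bytes(w, h, bpp=4):
    """Approximate size of a full mip chain (a geometric series,
    roughly 4/3 of the base level)."""
    total = 0
    while w >= 1 and h >= 1:
        total += w * h * bpp
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

def enforce_budget(textures, budget_bytes):
    """textures: dict name -> (w, h). Repeatedly halve the largest
    texture until total usage fits within the budget."""
    def used():
        return sum(texture_bytes(w, h) for w, h in textures.values())
    while used() > budget_bytes:
        name, (w, h) = max(textures.items(),
                           key=lambda kv: kv[1][0] * kv[1][1])
        if w <= 1 and h <= 1:
            break  # nothing left to degrade
        textures[name] = (max(w // 2, 1), max(h // 2, 1))
    return textures
```

Halving always targets the current largest texture first, which matches the idea of freeing the most memory per quality step.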

Without knowing the cause of the performance drop on your system it’s something of a shot in the dark, but the more feedback you can give us the better able we are to try to address the problem. Trust me, we are not trying to reduce performance in favor of features. Quite the opposite, we’re currently focused on making sure the platform is stable across a wide range of hardware, and in particular, maintaining a reasonable framerate and not crashing on low memory / low power systems.


Will do!


Started A/B testing 4762 against PR7693. Went to Playa and just spun around. Results were:

4762 crashed running out of texture memory.

PR7693 ran longer but then hung for about 30 seconds before the crash-dump dialog popped up (report sent). This is a snapshot from when Interface stopped responding altogether.

Not using an HMD in this test.
This is on an ASUS G73SW laptop:
Intel Core i7-2630QM @ 2 GHz
Hybrid 7200 RPM HDD and SSD
NVIDIA GeForce GTX 460M, 1.5 GB VRAM (well below the minimum for HMD use)
Win10 Pro 1552 production bits
Graphics driver: 2/15/2016 WHQL


Build 4791 is much better. It looks like your changes have helped tremendously. I can spin around at the Playa entry location without incident. I did a complete circuit to really push textures into the GPU, and eventually Interface crashes with OOM, so it looks like the code needs to be quick about watching for the limits. Otherwise, a good direction in the changes so far.

I manually enhanced the renderStats pane: cranked up the contrast and upped the highlights to make it readable.


It’s more likely that the current texture degradation code is too limiting in what it will degrade. Basically we only look at whether we should downgrade a texture when it’s actually used, and only if it has the same number of mips as the maximum in use.

A 1024x1024 texture has 10 mips, a 512x512 has 9 mips, and so on… so if we're using a texture that has 9 mips and somewhere else there's a texture that has 10 mips, then we won't even consider degrading the 9-mip version. The idea is that we don't want to needlessly reduce quality by degrading smaller textures when bigger textures still exist that would free up more memory. However, this presents a problem: not every texture is rendered in every frame… so if you spin around and a 10-mip texture gets loaded and then you look away from it, and no other texture you're looking at has 10 or more mips, nothing can be degraded and no memory can be freed. In addition, I've discovered that textures that are loaded briefly, used, and then released end up in the texture cache. These textures can easily block other in-use textures from being degraded.
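The mip-count bookkeeping above (1024 → 10, 512 → 9, counting as this post does by the log2 of the largest dimension) and the current eligibility rule can be written out as a tiny sketch; the function names are invented for illustration.

```python
from math import log2

def mips(size):
    """Mip count as counted in this post: log2 of the texture's largest
    dimension, so 1024 -> 10 and 512 -> 9."""
    return int(log2(size))

def eligible_for_degrade(texture_mips, max_mips_in_use):
    """Under the current policy, a texture is only considered for
    degradation when it matches the largest mip count currently in use."""
    return texture_mips == max_mips_in_use
```

This makes the stuck case concrete: with a 9-mip texture on screen and a 10-mip texture resident but out of view, `eligible_for_degrade(9, 10)` is false, so nothing gets freed.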

The solution is multi-part. First, as we approach the texture memory limit, start reducing the size of the texture cache. Keeping those textures around for rapid response in case they get used again should take a back seat to preventing the user from completely running out of GPU memory. Second, instead of limiting texture degradation based on the biggest texture currently on the GPU, we should do so based on the biggest texture from the most recently rendered frame. Finally, we need a way to go through all the textures that are eligible for degradation but haven't been rendered recently, and see whether they need to be degraded.
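A toy sketch of those three steps, with all names, data shapes, and ordering invented for illustration (the real implementation would live in Interface's texture management code):

```python
# Toy sketch of the multi-part reclamation plan: evict the idle cache
# first, then degrade relative to the most recent frame's biggest texture.

def reclaim_memory(in_use, cache, recent_frame_max_mips, over_budget):
    """in_use: dict name -> mip count of textures still referenced.
    cache: list of names held only in case they're reused.
    Returns the actions taken, in order."""
    actions = []
    if not over_budget:
        return actions
    # 1. Shrink the texture cache before touching anything visible:
    #    cached-but-unused textures are the cheapest memory to reclaim.
    while cache:
        actions.append(("evict", cache.pop()))
    # 2. Degrade relative to the biggest mip count in the most recently
    #    rendered frame, not the biggest texture resident on the GPU --
    #    so an out-of-view 10-mip texture can't block everything else.
    for name, m in sorted(in_use.items()):
        if m >= recent_frame_max_mips:
            in_use[name] = m - 1
            actions.append(("degrade", name))
    return actions
```

Keying step 2 off the recent frame also covers the third part of the plan: textures that haven't been rendered lately no longer inflate the threshold that decides what is eligible.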

Just haven’t gotten around to the implementation yet.