It’s more likely that the current texture degradation code is too limiting in what it will degrade. Basically, we only consider downgrading a texture when it’s actually being used, and only if it has the same number of mips as the largest mip count currently in use.
A 1024x1024 texture has 10 mips, a 512x512 has 9 mips, and so on… so if we’re using a texture that has 9 mips and somewhere else there’s a texture that takes up 10 mips, we won’t even consider degrading the 9-mip version. The idea is that we don’t want to needlessly reduce quality by degrading smaller textures when bigger textures still exist that would free up more memory. However, this presents a problem: not every texture is rendered in every frame… so if you spin around, a 10-mip texture gets loaded, and then you look away from it, and no other texture you’re looking at has 10 or more mips, nothing can be degraded and no memory can be freed. In addition, I’ve discovered that textures that are loaded briefly, used, and then released end up in the texture cache, and those cached textures can easily block other in-use textures from being degraded.
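To make that limitation concrete, here’s a minimal sketch of the kind of eligibility check described above. The names (`Texture`, `mipCount`, `gMaxMipsInUse`, `shouldConsiderDegrading`) are hypothetical, not the actual code:

```cpp
// Hypothetical sketch of the current, overly restrictive eligibility test.
// A texture is only considered for degradation at the moment it is used,
// and only if its mip count matches the largest mip count on the GPU.

struct Texture {
    int  mipCount;   // e.g. 10 for a 1024x1024 texture, 9 for 512x512, ...
    bool inUse;      // true if this texture is being rendered right now
};

int gMaxMipsInUse = 0;   // largest mip count among all textures on the GPU

bool shouldConsiderDegrading(const Texture& tex)
{
    // Only looked at when the texture is actually being used...
    if (!tex.inUse)
        return false;

    // ...and only if it is as big (mip-wise) as the biggest texture loaded.
    // A 9-mip (512x512) texture is never considered while any 10-mip
    // (1024x1024) texture exists anywhere on the GPU, even one that is
    // no longer being rendered.
    return tex.mipCount >= gMaxMipsInUse;
}
```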
The solution is multi-part. First, as we approach the texture memory limit, start shrinking the texture cache. Keeping those textures around for a rapid response in case they’re used again should take a back seat to preventing the user from completely running out of GPU memory. Second, instead of limiting texture degradation based on the biggest texture currently on the GPU, we should base it on the biggest texture from the most recently rendered frame. Finally, we need a way to periodically walk through the textures that are eligible for degradation but haven’t been rendered recently, and degrade them if needed.
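Here’s a rough sketch of where I’m headed with all three parts. The names (`TextureCache`, `trimCacheUnderPressure`, `maxMipsInLastFrame`, `sweepStaleTextures`) are made up for illustration, and it assumes each texture records the last frame it was rendered in. This is just a sketch of the idea, not the actual implementation:

```cpp
// Rough sketch of the planned changes; nothing here is real viewer code.
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

struct Texture {
    int    mipCount;            // 10 for 1024x1024, 9 for 512x512, ...
    long   lastRenderedFrame;   // frame number when this texture was last drawn
    size_t gpuBytes;            // approximate GPU memory this texture occupies

    // Drop the largest mip level; the top level holds roughly three quarters
    // of the pyramid's memory, so this frees most of the texture's footprint.
    void dropLargestMip()
    {
        if (mipCount > 1) {
            gpuBytes /= 4;      // crude approximation for the sketch
            --mipCount;
        }
    }
};

struct TextureCache {
    std::deque<Texture*> released;   // textures kept around for quick reuse

    // Free the oldest cached texture; returns the bytes reclaimed.
    size_t evictOldest()
    {
        if (released.empty()) return 0;
        size_t freed = released.front()->gpuBytes;   // real code would release the GPU resource
        released.pop_front();
        return freed;
    }
};

// 1) As we approach the texture memory limit, shrink the cache so that
//    cached-but-unused textures stop blocking memory recovery.
void trimCacheUnderPressure(TextureCache& cache, size_t& usedBytes, size_t limitBytes)
{
    const double kPressure = 0.9;    // assumed tuning threshold
    while (usedBytes > limitBytes * kPressure && !cache.released.empty())
        usedBytes -= cache.evictOldest();
}

// 2) Compute the largest mip count from the most recently rendered frame,
//    rather than from everything resident on the GPU.
int maxMipsInLastFrame(const std::vector<Texture*>& renderedLastFrame)
{
    int maxMips = 0;
    for (const Texture* t : renderedLastFrame)
        maxMips = std::max(maxMips, t->mipCount);
    return maxMips;
}

// 3) Periodically sweep textures that are eligible for degradation but
//    haven't been rendered recently, and degrade them to reclaim memory.
void sweepStaleTextures(std::vector<Texture*>& allTextures, long currentFrame, long staleFrames)
{
    for (Texture* t : allTextures)
        if (currentFrame - t->lastRenderedFrame > staleFrames)
            t->dropLargestMip();
}
```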
Just haven’t gotten around to the implementation yet.