TL;DR: When building content, you should try to ensure that your texture sizes are always exact multiples of 128x128. (This applies to textures larger than 128x128; smaller textures are fine at any size.)
I’m currently working on some rendering back-end code that will smooth out performance while loading content, and will also let the application reduce the texture quality of already-loaded content if we start running out of texture memory. However, this functionality has some limitations that depend on the content being loaded.
This functionality relies on an underlying OpenGL feature called ‘sparse textures’. Sparse textures are only supported for textures whose dimensions are an exact integer multiple of the ‘sparse page size’, which is most commonly 64x64 or 128x128. This means that sparse textures can’t be used if a texture is, for instance, 10,000 x 10,000 pixels. However, 10,240 x 10,240 pixels would work just fine.
Following this guideline will make it much more likely that people running in HMDs can load your content without massive stuttering, and that when the amount of content exceeds the available memory on the card, we can respond by lowering quality rather than letting the framerate plummet.