Improve bandwidth usage for people with internet caps


Is there anything that can be done for people who have bandwidth caps, so this doesn't use up their whole monthly internet allowance?

I have a few friends from Canada and Australia who have a 100GB a month limit on their internet. They tell me that if High Fidelity is going to take up as much bandwidth as it does, they won't be able to play it, since it can use up that much in one to two days.


Thanks for bringing this up @DGMurdockIII. As an Australian newcomer to High Fidelity with a 200GB monthly cap shared between three users in my household, this was important for me and other Aussie users to know.

Please correct me if I misunderstood your post, but does High Fidelity in its current form already have the potential to chew up 100GB or more in only one or two days?


Yep, it can eat 100GB in one to two days.


This is partially due to models around 10 MB a pop, non-procedural animations, huge textures, and all the communication components such as hand and gestural data and voice chat.

Voice data is probably a third of the bandwidth usage. The rest is in the hands of the content creators and what is attached to avatars; the default avatars currently in the marketplace are around 10 MB apiece including textures.

However, there should be a way for the client to “restrict” downloading of models that are too huge for some connections (instead letting the user explicitly say “display this object” to start loading it). This will become especially relevant if they start looking in the mobile direction.

Turning off voice, though, is where the difficulty may lie, as there is no default text chat. Same with the hand/gestural data, as HF has been built around them rather than treating them as optional extras.

100GB seems a bit of an overshoot, though. I've only used about 200 MB an hour… 400 MB in a place with a crowd.
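For context, a quick back-of-the-envelope calculation (using the 200-400 MB/hour figures above, which are rough observations rather than official numbers) shows how long nonstop use would take to reach a 100GB cap:

```python
# Rough sanity check: hours of use needed to reach a 100 GB cap
# at the usage rates reported above (assumptions, not measurements).
CAP_GB = 100
rates_mb_per_hour = {"idle/sandbox": 200, "crowded place": 400}

for label, rate in rates_mb_per_hour.items():
    hours = CAP_GB * 1024 / rate  # treating 1 GB as 1024 MB
    print(f"{label}: {hours:.0f} hours (~{hours / 24:.1f} days of nonstop use)")
```

Even at the crowded-place rate that is roughly 256 hours, so "100GB in one to two days" would need a far heavier workload than sitting in a typical domain.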


Thanks @Menithal that was very informative.

This is probably relevant now for HiFi users in Australia, Canada and any other country that enforces bandwidth caps.

I’m glad I found out before I jumped in unknowingly! Perhaps my recent internet problems have been a blessing in disguise! :open_mouth:


Are the limits in gigabits or gigabytes per month? And is the limit X in and Y out, or do inbound and outbound add together to define a total cap? Just want to be clear on that before jumping in with some thoughts.

That being said - SL displays its bandwidth stats in bytes per second (at least Firestorm does), whereas it's in bits per second for Interface.

And - I agree it's a bit heavy right now even if you don't have usage caps. Will reserve additional comment until the question above is cleared up.


In Australia, it’s gigabytes per month, with both inbound and outbound adding to define the total cap. The biggest ISP here has home broadband plans ranging from 50GB to 500GB per month.


That's a good point; I used the wrong units. My previous post uses bytes, not bits (monitored through network usage).

kb = kilobits (used mostly to describe speeds)
kB = kilobytes (used for sizes). A byte is 8 bits, and a kB is 1024 bytes (rather than just 1000).

So yes, 200 MB translates to 1.6 Gb, and Interface describes this in bits, so it will generally look like 8x the value (if you are seeing 100 Gb in HF, that equals 12.5 GB).

There should be a clear distinction, but I doubt ISPs like to point any of these differences out to users, since using bits inflates the numbers :stuck_out_tongue:
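To make the arithmetic explicit, the conversions in this post can be written out in a couple of lines (a sketch assuming decimal giga for the speed-style figures, matching the 200 MB -> 1.6 Gb example):

```python
BITS_PER_BYTE = 8

def mb_to_gbits(megabytes):
    """Convert megabytes to gigabits (matches the 200 MB -> 1.6 Gb example)."""
    return megabytes * BITS_PER_BYTE / 1000

def gbits_to_gb(gigabits):
    """Convert gigabits back to gigabytes."""
    return gigabits / BITS_PER_BYTE

print(mb_to_gbits(200))  # 1.6, i.e. 1.6 Gb as stated above
print(gbits_to_gb(100))  # 12.5, i.e. 12.5 GB as stated above
```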

My values are a bit skewed, however, as they mostly come from just sitting in the sandbox and not loading everything… (I'm currently at a place with a spotty internet connection, and I am using a Mac with Intel graphics…) So there is that…


I’m going to crunch some numbers and make a pitch as to why we need to think hard about this subject and consider some modifications to the audio code and AV streaming data. I think we can have our cake and eat it too there. I have no caps on my usage, but this subject is important to me nonetheless. It directly impacts the ability to have large populations within a domain - even if hosting on a well-connected machine at a data center.

As to models: looking at data streamed from my private OpenSim grid for the same models as placed in HiFi, I see little difference in data sizes. That's a much tougher nut to crack, but I'll be the first to admit I've been anything but stingy with my texture sizes and model mesh sizes. It's hard to limit yourself in a world with few limits after being constrained in other places. That being said - I have one model that's 15MB between mesh and textures. Its textures are embedded within the FBX, so that's its total “cost” - not 15MB + X MB of textures. But I used PNG for all textures instead of only for the one requiring it (one with transparency). A quick remake using JPG at decent quality (not even max compression) shaved its size to 7MB.

I’m not sure how far we can go with reducing mesh/texture data transfer sizes… but, some improvement seems possible without totally killing the ability to do things in HF you can’t do in SL/OS.


We also have to remember that any files put online and used in HF consume the provider's bandwidth as well, so there is some natural restraint to it.


Exactly - I have an inventory system coded giving a similar experience to inventory in SL - things have names, folders, easy upload, etc. - and would like to offer it at some point, but a couple of things have to be solved before I can do so. First, I have to register paperwork to be a “Safe Harbor” provider under US DMCA rules, and second, I need to see a little more of where HiFi is going with certain things. Using CDN tech I can spread the load out and better serve content globally, but that's not free and could quickly start costing me some all-too-real $ if done without proper engineering/planning.


The only difference here is that we should encourage content creators not to bundle their textures into the content (in binary format), as referencing the same texture files could save on caching, especially if the same textures are used on other avatars or entity models.

Having a client adjustable restraint to the size of models could help.


That’s an interesting point - I’ve assumed it’s cached regardless, as the FBX is unpacked on the client side, so if it caches FBX with embedded textures it should be the same as not embedded. I need to add some debug code to confirm that is the case.

One important difference here from SL/OS is that in those worlds a texture used in hundreds of places can be the same texture - it references back to a UUID (for example the default plywood). If you’ve seen it in one place and cached it, it’s there no matter where you go on the grid, and many other textures have similar parentage relations. Here, if you see plywood.jpg in Heron, plywood.jpg in Sandbox, and so on, even if they are all identical they will each do an initial load to cache on first view. This is part of what I’ve worked on in my inventory system - assigning textures UUID values, using SHA256 hashes on textures and meshes to find identical content and reference it back to root UUID values for serving. It’s complex, but can save tons of bandwidth on all sides.
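The hash-based dedup idea can be sketched with Python's standard library (a minimal illustration of the technique, not the actual inventory-system code; the file contents here are fake placeholder bytes):

```python
import hashlib

def content_key(data: bytes) -> str:
    """Derive a stable ID from an asset's bytes; identical content -> identical key."""
    return hashlib.sha256(data).hexdigest()

# Two "plywood.jpg" files from different domains with identical bytes
# dedupe to a single cache entry; a genuinely different texture does not.
heron_plywood = b"\xff\xd8 fake jpeg bytes"
sandbox_plywood = b"\xff\xd8 fake jpeg bytes"
other_texture = b"\xff\xd8 different bytes"

cache = {}
for blob in (heron_plywood, sandbox_plywood, other_texture):
    cache.setdefault(content_key(blob), blob)

print(len(cache))  # 2: the identical textures share one key
```

The same digest can double as the "root UUID" the server hands out, so any client that has already fetched those bytes under any filename gets a cache hit.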

And a cap for a maximal download seems totally reasonable on client side. I’m all for people having tools to control their experience that aren’t server side limitations.


I like that idea of using SHA256 hashes for cache values.

Anyway, I made a worklist suggestion on the capping for the devs to look into.


Been following this discussion with great interest @Menithal and @OmegaHeron, even though I didn’t understand the technical bits! Thank you for your efforts on this issue, it is much appreciated!


100gb in two days?

somehow… that number seems… like… bs

If caching on interface is working as it should be, I find it hard to believe that bandwidth usage would be much higher than SL/OS. The avatars and domain builds here are mostly tiny compared to what is common in SL/OS.

As far as voice/sound/AV data goes, I am sure there will be some problems because there will always be people that think their avatar/gesture/hair etc. is worth the extra burden to everyone else.

SL seems to have been able to make a profit with tier payments vs. bandwidth but that may only have been because the bandwidth intensive places were offset by the much larger number of empty sims.

So yes… on a domain or user basis, a person might want to limit bandwidth.


I do not think that limits should be inherent in HiFi itself. The idea of limiting things to the least common denominator nerfs everything.

Here in the US some carriers (AT&T and Verizon, for instance) are trying to get people used to the idea that we should pay by the byte rather than the way it has been in the past, but the main reason they are pushing this is simply that they think they can get a better immediate return on investment - in other words, more profit in the short term without having to invest in the future.

There are a few things wrong with that model: (1) at non-peak times, using bandwidth doesn't cost them anything extra; (2) at peak times, even with bandwidth caps, the limit of the current infrastructure will be reached; and (3) there will be little if any incentive to upgrade the network.

If I were in a situation where I had to deal with that, I would complain, try to get it changed, take my business elsewhere, and do whatever I could to stop it.

Sorry for the rant but limiting things for everyone because of someone’s bandwidth limitations or ten year old computer sort of rubs me the wrong way.


Agreed with @Twa_Hinkle, it would be horrible to crush creativity down to the lowest spec.
However, this polyworld domain set me thinking. It seems a lot like the voxelisation tech that was originally mooted for HiFi.
The triangulation/decimation workflow used to get the low-poly look might be a lovely way to make the graphics scalable to all devices and to create collision volumes:
people who want to access HiFi on a watch will see a simplified representation of what the person with the high-end neon-lit nerd tower PC sees.


Yeah, the 100GB does sound suspect.

And yeah, prim limits were ugh in Second Life, considering that prims were local models just repeated in an XML tree defining their locations, so they were quite light on graphics cards.

However, texture size limitations did have their place. I'd dare say they might have been too lenient in SL, especially when people started to bring in 4k textures through third-party viewers. Thankfully HF does downsize textures to 1024 when caching, but there will still be people who put ridiculously sized textures on servers because they don't know any better (sure, it's their money).

But as discussed with OmegaHeron, the limitation definitely should not be set in stone; it should be adjustable by the end users.

But there should be some default value, and it should be client-side (as the client is the one loading the models). So if someone has a good connection, it should be completely possible for them to download all they want.

Nobody wants to be surprised by someone walking around with an avatar with a total size of 300 MB (hogging your bandwidth for a few seconds and then occupying your GPU memory).

In fact this seems extremely exploitable, and we might see griefers use the technique deliberately: upload a 1 GB junk model to a disposable Dropbox account, block the Dropbox host on their own end (to avoid loading the file themselves), then rez the object in-world, forcing everyone else to download it - tying up their bandwidth on that one file and, once rezzed, occupying their memory.

Sure, people should be able to load such content, but only those who have the capability to handle it should turn off the limitation.

That aside, keeping models and textures at an appropriate size benefits everyone: from the backend services hosting the files (and paying bandwidth per download) to downloading and rendering on the front end (aka the client).

Optimization of models should be encouraged; we will always be limited by hardware capabilities. Many game engines spend lots of time on optimization, even to get things running on the latest hardware. Artists tend to create too much detail, and engineers have to push back and ask them to optimize.
In fact, we are still missing LOD models for entities.

In short:
We have to remember that model optimization also brings the benefit of smaller file sizes.

So the easiest way to encourage this would be to create a flexible cap that can be adjusted or removed by the end user.

You can still do a lot of things with the default (and changeable) 25 MB total size per model limit I suggested. (In fact, the avatar I made only hits 2 MB with textures, blendshapes and joints.)
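As a sketch of how such a client-side cap could work (hypothetical names and defaults; this is not actual Interface code), the check is trivial once the client knows the advertised model size:

```python
DEFAULT_CAP_BYTES = 25 * 1024 * 1024  # 25 MB default, user-adjustable

def should_autoload(model_size_bytes, cap_bytes=DEFAULT_CAP_BYTES):
    """Return True if the model is small enough to download automatically.

    cap_bytes=None means the user has removed the limit entirely.
    """
    if cap_bytes is None:
        return True
    return model_size_bytes <= cap_bytes

print(should_autoload(2 * 1024 * 1024))                    # True: a 2 MB avatar is fine
print(should_autoload(300 * 1024 * 1024))                  # False: a 300 MB avatar is held back
print(should_autoload(300 * 1024 * 1024, cap_bytes=None))  # True: cap removed by the user
```

Models over the cap would show a placeholder until the user explicitly asks to display them, which also defuses the griefer scenario above.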

@Judas yeah, the decimation work is interesting, but it will not solve the bandwidth issue, as the model has to be loaded first to do the calculations.


Maybe the domain could report back the memory usage on the lobby page?
Personally, I'm loving building with absolutely no regard for the end user. I know the difference between good and bad models for games the way I understand that cakes are fattening. It's just that I've been in Second Life so long I feel like I deserve a bit of a binge.


A good number of the larger textures I use could easily be generated by a few lines of code (eg colourised grey-noise for rock textures). A few built-in programmatic textures for commonly-used effects (mostly terrain, I would think) would be rather useful.

Another bunch of what I generate would work well as SVG textures (better, actually: infinite detail!).

Using in-world colour-tinted greyscale textures (as can be done in SL/OS) would also allow me to re-use textures in a few specific cases.

Finally, per-vertex-colour tinting of a generic texture may also be helpful. Anything that allows smaller textures to repeat without looking repetitive!
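The colourised grey-noise idea could look something like this: generate a greyscale noise map from a seed and tint it on the client, so only a seed and a tint colour (a few bytes) travel over the wire instead of a full texture. A minimal stdlib sketch (real rock terrain would use smoother noise such as Perlin, and a real renderer would do this on the GPU):

```python
import random

def grey_noise(size, seed):
    """Deterministic greyscale noise map: size x size values in 0..255."""
    rng = random.Random(seed)
    return [[rng.randrange(256) for _ in range(size)] for _ in range(size)]

def tint(grey, rgb):
    """Colourise a greyscale map by scaling each channel by the tint colour."""
    r, g, b = rgb
    return [[(v * r // 255, v * g // 255, v * b // 255) for v in row]
            for row in grey]

# A tiny "rock" patch: both server and client can regenerate it
# from just (seed, tint), so no texture bytes need to be transferred.
rock = tint(grey_noise(4, seed=42), rgb=(140, 110, 80))
print(rock[0][0])  # a brownish pixel derived from the first noise value
```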