Upload cache still not disabled!


It looks like the upload cache is still active, or the client thinks the upload isn't a new object.
As a result, finding a problem is three times as difficult.

Disable the upload cache completely for the entity upload function; NEVER use a cached object when you upload.


Technically speaking, even if it were updated in your own client (clear cache on upload), I'd actually like the behavior to stay as it is:

Otherwise you'd really run into an issue if people have already seen your uploaded object: the "cached object issue". Everyone else would be seeing the old object, and you'd be seeing the newer one. This would only encourage bad cache use, where content producers EXPECT clients to reset their cache every now and then. Which is bad, BAD, BAD in the world of the web, and especially with the sizes of the models we handle here. Assets should be updated accordingly, and not rely on what the end user does.

You can't really tell whether someone is rezzing a new object or an existing one. Instead, content producers should make sure that their versioning is up to date for the content, and that the cache is only updated when necessary or when content has expired. See the HTTP standards on caching.

Instead, use my suggested method of working around the cache: this way you really do make sure that the version of the model and assets is consistent for -everyone-.

Let me do the math for you:

  • You host a single entity worth 5 MB that is very popular.
  • Concurrently you have about 50 unique people using it in 20 different places. On average, about 5-10 unique people see this entity per place, per day.
  • Unique people do not have the model in cache.
  • 50 x 20 x 7.5 (avg) x 5 MB = 37,500 MB a day, x N, where N is the percentage of unique people who reset their cache daily.
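The estimate above works out like this as a quick back-of-the-envelope script (7.5 is just the midpoint of the 5-10 viewers-per-place range from the bullet points):

```python
# Rough bandwidth estimate for uncached downloads of one popular entity.
MODEL_SIZE_MB = 5          # size of the hosted entity
CONCURRENT_USERS = 50      # unique people using the entity
PLACES = 20                # different places it is rezzed in
VIEWERS_PER_PLACE = 7.5    # midpoint of 5-10 unique viewers per place, per day

daily_mb = CONCURRENT_USERS * PLACES * VIEWERS_PER_PLACE * MODEL_SIZE_MB
print(daily_mb)  # 37500.0 MB per day, before scaling by N

# N = fraction of unique people without the model in cache
# (e.g. because they were told to clear their cache):
for n in (0.1, 0.5, 1.0):
    print(f"N={n}: {daily_mb * n / 1024:.1f} GB/day")
```

Even at N=0.1 that is several gigabytes a day for a single 5 MB asset, which is why relying on users clearing their cache scales so badly.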

The percentage of unique people (N) without the model in cache would increase, especially if people are expected to clear their cache once in a while just to update one entity.

Basically, with that I'd burn 30% of my entire monthly bandwidth for my cloud service in a single day. I don't have the money to keep up anything like that.

With all that, now multiply that with the amount of content you may produce…

As a compromise: local files should definitely be loaded without cache, but should have a special indicator showing that they are local only (and not visible to anyone else).


That method with ?v=0 is terrible. You could rename the file every time, but that's just the complexity I try to avoid. With webpages you always overwrite the same file too, because that's logical.

You cannot create, for example, index.html and then need it to be index1.html the next time, etc.

High Fidelity needs to ignore the cache when you upload an object with the same name. Most of the time it seems to work, but there are still too many misses, and then you never know if it's the object, a bug, or High Fidelity.

But absolutely try to avoid the ?v= track, because you lose count. I could try to use date/time instead, but again, that makes things extremely complex.

It would be nice if High Fidelity could upload items locally from disk for testing, with cache disabled(!), so you can test every time with the same name until it's right.

?v= is the most hated solution here, unless High Fidelity starts doing it automatically based on date/time, so High Fidelity would automatically add something like ?v=20150703-0040 to the URL.
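That automatic date/time suffix could be as simple as the following sketch (hypothetical helper, not something High Fidelity does today; the URL is made up):

```python
from datetime import datetime, timezone

def add_version_param(url: str) -> str:
    """Append a ?v=YYYYMMDD-HHMM cache-buster in the format suggested above."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}v={stamp}"

print(add_version_param("http://example.com/models/chair.fbx"))
# e.g. http://example.com/models/chair.fbx?v=20150703-0040
```

Since the stamp changes every minute, each re-upload followed by a fresh stamp gives caches a URL they have never seen, without anyone having to keep count.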

@Menithal We need to find an easy solution for this. Right now it sometimes ends in hair-pulling moments. It's already a lot of steps to test an object.


@Richardus.Raymaker see my update on the issue; I added some math involving cache resetting.

Answers to your points:

  • .html pages are usually controlled by server-side Cache-Control mechanisms, which are set quite low by default. Files, however, tend to be static, and not as large. The issues come when you have to deal with bandwidth: it is not scalable once you start thinking about really high-traffic stuff. (And I know bandwidth, having dealt with a service for last year's Brazil World Cup event; man, the Brazilians love their football.)

  • Parameter versioning (?param) is a trick used especially on high-traffic sites to ease up on bandwidth. You really have to have more traffic or more bandwidth use to see its effect, especially with Cache-Control.

  • With the v count you can always just “asdf it” or just keep appending to it. You can't really lose count unless you consistently keep removing the object from the scene. You don't even need an =. It's seriously just effortless, for example:

    or my favorite:
    ?fckfckfckfckfckfckfckfckfckfckfck[more fkc here]Etcetera
    or do it like this guy does after the ?

And then, once done with your changes, just call it ?V=1 or ?V=2, whichever.

You seriously do not have to use versioning: it's just an example of randomization.

  • You can't really tell whether someone is uploading an existing or a non-existing object. See it from another user's point of view.
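The "just append something" approach above boils down to this kind of sketch (for a static file, the server ignores the query string anyway; only the cache cares that the URL looks new):

```python
import random
import string

def bust_cache(url: str) -> str:
    """Append a throwaway random query string so caches treat the URL as new."""
    junk = "".join(random.choices(string.ascii_lowercase, k=8))
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}{junk}"

# Each call yields a distinct URL for the same file:
print(bust_cache("http://example.com/models/chair.fbx"))
print(bust_cache("http://example.com/models/chair.fbx"))
```

Whether the suffix is a counter, a date, or keyboard mashing makes no difference to the cache; uniqueness is the only thing that matters.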

Having the client add this randomization by default on upload would be a BAD idea, though. It should be a manual thing to do, in my honest, professional opinion. People would simply rely on it too much, causing bandwidth waste in areas where we shouldn't be wasting it. Otherwise I would really love to lock my assets to my domains.

I seriously do not get how it can be so hard to upload the object, then modify its model field to update the path.

Here are a couple of solutions:

  1. Allow local files to be rezzed, with, for example, blue borders to indicate a “local only” model.
  2. Have Hifi actually start listening to the Cache-Control headers.
  3. Add an “Update model” button next to the model URL path that adds random parameters to the end of the URL. (That will ease the bloody fingers.)
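For point 2, "listening to Cache-Control" means the client only reuses a cached copy while the server-declared lifetime allows it. A minimal sketch of that freshness check (a hypothetical helper, not High Fidelity code; real HTTP caching has more directives than this):

```python
import re

def is_fresh(cache_control: str, age_seconds: float) -> bool:
    """True if a response cached age_seconds ago is still usable per Cache-Control."""
    if "no-store" in cache_control or "no-cache" in cache_control:
        return False  # no-cache means revalidate before use; treat as a miss here
    match = re.search(r"max-age=(\d+)", cache_control)
    if match is None:
        return False  # no explicit lifetime: revalidate with the server
    return age_seconds < int(match.group(1))

print(is_fresh("max-age=3600", 30))    # True: cached 30 s ago, 1 h lifetime
print(is_fresh("max-age=3600", 7200))  # False: 2 h old, stale, refetch
print(is_fresh("no-cache", 1))         # False: always revalidate
```

With this in place, content producers control freshness from the server side, which is exactly the "versioning stays with the producer" behavior argued for above.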


That's the biggest problem right now in High Fidelity. You need to remove the object every time:

  • It's not loaded or showing in-world.
  • It's at some strange coordinate (0,0,0?), so you delete it and try again.

So maybe things are harder than needed because of some other annoying bugs.
But if I need to use the ?, then I would use the datetime format for myself. I only need to solve the seconds (Windows shows no seconds in the taskbar).

On the other side, as you say, it would be better to rename the file every time before uploading it into High Fidelity. But that's a disaster for testing. I like the file:// idea, if it doesn't use the cache in that mode, so you at least have a quick way of testing. After testing, you do it your way and add some version number to the file name before upload.

I cannot find the page again, but the person who suggested local upload for entities needs to make a worklist entry, but also without using the cache in local mode. Or we need a good preview system.

I need to think about how I can do this easily. It already takes too many steps.

Sorry, I'm not always good at writing things.