HiFi Server Avatar Limit


#1

So according to this, are we estimating 5 avatars per 1 GB of RAM now?


#2

@Cracker.Hax those metrics are completely CPU bound; RAM has very little impact on the servers.
In this case, the bottleneck will usually be the audio mixing on the CPU.

Also keep in mind that these metrics will greatly depend on how the servers are used; it’s not all about the avatars.
If you have thousands of objects moving around and many audio injectors, that will change how many people the server can sustain.
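
To give a rough sense of why the mixing dominates, here is a back-of-the-envelope sketch (my own illustrative numbers, not an official cost model): every listener needs their own personal mix of all the other audible sources, so the mixer’s per-frame work grows roughly quadratically with avatar count.

```javascript
// Back-of-the-envelope sketch (illustrative, not an official cost model).
// Each of N listeners needs a personal mix of the other N-1 avatars plus
// any audio injectors, so per-frame work grows roughly with N * (N - 1).
function mixesPerFrame(numAvatars, numInjectors) {
    var sourcesPerListener = (numAvatars - 1) + numInjectors;
    return numAvatars * sourcesPerListener;
}

print(mixesPerFrame(15, 0));   // 210 stream-combines per audio frame
print(mixesPerFrame(30, 0));   // 870 -- doubling avatars ~quadruples the work
print(mixesPerFrame(15, 20));  // 510 -- injectors add to every listener's mix
```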


#3

How do they do with little or no sound? How were these avatar limits calculated? Is there a good metric for calculating the avatar limit? Are we looking at the number of cores, or what?

Could somebody post the test results used to come up with this figure, or is it an estimate?

Is there a script somewhere to test the capability of hifi on a server?


#4

Those are estimates based on our Ops team’s experience spinning up different types of servers for different-sized events. Those numbers are also subject to change as the product evolves; hopefully they’ll go up.

There aren’t any good metrics, unfortunately, since it vastly depends on use.

For example, let’s say people in desktop mode watch a movie together via a stream on a web entity in a static scene.
There will only be very limited audio streaming/mixing, and very limited movement as well.

And now let’s take another scenario where people are having a dance party and are fully tracked with Vive trackers. There’s gonna be a lot of audio streaming/mixing of all the different groups talking to each other and a lot more joint streaming of the avatars moving around.

The same server in the first scenario will support a lot more people.

In the past, we’ve successfully used scripts that spawn many NPCs which constantly move around and talk, to stress-test the worst-case scenario.


#5

Do web entities require the stream to go through the hifi server, though? Seems to me the stream goes from some streaming web server directly to the user. In that case, wouldn’t the bottleneck likely be the user’s own resources rather than the server?

This would be a good test for the minimum number of avatars a server could support, whereas the maximum might be avatars moving around but not streaming audio or motion-tracking data.

Are these scripts available somewhere?


#6

@c back when hifi started there was talk of ten thousand people at an event together in hi-fi. We seem to be down to 15, and last-gen Second Life does 80 in a sim. What went wrong?


#7

Yeah, they say 80, but it gets super laggy at about 40 anyway. I suspect a lot of sims are on old hardware left over from when they hosted 10 times as many servers as they host now.


#8

I just worry cos I have that hifi T-shirt from the 100-person load test, then I see the 15-person server and worry that when we come out of beta it will be down to 1 person per domain.
Ya know


#9

Correct, that’s why I chose that example: it doesn’t impact the server.

There are a few in the repo.
Here is one, although it might need to be dusted off: https://github.com/highfidelity/hifi/blob/master/script-archive/acScripts/BetterClientSimulationBotFromRecording.js
You can tweak the recordings, models, and locations to use inside.
Basically, just add 50 instances to your DS persistent scripts and it should spawn 50 NPCs, each randomly choosing traits from what you provided.
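
For illustration, this is roughly the kind of section you’d tweak; the identifiers below are hypothetical stand-ins, not the actual variable names in BetterClientSimulationBotFromRecording.js:

```javascript
// Hypothetical sketch of the tweakable section of a bot AC script; the real
// identifiers live in BetterClientSimulationBotFromRecording.js itself.
var RECORDING_URLS = [   // .hfr recordings a bot can replay (example URLs)
    "http://example.com/recordings/walk.hfr",
    "http://example.com/recordings/talk.hfr"
];
var MODEL_URLS = [       // avatar models to pick from (example URLs)
    "http://example.com/avatars/bot_a.fst",
    "http://example.com/avatars/bot_b.fst"
];
var SPAWN_CENTER = { x: 0, y: 0, z: 0 };  // where the bots appear
var SPAWN_RADIUS = 10;                    // meters of random scatter

// Each assigned-client instance picks its own traits at random, so adding
// 50 instances to the persistent scripts yields 50 varied NPCs.
function pickRandom(array) {
    return array[Math.floor(Math.random() * array.length)];
}

var recording = pickRandom(RECORDING_URLS);
var model = pickRandom(MODEL_URLS);
var position = {
    x: SPAWN_CENTER.x + (Math.random() * 2 - 1) * SPAWN_RADIUS,
    y: SPAWN_CENTER.y,
    z: SPAWN_CENTER.z + (Math.random() * 2 - 1) * SPAWN_RADIUS
};
```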


#10

Don’t forget that those servers are running all the services on the same machine. They aren’t even dedicated servers; they’re almost all virtual servers.

The High Fidelity server architecture is designed from the ground up to be distributable.
For the 100-person event, each service was running on its own dedicated server.
For some of those services that was a bit overkill, but for the audio mixer, which was the bottleneck, we got something like a 32-core machine.

Those ten-thousand-person events would happen in the same domain, but not on a single server; that’s impossible with the current tech unless we degrade the data so badly that the experience wouldn’t be worth it.

But to get there, separating the different services is not enough; we need something more.
That’s why the architecture also planned for a future splitting of services. What I mean by that is that instead of having one audio mixer for the entire domain, we could, for example, analyze everyone’s positions to determine the different groups of people interacting and assign one mixer per group. Those mixers would also communicate with each other so that you could still hear the background noise of the other groups, but instead of getting X audio streams from each person in a distant group, your mixer would only get one pre-mixed stream from the mixer in charge of that group.
This has two advantages: 1) it reduces the number of mixes you need to do (which are very expensive), and 2) it scales; you can just spin up new servers to handle an increased load.
And the same goes for most of the services.
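
To make that concrete, here is a toy sketch of the grouping idea (purely illustrative, not the real mixer’s code):

```javascript
// Toy sketch of per-group audio mixing (illustrative only).
// Cluster avatars by proximity, give each cluster its own mixer, and let a
// listener receive full streams only from their own group plus a single
// pre-mixed stream per other group.
var GROUP_RADIUS = 20;  // meters; an assumed clustering threshold

function distance(a, b) {
    var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

function groupByProximity(avatars) {
    var groups = [];
    for (var i = 0; i < avatars.length; i++) {
        var placed = false;
        for (var j = 0; j < groups.length; j++) {
            if (distance(groups[j].center, avatars[i].position) < GROUP_RADIUS) {
                groups[j].members.push(avatars[i]);
                placed = true;
                break;
            }
        }
        if (!placed) {
            groups.push({ center: avatars[i].position, members: [avatars[i]] });
        }
    }
    return groups;
}

// Streams a single listener receives: full streams from their own group,
// plus one pre-mixed stream from each other group's mixer.
function streamsPerListener(groups, myGroup) {
    return (myGroup.members.length - 1) + (groups.length - 1);
}
```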

This docs page talks a bit about all of this.

I hope that answers some of your questions. Feel free to ping me with more questions if I’ve missed anything or wasn’t clear on some points.

Sorry for the big block of text, here, have a kitten:


#11

Good to hear this is still happening; I thought it had gone the way of the rest of the things that got me excited about hifi. More people in one location is always better; this instancing solution seen elsewhere doesn’t cut it. People want to be with people.


#12

Exactly! That’s what has kept me “around”: waiting, hoping, looking.


#13

Topical: http://www.dieselsweeties.com/ics/433/

Judas basically wants to know how many MBs HiFi can process. :grinning:


#14

What would those things be if you don’t mind me asking?


#15

Face tracking; low-latency sound (for a while me and Kev Thomas hoped we could jam around the world); voxel models of infinite detail (remember we had terrain that we could tunnel into?); the idea that if people weren’t using their domains, that processing could be harnessed to fuel domains with people in them; using mobile devices to run tasks (remember the idea that a flock of birds could be run on a phone?). Remember when hifi started, how it had text chat and we could all communicate with each other? I hoped that commerce would be all in-world rather than on a website; you know, I bought “shop” and “shopping” as domain names thinking, ooh, I got something worth having with those, lol. I would like to be able to teach in hifi. I still don’t think it’s possible. I teach 3D modelling; I would still be better off screen sharing in Skype. You know the hairdressing video? I thought we could style each other’s hair. Ooh, remember when we briefly had hair we could swish about…
It’s like there was a vision that started off with unlimited potential and ended up as Second Life without chat.
I’m still here; I enjoy building.
But I’m weird: I want hifi to appeal to everyone. I want its users to wither and die in the real world because it’s so immersive.
:smiley:


#16

And get paid in HiFicoin for it.


#17

Hi Judas,

I agree with you on most of that.
Btw: you say you can’t do training in HF and that you’d be better off with Skype and screen sharing. Why not use the technique Darlingnotin showed (in a video) for screen sharing in HF?

-michel


#18

I figured I’d chime in on this as well. Part of why Second Life’s sims can support 80 users (actually, I think it can support 100) is that some data is not handled by the sim itself. Since Second Life uses canned animations, this cuts out one of High Fidelity’s arch-nemeses: avatar kinematics. Even in desktop mode, avatars in Hifi transmit their joint updates to the server, which then has to spit them back out to everyone else. That’s because each client computes its own animations locally instead of just telling everyone else which named animation is playing (which may not be globally available anyway). And unlike audio, which can be merged into a single stream, as far as I’m aware kinematic data is just data: the more of it there is, the more of it you have to dish out to everyone else.

That’s where I’m not sure how the issue can be resolved, since it’s an overhead issue for both the server and the clients involved. My only guess is that the avatar mixer(s) would have to decide what data is really important to transmit to user X: does it really matter if they know user Y waved their hand 200 m behind them? If so, it could work sorta like my tail script does: it generates a new target rotation every x milliseconds but smooths the animation every frame. That approach could reduce how many packets are sent while still keeping the experience smooth.
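
To illustrate that throttle-and-smooth idea, here’s a minimal sketch in the style of a HiFi script; it assumes the Quat and Script APIs behave as I remember, and it’s modeled on the tail-script trick, not on anything the avatar mixer actually does:

```javascript
// Minimal "update rarely, smooth every frame" sketch (illustrative only;
// modeled on the tail-script idea, not the avatar mixer's real behavior).
var UPDATE_INTERVAL_MS = 100;  // pick a new target ~10 times per second
var lastUpdate = 0;
var targetRotation = Quat.fromPitchYawRollDegrees(0, 0, 0);
var currentRotation = Quat.fromPitchYawRollDegrees(0, 0, 0);

Script.update.connect(function (deltaSeconds) {
    var now = Date.now();
    if (now - lastUpdate > UPDATE_INTERVAL_MS) {
        // Only here would a new rotation go out over the network.
        targetRotation = Quat.fromPitchYawRollDegrees(0, Math.random() * 360, 0);
        lastUpdate = now;
    }
    // Every frame, blend a little toward the sparse target so the
    // motion on screen stays smooth despite infrequent updates.
    currentRotation = Quat.slerp(currentRotation, targetRotation, 0.1);
});
```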

Of course, I am not an expert on this, so if anyone knows more about the avatar mixer’s limitations and the solutions in place, feel free to correct me.


#19

Did you ever see a teacher try to work a video recorder? To use VR as a teaching tool it has to be intuitively simple, not something that will take 3 hours to set up and still be in the hands of the gods as to whether it works on the college’s computers through their firewall.
My college spent a fortune putting in interactive whiteboards, then, because they were effectively useless, replaced them all with big LCD touch screens, which are still a total waste of time. All that bloody tech needs is a remote you can press to advance to the next PowerPoint slide, but they don’t have that; maybe you can tap the screen to advance, but maybe it will do one of a hundred other weird things. The upshot is they are used like projectors.
The problem is the people buying the tech have no idea what the teachers want, and the salesperson is so persuasive that we end up with crap system after crap system, because no one ever asks the users what they want to use it for.
Which is kinda the problem developing here.


#20

Very much my own experience in academia. Add to that, even if you know exactly what you want/need, the chances of finding a supplier who provides something remotely useful are depressingly small. :persevere:

It’s bullet-pointless feature lists all the way down!