Detailed Server Stress Test results


Hello everyone.

So yesterday we performed the stress test of our dev server. I would like to use this opportunity to thank everyone who supported us during this event. Thank you also for the nice conversations and the many insights. Sadly, we did not reach a very high number of client connections, but we will repeat this test periodically, and maybe the devs will show us some Twitter love if we show that we’re doing a serious job here.

Now, I would like to split this report into three sections: server hardware performance, bandwidth, and the conclusion.

Server Hardware Performance

Server Specs

Operating System: CentOS 7
Supermicro 504-203B
Intel Atom, 4x 2.4 GHz (Turbo 2.6 GHz)
8 GB RAM
1 TB SATA drive
Gigabit connection
Stack version: 12 April 2015

We started out by testing the system’s performance against just the connected clients. Most of the participants were using the voice chat at almost any given time. When we peaked at 11 connected clients, the system stabilized at a healthy 24-25% average load. However, once 6 clients had connected, each additional user seemed to increase the system load less and less. From there on, all we could notice were very short spikes when users connected. RAM usage was negligible during the entire test.

Then we moved on and told everyone to spawn complicated models and physics simulations on the server.

This resulted in the server slowly rising to 75% average load, which is where we stopped. We would like to mention that the server was at this point stuffed full of popcorn machines, billiard tables, spinning globes and moving parts absolutely everywhere, combined with a few very big and detailed models. So this was far more moving parts than you would see on an actual server, and yet the Stack Manager neither crashed nor slowed down, and the hardware was able to take even more.

We then returned to normal usage, and the average load gradually dropped back to around 25%.

Here is another picture, this time from Monitorx. Note that the first big spike is from us compiling the Stack Manager earlier on.


Now the bandwidth told a different story.

As you can see here, we peaked at around 21 Mbit/s average load. That was when we performed the large physics test. But even when we removed all the models and simulations, usage remained between 17 and 20 Mbit/s.

We then asked everyone to mute their microphones for a few moments, which produced another steep drop in bandwidth usage, just like the earlier drop when everyone went silent while busy deleting all the spawned objects and we reset the server.

The server transmitted a total of 17.40 GB of data during the test.


Now, with all this being said we come to these conclusions:

  1. Normal clients that just move around and commute, as well as chat with others, put very little stress on the system.
  2. Physics simulations put tremendous stress on the system.
  3. The voice communication uses far more traffic than what we consider normal. Systems like TeamSpeak, which use very high quality audio codecs, use around 10% of what Hifi uses. This seems completely unnecessary, considering that most normal users do not own microphones that would even make an audible difference with such high quality codecs.
  4. Models and physics simulations do not produce a lot of traffic.
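A rough back-of-envelope on point 3, assuming the numbers reported above (17-20 Mbit/s sustained with 11 connected clients); the codec bitrates used for comparison are typical published figures, not measurements from this test:

```python
# Estimate per-client voice bandwidth from the figures in the report.
# Codec bitrates below are typical published values (assumptions), not
# measurements from this stress test.

MEASURED_TOTAL_MBIT = 18.5      # midpoint of the observed 17-20 Mbit/s
CLIENTS = 11                    # peak connected clients during the test

per_client_kbit = MEASURED_TOTAL_MBIT / CLIENTS * 1000

# Typical voice codec bitrates for comparison (kbit/s)
typical_codecs = {
    "Opus (high-quality voice)": 64,
    "Speex (wideband)": 28,
}

print(f"Observed per client: ~{per_client_kbit:.0f} kbit/s")
for name, kbit in typical_codecs.items():
    print(f"{name}: {kbit} kbit/s (~{kbit / per_client_kbit:.0%} of observed)")
```

By this estimate each client accounts for well over 1.5 Mbit/s, which is why even a conservative codec would be a drastic improvement.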

We hope that you found this report useful and that it will help you when choosing a server or making your calculations.

-The IndigoFuzz Team


Excellent work. Thank you so much. How does an Atom proc compare to an i7?

Am I correct in the assumption that the average user would not be able to host a server from a home Internet bandwidth?

Awesome report guys!


“Grand Theft Auto V broke industry sales records and became the fastest-selling entertainment product in history, earning US$800 million in its first day and US$1 billion in its first three days. Considered one of the most significant titles of the seventh generation of console gaming,”


The Atoms are a tad slower, but light years better in energy consumption. That is not hard, though; the i7s are known to be energy guzzlers. Still, these Atoms perform very well.

Sadly, I fear that a home user could only serve a few clients, because the upstream speed is usually heavily throttled. Even my VDSL 50000 only has 10 Mbit/s up, so I’d say it could serve 5 clients at most. But again, that’s mostly due to the monstrous traffic the voice communication creates.
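The 5-client estimate can be sanity-checked against the numbers from the test. A minimal sketch, where the per-client figure is derived from the measured 17-20 Mbit/s at 11 clients and the uplink is the 10 Mbit/s VDSL example:

```python
# Estimate how many clients a given home uplink could serve, using the
# per-client bandwidth observed in the stress test.

UPLINK_MBIT = 10.0              # example VDSL 50000 upstream
TOTAL_MBIT = 18.5               # midpoint of the observed 17-20 Mbit/s
CLIENTS_IN_TEST = 11            # peak connected clients during the test

per_client = TOTAL_MBIT / CLIENTS_IN_TEST        # ~1.7 Mbit/s each
max_clients = int(UPLINK_MBIT / per_client)

print(f"~{per_client:.1f} Mbit/s per client -> "
      f"about {max_clients} clients on a {UPLINK_MBIT:.0f} Mbit/s uplink")
```

Which lands right at the 5-client ceiling mentioned above.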

I’m glad you enjoyed the report, more to come.


Right. I am not entirely sure what you mean by that. Are you saying that GTA V is a traffic simulator and therefore generates lots of traffic? :stuck_out_tongue:


I understood your use of “traffic” as users, and the physics simulation in GTA V was the reason for its success. I consider accurate physics and sound key to a next-gen environment, and as such I would choose quality over quantity. I don’t want to downgrade the experience to save a few quid, or to make it “more like we all do in Second Life”. Compromise is the road to bland.


…and bland is the road to death.

Alpha has no place for ‘conclusions’. This is the time to push the envelope, to think without limitations, to experiment and be free of convention. This is where potential and ground breaking ideas are honed - where VR technology will lead.


Of course I can draw conclusions about this version of the server. The point is to identify flaws; we found plenty of other bugs too.


Sure, but as I pointed out, while it does not use a lot of traffic, it does stress the processors. It’s only a problem if you fill your server with it, though.


Why not have the voice chat use the Opus codec?


As long as it is set to a reasonable bitrate, I would agree.

Speex is another option.


Why not just use Skype?


Because Skype is closed source. And because Skype sucks.


Explains my mother’s cooking.


Thank you for that stress test. I too noticed the high bandwidth consumed by voice. Compression is a great idea, but it comes with a latency tradeoff. So when considering audio codecs, the primary consideration will be how much latency they introduce. There are low-latency codecs out there, but 5-20 ms of conversion time seems fairly high given the ultra-low-latency requirements of HF.

Probably better would be some way to stop streaming silence.
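The simplest form of silence suppression is an energy gate on each outgoing audio frame: only transmit frames whose RMS level exceeds a threshold. A minimal sketch of the idea; the threshold and frame size here are arbitrary illustration values, not anything HF actually uses:

```python
import math

def rms(frame):
    """Root-mean-square level of a PCM frame (samples in -1.0..1.0)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def should_send(frame, threshold=0.01):
    """Energy gate: transmit the frame only if it is louder than the threshold."""
    return rms(frame) > threshold

# Example: a near-silent frame vs. a speech-like frame (~10 ms at 48 kHz)
silence = [0.001] * 480
speech = [0.1 * math.sin(i / 10) for i in range(480)]

print(should_send(silence))   # quiet frame: gate closed, nothing sent
print(should_send(speech))    # louder frame: gate open, frame sent
```

Real voice-activity detectors add hysteresis and a hangover period so words are not clipped at their edges, but even this crude gate would eliminate the bandwidth cost of idle, open microphones.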


Please look at:


I noticed that the entity server is logging all entity edits, which means it logs A LOT when multiple clients are sending simulation updates. The extra logging is currently on by default but can be disabled in the web-based domain settings UI. It is possible that this was a significant portion of the entity server’s CPU load during the stress test.


Thank you, that is useful information :slight_smile: