So yesterday we performed the stress test of our dev server. I would like to use this opportunity to thank everyone who supported us during this event, and thank you also for the nice conversations and a lot of insight. Sadly, we did not reach a very high number of client connections, but we will repeat this test periodically, and maybe the devs will show us some Twitter love if we show that we're doing a serious job here.
Now, I would like to split this report into three sections: server hardware performance, bandwidth, and the conclusion.
Server Hardware Performance
Operating System: CentOS 7
CPU: Intel Atom, 4x 2.4 GHz (Turbo 2.6 GHz)
Storage: 1 TB SATA drive
Stack version: 12 April 2015
We started out by testing the system's performance against just the connected clients. Most of the participants were using the voice chat at almost any given time. When we peaked at 11 connected clients, the system stabilized at a healthy 24-25% average load. However, after about 6 clients had connected, each additional user seemed to increase the system load less; from there, all we noticed were very short spikes when users connected. RAM usage was negligible during the entire test.
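For anyone who wants to reproduce such load readings, here is a minimal sketch of how a CPU utilization percentage can be derived by diffing two readings of the `cpu` line in /proc/stat on a Linux box like ours. This is our own illustration, not the tooling we actually used, and the two sample lines below are made up for demonstration.

```python
def cpu_utilization(sample1: str, sample2: str) -> float:
    """Percentage of CPU time spent non-idle between two 'cpu' lines from /proc/stat."""
    def parse(line):
        fields = [int(x) for x in line.split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait both count as "not busy"
        return sum(fields), idle
    total1, idle1 = parse(sample1)
    total2, idle2 = parse(sample2)
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

# Hypothetical samples, as if taken a few seconds apart:
before = "cpu 1000 0 500 6000 100 0 0 0 0 0"
after  = "cpu 1150 0 560 6580 110 0 0 0 0 0"
print(f"{cpu_utilization(before, after):.1f}%")  # prints 26.2% for these samples
```

The aggregate `cpu` line already sums all cores, so the result is an average over the whole CPU, which is the same kind of figure as the 24-25% above.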
Then we moved on and told everyone to spawn complicated models and physics simulations on the server.
This caused the load to rise slowly to a 75% average, which is where we stopped. We would like to mention that the server was at this point stuffed full of popcorn machines, billiard tables, spinning globes and generally moving parts absolutely everywhere, combined with a few very big and detailed models. So this was way more moving parts than you would see on an actual server, and yet the Stack Manager neither crashed nor slowed down, and the hardware could have taken even more.
We then returned to normal usage and the average load gradually settled back to around 25%.
Here is another picture, this time from Monitorx. Note that the first big spike is from us compiling the Stack Manager earlier on.
Now the bandwidth told a different story.
As you can see here, we peaked at around 21 Mbit/s. That was when we performed the large physics test. But even after we removed all the models and simulations, the usage remained between 17 and 20 Mbit/s.
We then asked everyone to mute their microphones for a few moments, which produced another steep drop in bandwidth usage, just like the earlier drop when everyone fell silent while they were busy deleting all the spawned stuff and we reset the server.
The server transmitted a total of 17.40 GB of data during the test.
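As a quick back-of-the-envelope check (our own arithmetic, not part of the test), the sustained rate, the client count and the total volume fit together plausibly. The duration below is implied from those figures, not measured:

```python
# Figures from the report; GB and Mbit treated as decimal (10^9 bytes, 10^6 bits).
total_gb = 17.40         # total data transmitted during the test
sustained_mbit = 19.0    # rough midpoint of the sustained 17-20 Mbit/s range
clients = 11             # peak number of connected clients

per_client_mbit = sustained_mbit / clients               # rough share per client
implied_hours = total_gb * 8000 / sustained_mbit / 3600  # if 19 Mbit/s held throughout

print(f"~{per_client_mbit:.1f} Mbit/s per client, implied duration ~{implied_hours:.1f} h")
```

So roughly 1.7 Mbit/s per connected client, and a test length on the order of two hours if the sustained rate held for most of it.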
Now, with all this being said we come to these conclusions:
- Normal clients that just move around, as well as chat with others, put very little stress on the system.
- Physics simulations put tremendous stress on the system.
- The voice communication uses way more traffic than what we consider normal. Systems like TeamSpeak, which use very high-quality audio codecs, use around 10% of what Hifi uses. This seems completely unnecessary, considering that most normal users do not own microphones that could even make an audible difference with such high-quality codecs.
- Models and physics simulations do not produce a lot of traffic.
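To put the voice-traffic point into numbers: this is again our own arithmetic, and the per-client figure is an assumption derived from splitting the sustained 19 Mbit/s across the 11 peak clients, not a direct measurement of the voice stream alone.

```python
per_client_mbit = 19.0 / 11   # rough per-client rate during normal use (assumption)
teamspeak_fraction = 0.10     # "around 10% of what Hifi uses", per our comparison
teamspeak_kbit = per_client_mbit * 1000 * teamspeak_fraction
print(f"~{teamspeak_kbit:.0f} kbit/s per client")
```

That lands in the range of one to two hundred kbit/s per client, which would be in the ballpark of a high-quality voice codec stream plus protocol overhead, so the 10% comparison looks consistent with the per-client rates we saw.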
We hope that you found this report useful and that it will help you when choosing a server or making your calculations.
-The IndigoFuzz Team