High Fidelity Hosting - 6 spots left


#1

Hello all,
I am ready to move on to the next phase of a hosting strategy for High Fidelity, and I have 8 fully managed slots available. The first two are free; however, there is a catch to the free slots: you need to be actively developing your domains and attracting users to them as best as the platform allows. The other six are available on a first come, first served basis at a monthly price of $20 USD. Though I do hope you try to be active with the domain, I am, after all, looking for stats (more below).

The systems will be on Linux with the specs below to start. The specs are subject to change and will be sized according to what the gathered stats show is needed to make High Fidelity run smoothly.

  • 4GB memory
  • 2 vCPU @ 3.7GHz with 4 threads
  • 50GB disk space
  • 100 Mbps unmetered

Now you may be wondering: what is my goal?
I plan to get a baseline of High Fidelity’s resource usage: memory, CPU, bandwidth, etc. Once I have enough of this data, it will be used to create a fully scaling cloud platform for HiFi. I also plan to get user feedback from this beta pool and develop control panels and systems to aid in the ease of use and management of domains. (This beta pool may grow, but until things become automated it will remain relatively small.)
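For the curious, the baseline collection I have in mind is nothing fancier than periodically sampling the HiFi processes. A minimal sketch; the process names and log path are just examples, adjust for your install:

```bash
#!/bin/bash
# Sample CPU and memory of the HiFi server processes once a minute and append
# to a CSV. Process names (domain-server, assignment-client) and the log path
# are assumptions; adjust them to match your install.
LOG=/var/log/hifi-usage.csv
echo "timestamp,pid,comm,pcpu,pmem,rss_kb" >> "$LOG"
while true; do
    ts=$(date --iso-8601=seconds)
    ps -C domain-server,assignment-client -o pid=,comm=,pcpu=,pmem=,rss= |
    while read -r pid comm pcpu pmem rss; do
        echo "$ts,$pid,$comm,$pcpu,$pmem,$rss" >> "$LOG"
    done
    sleep 60
done
```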

I would like to note that this is a beta offering on beta software, and there will probably be bugs and downtime. While I will make every reasonable effort to keep your domains online, there is no guarantee, warranty, or SLA offered at this time.

Message me if interested.

status

6 of 8 left


#2

@Debs. ^

…


#3

@Midnight - You will run out of memory fairly fast on a domain with just 4GB, as the processes will start to eat up more and more RAM as time goes by. I had thought about offering a hosting option at one point, but the specs needed to handle High Fidelity with an active domain are more in the range of 8GB.

With those specs you can run a small domain for the average user for a little while, though.


#4

@Coal - Thanks Coal, that is precisely what I aim to learn from this: the resource requirements. I have had my domain Midnight running for 6 months now with half the resources listed above, and 8+ people have been in it just fine. While the HiFi processes do expand to consume whatever memory is in the system, I do not believe this is inherently a problem; if it becomes one, I can limit it a number of ways or increase it. Any program, if allowed, will use whatever memory it is allowed to use and is capable of using. If the system needs that memory for something else, it will reclaim it.
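To give one concrete example of limiting it: on a systemd host the process can be started inside a transient unit with a hard memory cap. The unit name, binary path, and cap below are made up for illustration:

```bash
# Run an assignment-client under a transient systemd unit with a hard 3G cap.
# Unit name, binary path, and the cap itself are examples only.
# (On newer systemd the property is called MemoryMax instead of MemoryLimit.)
systemd-run --unit=hifi-assignment-client \
    --property=MemoryLimit=3G \
    /opt/hifi/assignment-client
```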

What will be interesting from this beta run is learning the minimum required resources to host N people doing X activity.


#5

Seems like audio has the most impact and is the first to suffer, with some people not hearing each other or not hearing the in-world sounds. I think the server starts to throttle when load goes up.


#6

Yea it does, and there’s an even worse offender: the Octree Send Threads. I have had several of these tie up 2 cores (might have been more if the VM had more).
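If anyone wants to see which threads are the hogs on their own box, something like this does the trick (assuming the process is still named assignment-client):

```bash
# Live per-thread view of the assignment-client processes (-H shows threads).
top -H -p "$(pgrep -d, assignment-client)"

# One-shot listing of the busiest threads by CPU, with thread names.
ps -L -C assignment-client -o pid=,tid=,pcpu=,comm= | sort -k3 -nr | head
```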


#7

The problem now is that the code is not as well optimized to clean up after itself on Linux, so you sort of have to schedule restarts of the binaries once a day to keep it happy.
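For anyone setting that up, the nightly restart can be as simple as a cron entry like this (the service names are examples; restart however your install actually runs the processes):

```bash
# Root crontab: restart the HiFi services every night at 04:00.
# Service names are assumptions; adjust to your init setup.
0 4 * * * systemctl restart domain-server assignment-client
```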

Which raises a question, @Midnight: are you running a compile server like Jenkins, packaging the binaries, and then automatically deploying them to the servers so everything is always kept up to date?


#8

Not yet. After more people come onto this I probably will; I already have the majority of the software written to do it. There is no point in having the binaries compiled per machine.


#9

Awesome stuff @Midnight! Do you plan to publish your results?


#10

As far as making a package builder goes… at least for Ubuntu Xenial… it’s not all that difficult; I’ve been building them here for nearly a year and posting signed binary packages to my own public repository. Since mere mortals don’t have hooks into GitHub, I use a cron job every 5 minutes to check for new release tag pushes, then a script to compile, package, sign, and post to the repository. No Jenkins involved.
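Roughly speaking, the tag check boils down to something like this (the repo URL, state file, and build script name are placeholders rather than my exact setup):

```bash
#!/bin/bash
# Cron entry, e.g.: */5 * * * * /usr/local/bin/check-hifi-release.sh
# Polls the GitHub repo for a new release tag and kicks off a build if one
# appears. Paths and the build script are placeholders.
set -euo pipefail

REPO="https://github.com/highfidelity/hifi.git"
STATE=/var/lib/hifi-build/last-tag
mkdir -p "$(dirname "$STATE")"

# Newest tag on the remote, by version sort, without cloning the repo.
latest=$(git ls-remote --tags "$REPO" | grep -v '\^{}' \
         | awk -F/ '{print $NF}' | sort -V | tail -n1)

if [[ "$latest" != "$(cat "$STATE" 2>/dev/null)" ]]; then
    echo "$latest" > "$STATE"
    # compile, package, sign, and publish to the apt repository
    /usr/local/bin/build-and-publish-hifi.sh "$latest"
fi
```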

From there, assuming the repo has been added by an end user, updates would occur the same as any other Ubuntu package, though I do help things along by issuing an SSH command to all my servers to update once a new package is posted.
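And the push-out to the servers is little more than a loop over SSH (hostnames and the package name below are placeholders; it assumes key-based SSH and passwordless sudo for apt):

```bash
#!/bin/bash
# Tell every server to pull the freshly published package.
# Hostnames and the package name are placeholders.
HOSTS="hifi1.example.com hifi2.example.com hifi3.example.com"
for h in $HOSTS; do
    ssh "$h" "sudo apt-get update -qq && \
              sudo apt-get install -y --only-upgrade hifi-server" &
done
wait
```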

The real problem with anything dealing with hosting is… the glorious future of distributed, on-demand computing power and massive scaling remains nothing but a dream. Until we have systems where we can scale on demand, divide load into logical zones, and dynamically allocate/deallocate resources… it’s all no different than classic systems like OpenSim, where you’ll always need enough hardware in motion to serve your max load even if, 99% of the time, a fraction of that is all that’s required.

That said, one could, I suppose, write an adaptive load balancing framework, tie it to some cloud computing provider’s API (DO, Amazon, Linode, etc.), and juggle server power on the fly. But… this was all supposedly one of the basic underpinnings of HiFi’s prime difference, in that it would wield power on a massive scale to provide services and allow for massive concurrency in domains. Given the uncertainty of how things might or might not change in the future, the potential market, etc… that’s a lot of work to do on a gamble as a 3rd party.
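To put a number on the work involved: just the “spin up another box” primitive against, say, DigitalOcean’s v2 droplets API would look roughly like this. The droplet size, region, image, and the load check are purely illustrative; the adaptive policy, teardown, and health checking are the real work being left out:

```bash
#!/bin/bash
# Illustrative only: create one extra droplet when the 1-minute load average
# on this box crosses a threshold. A real framework needs a proper scaling
# policy, teardown, and health checks.
TOKEN="$DO_API_TOKEN"   # assumes a DigitalOcean API token in the environment

load=$(cut -d' ' -f1 /proc/loadavg)

if (( $(echo "$load > 3.0" | bc -l) )); then
    curl -s -X POST "https://api.digitalocean.com/v2/droplets" \
         -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"name":"hifi-ac-extra","region":"nyc3","size":"s-1vcpu-2gb","image":"ubuntu-16-04-x64"}'
fi
```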


#11

Yep, still on our roadmap, Omega. Good points. Agree that letting people borrow servers from each other is the big win.


#12

Maybe I’m not getting the architecture… could you not run a beefy large server and just have the place names all point to different locations on a single domain?


#13

Sure, but let’s say you’ve got a beefy server capable of supporting 100 concurrent avatars: you know, something along the lines of a 24-core machine with 32GB RAM (or more) and gigabit symmetric internet connectivity… Maybe I’m just a cheapskate, but I wouldn’t want to foot the bill for such a thing. I’d much rather see, again, the ancient idea of: I have a machine capable of running the domain-server (a lightweight task) that can pull computing power for assignment-client tasks from a dynamic pool of ACs for hire. That’s the difference between several hundred dollars/month (if not > $1000) and $10 to $20, plus peak load charges only when your space needs more than support for 5 … 10 avatars.


#14

Memory leaks? Oh no!


#15

@Cracker.Hax - You codger you! If I did not know you I might be offended by that! JK :rofl:


#16

You know what they say, if in doubt 0 0 * * * it out!


#17

Loving this offer Midnight - thank you :slight_smile:


#18

I am glad you like it @Debs :slight_smile: