How To: Build Hifi on Linux & Run a Domain in the Cloud


Hey! I’m hoping that you all can give me some feedback on this blog post I’m working on. My personal motivation is to have a domain that is not on my computer, because I often delete all data during testing. I’d like a place where I can accumulate stuff! Thanks in advance!

This article shows how to build two applications, domain-server and assignment-client, that are needed to run your own virtual world in the cloud!

It assumes that you have some familiarity with using the command prompt / terminal, and that you have a billing account with a cloud provider capable of running an Ubuntu Linux instance.


I have a lot of experience with Amazon, and EC2 is a nice, flexible choice for hosting. I wanted to learn how to use Microsoft Azure, so that's what we use in this article. They also have a $200 free trial! DigitalOcean is another choice with great prices, although I'm less familiar with their offerings.


Deploy an Ubuntu image to your service using their tools. The version to use is Ubuntu 14.04 LTS (a.k.a. Trusty). Make sure to use this exact version, since the packages we install below are specific to it.


In order to remotely administer our domain, we want to access its domain settings page from a web browser. When the Domain Server runs, it serves the settings page on port 40100, so open port 40100 to HTTP connections and we will be good to go. If your service allows you to specify the external port, either choose 40100 for simplicity or choose a unique port and remember it.


Follow the instructions from your cloud provider to connect your terminal to the remote machine via ssh. When you’re connected, you should see a command prompt.


The following steps cover installing the various libraries needed to build High Fidelity on this machine. Some of the steps (downloading Qt, compiling High Fidelity) can take some time, so make yourself comfortable. The entire process should take less than an hour, depending on the hardware you have provisioned.

Copy a single line into the terminal, press Enter, and then wait for a new prompt before copying the next line.


sudo su

apt-get update
apt-get install -y build-essential mesa-common-dev libglu1-mesa-dev libasound2 libxmu-dev libxi-dev freeglut3-dev libasound2-dev libjack0 libjack-dev libxrandr-dev libudev-dev software-properties-common git libssl-dev zlib1g-dev

add-apt-repository ppa:george-edison55/cmake-3.x
apt-get update
apt-get install -y cmake

apt-add-repository ppa:beineri/opt-qt551-trusty
apt-get update
apt-get install -y qt-latest

git clone https://github.com/highfidelity/hifi.git
cd hifi
mkdir build
cd build
cmake .. -DQT_CMAKE_PREFIX_PATH=/opt/qt55/lib/cmake

make domain-server assignment-client

(The "Ctrl-B" then "D" sequences below detach from a tmux session; if tmux isn't installed, run apt-get install -y tmux first.)

tmux new -s domain-server
./domain-server/domain-server
*Now press "Ctrl-B" and then "D" to detach but leave this process running

tmux new -s assignment-client
./assignment-client/assignment-client -n 6
*Now press "Ctrl-B" and then "D" to detach but leave this process running
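For anyone who would rather run one thing than copy lines one at a time, the steps above can be collected into a single script. This is an untested sketch that simply replays the same commands (the clone URL is my assumption based on the hifi directory name; the -y flags on the PPA commands are my addition so the script doesn't pause for input):

```shell
# Write the build steps above into a script we can inspect before running it.
# Assumes Ubuntu 14.04 (Trusty), run as root (e.g. after "sudo su").
cat > build-hifi.sh <<'EOF'
#!/bin/bash
set -e  # stop at the first error

apt-get update
apt-get install -y build-essential mesa-common-dev libglu1-mesa-dev \
    libasound2 libxmu-dev libxi-dev freeglut3-dev libasound2-dev \
    libjack0 libjack-dev libxrandr-dev libudev-dev \
    software-properties-common git libssl-dev zlib1g-dev

add-apt-repository -y ppa:george-edison55/cmake-3.x
apt-get update
apt-get install -y cmake

apt-add-repository -y ppa:beineri/opt-qt551-trusty
apt-get update
apt-get install -y qt-latest

git clone https://github.com/highfidelity/hifi.git
cd hifi
mkdir build && cd build
cmake .. -DQT_CMAKE_PREFIX_PATH=/opt/qt55/lib/cmake
make domain-server assignment-client
EOF

chmod +x build-hifi.sh
bash -n build-hifi.sh && echo "syntax OK"   # syntax-check only, does not execute
```

The heredoc-plus-`bash -n` pattern lets you review and syntax-check the script before actually running it on a fresh machine.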



In your web browser, go to the IP address of your virtual machine, followed by the port we set above. For example: http://&lt;your-vm-ip&gt;:40100

You should now see your domain settings!
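If the page doesn't come up, a common slip is the URL itself, especially if you mapped a custom external port earlier. A tiny sketch of how the address is assembled (the IP here is a placeholder documentation address, not a real server):

```shell
VM_IP="203.0.113.10"   # placeholder - substitute your VM's public IP
PORT=40100             # or the custom external port you chose earlier

SETTINGS_URL="http://${VM_IP}:${PORT}"
echo "${SETTINGS_URL}"
# from a terminal you could sanity-check the server with: curl -sI "${SETTINGS_URL}"
```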


To jump right in, set a temporary place name from your settings page. Copy it, then open Interface. Press Enter to bring up the address bar and type or paste your temporary place name.

Voila! You should now be in a virtual world that belongs to you, running in the cloud and accessible by anyone!


If your virtual machine goes down for some reason, you may get assigned a new IP address. For consistency, or if you’re pointing a place name at your domain, you’ll want to set up a static IP that doesn’t change. Chances are there’s an additional cost for this service, but it usually isn’t that costly.

To connect a permanent place, first make sure you have a static IP. Then, from the Domain Settings page, connect your High Fidelity account: click "Create new domain ID" and then select it from "Choose From My Domains". On the High Fidelity website, connect your place name to this domain ID.


if someone wants to automate that build into a script instead of steps… and also the startup of the applications… that could be helpful :slight_smile:


Re cloud provider. I’m liking github ¬.¬ mostly cos its free


@Judas James is talking here about how to run a server on an Azure hosted machine - github is (I think) only useful for file storage, right?


yeah, you can’t run a domain on git. between the Amazon EC2 free trial and the Microsoft Azure free trial, you could get a year and a half of hosting for free! also, GitHub doesn’t serve files with the right headers, so it has limited use as a CDN


Aye yes, but we don’t want to pay for asset storage if we don’t have to. So it was just a suggestion for that

I was told to use git. Are you saying it’s not suitable for storage? I know that Dropbox doesn’t cache


Happy with my VPS, but technically that’s a type of cloud too.


TBH i haven’t really tried much, but a quick test didn’t work for me importing a model from GitHub

anyhow, if you’re trying to figure out the maths…

with assets you have to consider:

  1. how big they are
  2. how much data will be transferred serving them

at the moment, the asset server uses the disk on the machine it runs on. so you have to look at I/O pricing for the instance and how big its HDD is.

for S3 or Google Cloud or Azure you will have to look at space and I/O pricing, but these are likely to be cheap and fast once they’re cached. one perceived downside at the moment is that anyone can just follow the links and download your assets.

dropbox never caches anything, so it’s slow and not so recommended. but cheap – you pay your monthly fee.
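to make the "how big × how much transfer" point concrete, here’s a back-of-the-envelope sketch. every number in it is made up for illustration; real per-GB prices vary by provider:

```shell
# Rough monthly bandwidth estimate - all numbers below are illustrative only.
ASSET_MB=50            # total size of the domain's assets
VISITS_PER_MONTH=200   # visitors with a cold cache who fetch everything
CENTS_PER_GB=9         # example transfer price, in cents per GB

TRANSFER_GB=$(awk "BEGIN { printf \"%.1f\", ${ASSET_MB} * ${VISITS_PER_MONTH} / 1024 }")
COST_USD=$(awk "BEGIN { printf \"%.2f\", ${TRANSFER_GB} * ${CENTS_PER_GB} / 100 }")

echo "about ${TRANSFER_GB} GB/month, roughly \$${COST_USD}"
```

at this scale the transfer bill is pennies; it only starts to matter once assets are large or traffic is heavy, which is exactly where caching or p2p would help.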

at some point i’m hoping to take advantage of peer-to-peer for loading models, sounds, etc. that way, the more people come to your domain, the faster asset distribution gets. so someone looking for an asset would check with the other visitors first, and only hit the asset server for a download if there’s nobody around. this would be even cheaper than using a CDN like S3… p2p + asset server is the fastest, cheapest combo i can think of.

just some thoughts, hope some people try free accounts and set up their own domains!!


I recommend changing the title to How to Build HiFi using Linux & Run a Domain in the Cloud.

Azure supports Windows virtual machines, and it is dead simple to use the Windows installer to run a domain stack in their cloud.

But getting back to the Linux install, yes, it would help to make a one-click build/installer. That would vastly increase the number of people capable of getting a domain stack going. It is worth doing that to increase the HF attach rate.

#10 is the link u need



aha thanks. didn’t know that worked. sweet – not sure how GitHub decides whether you’re hosting too much or whatever. i’d be a little afraid all my stuff would stop getting served one day, but i’m sure that lots of people are using it for storage.



“1 click installer” sounds nice. Are you going to build that for EC2 or Azure or DigitalOcean or some other service? Is that the level of “1 click” you’re attempting?


righto – you can use whatever VPS you want with these instructions, as long as it can run Ubuntu. whatever’s cheapest / has the best features for you!


@Balpien.Hammerer yes, good point about Linux installers helping grow domain installs. We are also looking at containers to make it easy to start up domains.


It’s not my first priority, so I am not attempting a one-click installer for Azure, although now that you mention it, I’ll play around a bit with it if it doesn’t distract me from the physics stuff I am keenly interested in.


I also don’t know if they will cut me off. Even on a pay service I’d be scared that someone would come into my domain, load it all, and clear cache over and over, putting me in debt to the hosting company. As a rule they only allow so much for free before they charge.
The peer-to-peer method seems best, but for that we have to wait a little longer


per-user data caps are an interesting idea – without p2p, domain operators are definitely vulnerable to that kind of attack, but this is no different than the web.


Although I have a few test assets on my hosted service that I link directly to, that is not how I’d serve the data in a production environment. I’d pass in URLs that would route to a server script to perform DRM, and that script could also determine if rates were too high, returning 503s.

And (much further in the future) having asset brokers to distribute the load and a means to monetize it will be a great, um, asset. :slight_smile:


Yesterday I was looking into running a Windows machine in the cloud through Amazon or Azure. For me, because it would be almost the same as the machines I use now to run Sandbox, I would rather do that than have to build and keep a Linux instance up to date, etc.

And as far as assets go, I still use S3. I would rather use other bandwidth to host stuff that is not part of the domain, for performance reasons. I have used up my free 1-year trial on Amazon, but the highest monthly bill I’ve had so far is $1.36. I am not sure if this will change or not, but until it does I don’t mind paying it.

One other thing: as far as I can tell, it is possible to set up an S3 bucket to only serve stuff to specific applications (interface.exe), so that unless you spoofed it, assets would not be downloadable from a browser etc. I have not tried this yet because I’ve not gotten around to it, and I like to be able to check in a browser whether stuff is how I want it.
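For reference, restricting an S3 bucket by application is typically done with a bucket policy conditioned on the request's User-Agent header. Something along these lines might work (the bucket name and the exact User-Agent string Interface sends are my assumptions, and as noted above the header is trivially spoofable, so this is deterrence rather than real security):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInterfaceUserAgentOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-hifi-assets/*",
      "Condition": {
        "StringLike": { "aws:UserAgent": "*Interface*" }
      }
    }
  ]
}
```

With a policy like this, a plain browser request would be denied while requests whose User-Agent matches the pattern are allowed, which also explains the trade-off mentioned above: you lose the ability to quickly check assets from a browser.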

If anyone else has any experience using a windows based cloud Sandbox I would like to hear about it.


docker run -d --name=stack -v /home/hf/hf-volume/opt omega_herons_docker_container_that_this_is_not-image:1.1 /sbin/my_init --


Let me know if HF is interested in sponsoring it. :stuck_out_tongue: All it requires is a 64-bit Linux install capable of running modern Docker. From a new DO droplet to a running stack in 3 minutes, depending on any lag at DO with provisioning a droplet.