Hosting domains across multiple servers


#1

So far I’m loving all the ideas and approaches High Fidelity takes in setting up its framework and functionality, making everything simple, efficient and stable. This is why I wanted to bring up an idea that recently popped into my mind. If this is already possible, I’d like to know more about how it works… if not, it’s a feature I would like to suggest considering in the future.

What brought this to my attention is the way IRC (Internet Relay Chat) works: an IRC network rarely consists of one server, but of a group of servers that sync up with each other to deliver the same content. Different users can be connected to various servers on the same network… but they all see the same channels and the same users on those channels, get the same messages from one another, etcetera. You can get what is called a netsplit if two servers lose touch with each other, which is an unavoidable disadvantage of this method.

I’m wondering whether the server components of High Fidelity could allow for a similar idea. In this case, whether multiple machines running the Hifi server tools can be configured to sync up with each other and mirror the same data. In such a way that any user can connect to any node for that domain, but everyone sees the same entities and people in the environment, whereas any changes made on the region (like adding or removing or editing entities) are instantly reflected to other servers and the players on them.

Since some people might wonder why such a system would be useful, here are the three primary advantages and reasons why I’m a fan of this technique and advocate it where appropriate:

  • Less lag: One of the first outcomes of this system on IRC is that, when a network is run by multiple machines in different countries, users can connect to the node closest to them for the best speed. In our case: let’s presume that a domain intended for English speakers is run on two nodes, one in Britain and one in the United States. When British / European users try to connect to the domain, the main server detects their location and directs them to the British node… whereas US / American users are sent to the US node instead. Although in-world updates between British and US users will remain slower, users from the same country will experience faster responsiveness toward one another, while interactions with the world (whatever isn’t predicted client-side) will be fastest for each side.

  • Decreased risk of downtimes: Let’s assume I run a node for a Hifi domain, and my neighbour runs another node. Because of unforeseen circumstances, I need to bring my server down for a day. If the domain was hosted solely by me like a conventional server, no one would be able to enter that domain until I fix my server up. But in this case, the connection server notices that my machine is down, and connecting users are directed to my neighbour’s node instead. Once my own node is back up and running, it looks at my neighbour’s node to see if anything in the world was edited, and updates my local copy of the world to reflect all changes.

  • A workaround to censorship: Hifi will eventually host content or ideas that the laws of more restrictive countries might take issue with. Being able to host a domain across multiple servers ensures that, if a node in one country must go down for legal reasons, a node in another country still exists and the domain does not disappear entirely. Unless a user’s connection to all nodes in all countries is censored, users everywhere can continue visiting that domain.
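The routing behaviour in the first two bullets can be sketched as a simple node selector. This is a hypothetical illustration, not High Fidelity’s actual directory service; the `Node` structure, hostnames and latency figures are all made up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    host: str           # hypothetical node address
    latency_ms: float   # measured round-trip time from this user
    online: bool        # result of a health check

def pick_node(nodes: list[Node]) -> Optional[Node]:
    """Send the user to the lowest-latency node that is up.
    A node that is down (the 'decreased downtime' case) is simply
    skipped and the next-best node is chosen instead."""
    live = [n for n in nodes if n.online]
    return min(live, key=lambda n: n.latency_ms) if live else None

# A British user pings both nodes; the UK node responds faster.
nodes = [Node("uk.example.net", 25.0, True),
         Node("us.example.net", 110.0, True)]
print(pick_node(nodes).host)   # uk.example.net

# If the UK node goes down, traffic falls through to the US node.
nodes[0].online = False
print(pick_node(nodes).host)   # us.example.net
```

The same selection logic covers both bullets: latency decides the normal case, and the `online` filter provides the failover.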

Of course, there are also threats and disadvantages to the technique. Two I can think of include:

  • What happens if two or more nodes for the same domain lose touch with each other, and during that time changes are made to the domain on each? For instance: someone adds a new entity on one server, while someone removes an existing entity on the other. When the servers sync up again, how will they know whose changes should be mirrored to the other and resolve this conflict? IRC doesn’t have this problem since it doesn’t store anything; it only delivers messages in real time.

  • How can the administrator of the domain make sure that someone running a node can’t sabotage the data? Obviously, the best way is to only have trusted users run nodes for your domain… but being able to safely liberalize node hosting would be much more efficient in the long run. For this, one would have to make sure that a malicious user can’t corrupt data on one node and have other servers sync the broken changes from it.
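One common answer to the first problem is a last-writer-wins merge: each node keeps a timestamp per entity, and deletions are stored as tombstones rather than discarded. This is only one possible strategy, sketched here with made-up entity data, and says nothing about what High Fidelity actually does:

```python
# Each node keeps, per entity ID, a (payload, last-edit-time) pair;
# a None payload is a deletion "tombstone". On resync after a
# netsplit, the newer edit to each entity wins on both nodes.

def merge(a: dict, b: dict) -> dict:
    """Merge two nodes' entity tables, last writer wins."""
    merged = dict(a)
    for eid, (payload, ts) in b.items():
        if eid not in merged or ts > merged[eid][1]:
            merged[eid] = (payload, ts)
    return merged

# During the split: node A adds a lamp, node B deletes the chair.
node_a = {"chair": ("chair-model", 100), "lamp": ("lamp-model", 150)}
node_b = {"chair": (None, 160)}          # tombstone: deleted at t=160

world = merge(node_a, node_b)
# The lamp survives and the chair stays deleted on both nodes.
```

Because the merge only compares timestamps, both nodes converge on the same table regardless of which direction the sync runs. It does assume reasonably synchronized clocks, which is itself a known hard problem.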

What are your overall thoughts on the idea? Are there any possibilities or plans yet for this mechanism?


#2

This was one of the subjects at Wednesday’s meeting; Chris made a recording, see
https://alphas.highfidelity.io/t/meetup-this-week-will-be-on-wednesday-2pm-pst/6879/12


#3

As @ritzo points out, there was a discussion during this week’s meetup, and it’s available in video format. Various aspects of this have also been discussed before.

In fact, most of these concepts have been discussed quite extensively during Friday meetups (and at this week’s Wednesday one).

Another downside to IRC is latency, since it’s a text relay protocol: my Irssi (IRC) client can quite often lag up to a second on a heavy-traffic network. Servers losing communication with each other is also an issue; netsplits would be catastrophic for any gameplay-styled attempts and would relegate HiFi to a glorified 3D chat room (granted, it is that for now, but with more features games will be made, along with domains dedicated to adventure). You may have a good connection to one node, but the sync-up between nodes takes its time.


In any case, to summarise:

This is what has already been discussed:

  • High Fidelity used to have something of this sort: earlier demonstrations showed separate entity servers running different areas of a domain.
  • These were restricted to a specified octree range of the domain, however, and would only control that area. Not much documentation is available yet.
  • Assignment clients are universal “servers that provide their assignment”; per client, this includes:
      • voice transmission
      • physics mediation
      • even asset sharing (discussed on Wednesday)

As a future vision they’ve been hinting at, domain owners may one day **lease these assignment clients** out to others, offering processing power and upload bandwidth to other users for monetary reward, out of which they are looking to build a currency.

Even the physics implementation that has already been done is peer-to-peer. The domain stack has an assignment client dedicated to mediating this and making sure the calculations are correct.

What they discussed during the last meeting was also having assets shared and hosted by various domains.


#4

Thank you for all the info, very interesting.

Yes… there is another downside, which I actually realized while making this thread but, since I was in a hurry, didn’t include: in some cases, this would actually mean more lag, since it will no longer be just “the server sending data to the client” but rather “the server getting data from another server, then sending it to the client”. This intermediate hop might outweigh the speed benefit of getting your data from a server with a better connection, compared to getting it directly from a server to which you have a slower connection. I imagine this depends on a lot of factors, especially how efficient the syncing is… if done right, improvements should be more frequent than drops in performance.

Which brings yet another system to mind: Torrents. When you’re torrenting a large file like a video, the client connects not to the fastest server that has the file, but to all servers that have it… downloading different bits from each. This approach would be even faster: When you first connect to a domain, your client detects all servers that it’s hosted on… then different pieces of the world (areas, entities, models, textures) can be simultaneously downloaded from every server.
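The torrent-style idea above can be sketched as chunks dealt round-robin across mirrors and fetched concurrently. The server names are invented and the download is simulated; a real client would issue network requests where `fetch_chunk` now just fabricates bytes:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mirrors that all host the same domain content.
SERVERS = ["node-uk", "node-us", "node-de"]

def fetch_chunk(server: str, index: int) -> bytes:
    # Simulated download of one piece of the world from one mirror;
    # in reality this would be a network request to `server`.
    return f"chunk{index}".encode()

def fetch_asset(num_chunks: int) -> bytes:
    """Deal the chunks round-robin over all mirrors, download them
    concurrently torrent-style, then reassemble them in order."""
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        futures = [pool.submit(fetch_chunk, SERVERS[i % len(SERVERS)], i)
                   for i in range(num_chunks)]
        return b"".join(f.result() for f in futures)

print(fetch_asset(4))  # b'chunk0chunk1chunk2chunk3'
```

Collecting `f.result()` in submission order is what keeps the reassembled asset correct even though the downloads finish in arbitrary order.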

Servers could also distribute intensive tasks like physics calculations. So for instance, if you have 100 physical objects in your domain and this domain runs on 4 servers, each server calculates the physics of 25 objects. Although this is probably a bad example, since physics are ideally calculated client-side while being cheaply estimated on the server.
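The 100-objects-over-4-servers split works out as a plain round-robin partition. A minimal sketch of just that bookkeeping (nothing here reflects High Fidelity’s actual physics assignment):

```python
def partition(objects: list, num_servers: int) -> list:
    """Deal objects round-robin across servers so the physics load
    is split evenly: 100 objects over 4 servers gives 25 each."""
    buckets = [[] for _ in range(num_servers)]
    for i, obj in enumerate(objects):
        buckets[i % num_servers].append(obj)
    return buckets

shards = partition(list(range(100)), 4)
print([len(s) for s in shards])  # [25, 25, 25, 25]
```

In practice one would likely partition by spatial region instead, so that interacting objects land on the same server, but the even split above is the simplest starting point.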

The ability to lease processing power in general is a very interesting concept. Distributed computing has interested me for a few years now… including the idea that people in need of a quick buck could rent out their CPU over the internet and earn money based on how much of their computer’s resources a client uses. So far there aren’t many such services that I know of, but Hifi is among those that could make great use of the idea!

But ultimately, I remain most interested in being able to mirror and distribute all processes of a domain, including the domain’s database. My hope is that a domain and all its functions can be run simultaneously on multiple machines across the world, so that if any node goes down there is always a spare you are seamlessly reconnected to without noticing any difference. I can however see why it’s tricky to get that working properly… “netsplits” being at the top of the list.


#5

Is this implemented yet?


#6

Not as far as I know, but they are working on it. It’s at least on the list to implement something soon.

Not sure what “soon” is in this case :wink: