Quick and Dirty Domain Setup Guide ($20 a Year) (The Dirty Kurty)

#21

There’s likely a way, unless they’re using a custom kernel or some such to block swap. Rather than a partition, you use a swap file, like so:

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

That:
Creates a 4 GB swap file at the root of the filesystem
Changes its permissions so only root can read/write it
"Formats" the file as a swap volume
Enables the swap

If you run free -h before and after those commands, you should clearly see 0 swap before and >0 swap after.

Replace 4G with the size desired, as 4 GB is a bit much on a 10 GB disk. You may get by with 1G, but I wouldn’t count on it.
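If you’d rather size it from the machine’s actual RAM than guess, something like this works (the half-of-RAM-capped-at-1G rule of thumb is my own assumption, not anything the stack requires):

```shell
# Read total RAM from the kernel and suggest a swap size:
# half of RAM, capped at 1024 MB so it stays sane on a small disk.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_mb=$(( ram_kb / 2 / 1024 ))
if [ "$swap_mb" -gt 1024 ]; then swap_mb=1024; fi
echo "Suggested swap size: ${swap_mb}M"
```

Feed that value to fallocate -l in place of 4G.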

Then, to make the swap persist across reboots, modify /etc/fstab to add a line:

/swapfile none swap sw 0 0

And to ensure you only use swap when there’s no alternative, you can add the following values to /etc/sysctl.conf:

vm.swappiness=10
vm.vfs_cache_pressure=50

Those 2 settings make the VM subsystem use swap as a last resort. You can google the mechanisms they adjust if you really want to understand them, but for low-memory machines that occasionally need a bit of large memory, it’s a good setting. Keeps the swap thrashing to a minimum.
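After editing the file, sudo sysctl -p applies the values without a reboot. You can also peek at what the kernel is currently using (stock Ubuntu defaults are 60 and 100):

```shell
# Current values live under /proc/sys; these are what sysctl reads/writes.
cat /proc/sys/vm/swappiness          # default 60; lower = avoid swap longer
cat /proc/sys/vm/vfs_cache_pressure  # default 100; lower = keep fs caches longer
```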

I’m going to ask if they’ll consider looking at the physical RAM available and perhaps not try to do the fancy baking conversions on machines with < 1G, or offer some way to disable it. But… I wouldn’t hold my breath on that one.

1 Like

#22

Dang it, I knew I had tried something like this before, but yeah, they are stopping you from using swapon:

swapon: /swapfile: swapon failed: Operation not permitted

I might have to write a bit more on hosting their files on the VPS as a web service rather than in the ATP, if that starts to play funky with it. Or make an option to turn it off. We’ll have to see when we hit that fork in the road!

0 Likes

#23

Yeah - then, depending on how the ATP thing goes… it might be best to avoid running it. That will require re-writing the systemd scripts to launch individual AC services vs the lump form… which isn’t a bad thing: running only the DS on one of these VPS things is cheap and totally sustainable, since you could have ACs running on bigger iron that talk back to the nice cheap DS VPS, buying computing time from the shared AC pools. (That’s the glorious future of HF, if it ever comes.)

For now, I use something more like this:

[Unit]
Description=High Fidelity VR - Assignment Client
[Service]
EnvironmentFile=/etc/default/hfstack
User=hf
Group=hf
UMask=007
Type=simple
ExecStart=/opt/o2t-hifi/sbin/assignment-client -t3 -a $REMDS
Restart=always
[Install]
WantedBy=multi-user.target

That one starts a single AC process of type 3 (ATP), connecting back to a domain-server at $REMDS, which is defined in /etc/default/hfstack.

If I wanted to run all the ACs and the DS on one machine I’d define $REMDS as 127.0.0.1, but if I wanted to run ACs only on another machine (or machines) and have them talk back to a DS elsewhere, then $REMDS would be set to that machine’s IP.
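For reference, a minimal /etc/default/hfstack might look like this (hypothetical contents; only REMDS is actually referenced by the ExecStart line):

```
# /etc/default/hfstack
# All ACs and the DS on the same box:
REMDS=127.0.0.1
# ...or, for an AC-only machine, the IP of the box running the DS:
#REMDS=203.0.113.10
```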

Assuming you make a systemd script for each possible service, you could then pick which, if any, you want running and enable/disable them on the fly.

Food for thought, should we reach the glorious future where one really needs to support high-end machines and swap processing in/out dynamically based on needed resources.

1 Like

#24

Until something else changes, individual node startup is also the only way to have a proper firewall setup, due to port randomization and possibly strict NATs.

Either way, considering the cost of the servers, having a second one wouldn’t be too bad, though I’m not sure if there’s a possible issue with any user agreements. We’ll have to wait and see…

…actually, an interesting idea: what if the main user, when using the ATP, runs an ATP AC and also connects it to their VPS domain? Couldn’t that then be instructed to handle the baking process?

0 Likes

#25

Man, that’s a super good read, and I have a feeling this kind of modularity might be a good path to take for the long run! Means I can make the dirty kurty even more dirty ( ͡° ͜ʖ ͡°). Also, given how some of the assignment clients are leaking memory like mad, it might make hitting the memory limit a bit more graceful too, haha.

1 Like

#26

tl;dr version:

There’s nothing stopping one from having any number of machines providing AC services to a DS, or changing those, other than 2 things: the ATP needs a common basis such that its files and mappings are always the same should you swap machines, and the entity server needs to have the same models.json.gz. I use NFS and various other magics to ensure my ATP and ES data is available regardless of which droplet I run the respective ACs on.
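As a sketch of the NFS part (host and paths are made up for illustration): each droplet that might run the ATP or entity-server ACs mounts the shared data directory from one export, e.g. with an /etc/fstab line like:

```
# 10.0.0.5 exports the shared HF data (ATP files + models.json.gz) over NFS
10.0.0.5:/srv/hifi   /var/lib/hifi   nfs   defaults,_netdev   0   0
```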

Long version.

Well, the entire architecture, going back to the earliest days when I came in (3 months or so after closed alpha began), was all about being flexible. I thought by now we’d be further toward where things, hopefully, someday will be, but – Interface first, Stack after, it seems. As a server kind of guy I’d, of course, want it the other way around – a solid back end to serve up the pretties – but that’s what happens when you have an Engineer’s brain vs an Artist’s. :slight_smile:

During my testing of simulated very high concurrent avatar loads I used basically this setup…

Domain server on a 1 core Digital Ocean droplet ($5/month type)
Assignment client (AC) for avatar mixer on another DO droplet with 16 CPUs
AC for audio server on another DO droplet with 16 CPUs
AC for entity server, entity script server and ATP server on another DO 4 CPU instance
Then multiple script runners on my local machines and 2 more DO droplets to simulate 200 avatars.

All of those “phoned home” to the DS and ran as if all on one machine – that’s how this is all supposed to work in the glorious future. You have a DS – then you say you’re willing to pay the AC pool for a certain level of service… you don’t have to bother, unless you want to, with juggling AC machines or worrying about this/that/other things; just pay your credits to the pool and have your stuff run. That’s a long way off yet, but certainly where it’s supposed to go someday, and something I look forward to. It would make this truly something with legs vs just another game engine, VR engine or whatever.

1 Like

#27

Haha, and I’m just here doing the dirty kurty ( ͡° ͜ʖ ͡°)

0 Likes

#28

:slight_smile:
@vargink - Unless the VPS provider has a TOS block on it… you may still be able to have that swap file. /sbin/swapon is contained in the package mount, which they have likely either modified in their Ubuntu repository or, more likely, simply not installed in their base images.

Try this to see what their version of mount contains:

dpkg-query -L mount

In stock Ubuntu 16.04 that results in:

$ dpkg-query -L mount
/.
/bin
/bin/mount
/bin/findmnt
/bin/umount
/usr
/usr/share
/usr/share/doc
/usr/share/doc/mount
/usr/share/doc/mount/mount.txt
/usr/share/doc/mount/NEWS.Debian.gz
/usr/share/doc/mount/copyright
/usr/share/doc/mount/examples
/usr/share/doc/mount/examples/mount.fstab
/usr/share/doc/mount/examples/fstab
/usr/share/man
/usr/share/man/man5
/usr/share/man/man5/fstab.5.gz
/usr/share/man/man8
/usr/share/man/man8/losetup.8.gz
/usr/share/man/man8/swapon.8.gz
/usr/share/man/man8/umount.8.gz
/usr/share/man/man8/findmnt.8.gz
/usr/share/man/man8/mount.8.gz
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/mount
/usr/share/bash-completion
/usr/share/bash-completion/completions
/usr/share/bash-completion/completions/findmnt
/usr/share/bash-completion/completions/swapon
/usr/share/bash-completion/completions/losetup
/sbin
/sbin/swapon
/sbin/losetup
/sbin/swapoff
/usr/share/doc/mount/changelog.Debian.gz
/usr/share/man/man8/swapoff.8.gz

It’s likely, if that shows /sbin/swapon, that they simply removed it from /sbin before making their U16.04 image. Assuming they haven’t made any other changes, you could lift a copy of /sbin/swapon from another U16.04 install (for instance, by extracting the mount .deb with dpkg-deb -x on another machine) and just drop it in.

I do NOT suggest reinstalling the mount package even if it indicates swapon is present… if doing so overwrites or modifies fstab, then you’re kinda… screwed.

0 Likes

#29

I had a nightmare you hijacked my domains after reading this thread. Gotta stop forum watching before bed. LOL.

1 Like

#30

On Sept 26, I installed this and everything went well. So I decided to do a step-by-step write-up yesterday: I wiped the server and started over, this time writing down notes. However, I got these errors. So I ran the update script and it said it found a new version, yet I still get the same errors… and when I ran the Dirty Kurty script again it still said there was an update to the script… which makes me think it’s not saving properly?

Err:8 http://ppa.launchpad.net/beineri/opt-qt59-xenial/ubuntu yakkety/main amd64 Packages
404 Not Found
Ign:9 http://ppa.launchpad.net/beineri/opt-qt59-xenial/ubuntu yakkety/main all Packages
Ign:10 http://ppa.launchpad.net/beineri/opt-qt59-xenial/ubuntu yakkety/main Translation-en
Reading package lists… Done
W: The repository ‘http://ppa.launchpad.net/beineri/opt-qt59-xenial/ubuntu yakkety Release’ does not have a Release file.
N: Data from such a repository can’t be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch http://ppa.launchpad.net/beineri/opt-qt59-xenial/ubuntu/dists/yakkety/main/binary-amd64/Packages 404 Not Found
E: Some index files failed to download. They have been ignored, or old ones used instead.
Something went wrong! Here is the error D:

0 Likes

#31

Tells you everything. I only have builds for Xenial (i.e. Ubuntu 16.04.x) in my repository. Yakkety is Ubuntu 16.10. @Vargink mentions in his initial instructions that 16.04 might not initially be an option when creating one of these machines and offers a work-around to get there.

0 Likes

#32

Duh, my bad, clicked on the wrong setup. Thanks for pointing it out.

0 Likes

#33

I had a user attempting to set up an Amazon Lightsail/EC2 Ubuntu 16.04 instance with this script. I thought that was odd, but upon running it myself, I noticed there was indeed an issue:

The user named '~beineri' has no PPA named 'ubuntu/opt-qt59-xenial'
Please choose from the following available PPAs:
 * 'bark':  basysKom Ark
 * 'opt-qt487':  Qt 4.8.7 for /opt Precise
 * 'opt-qt487-trusty':  Qt 4.8.7 for /opt Trusty
 * 'opt-qt502':  Qt 5.0.2 for /opt Precise
 * 'opt-qt511':  Qt 5.1.1 for /opt Precise
 * 'opt-qt511-trusty':  Qt 5.1.1 for /opt Trusty
 * 'opt-qt521':  Qt 5.2.1 for /opt Precise
 * 'opt-qt521-trusty':  Qt 5.2.1 for /opt Trusty
 * 'opt-qt532':  Qt 5.3.2 for /opt Precise
 * 'opt-qt532-trusty':  Qt 5.3.2 for /opt Trusty
 * 'opt-qt542':  Qt 5.4.2 for /opt Precise
 * 'opt-qt542-trusty':  Qt 5.4.2 for /opt Trusty
 * 'opt-qt551':  Qt 5.5.1 for /opt Precise
 * 'opt-qt551-trusty':  Qt 5.5.1 for /opt Trusty
 * 'opt-qt562':  Qt 5.6.2 for /opt Precise
 * 'opt-qt562-trusty':  Qt 5.6.2 for /opt Trusty
 * 'opt-qt562-xenial':  Qt 5.6.2 for /opt Xenial
 * 'opt-qt563':  Qt 5.6.3 for /opt Precise
 * 'opt-qt563-trusty':  Qt 5.6.3 for /opt Trusty
 * 'opt-qt563-xenial':  Qt 5.6.3 for /opt Xenial
 * 'opt-qt571-trusty':  Qt 5.7.1 for /opt Trusty
 * 'opt-qt571-xenial':  Qt 5.7.1 for /opt Xenial
 * 'opt-qt58-trusty':  Qt 5.8 for /opt Trusty
 * 'opt-qt58-xenial':  Qt 5.8 for /opt Xenial
 * 'opt-qt591-trusty':  Qt 5.9.1 for /opt Trusty
Something went wrong! Here is the error D:

I’ll have to see what is going on with this issue.

EDIT: Looks like the repo for qt59 is now gone.
https://launchpad.net/~beineri/+archive/ubuntu/opt-qt59-xenial

EDIT 2: Looks like he redid it as qt591. Testing now.

EDIT 3: Yep, that fixed it. Easy fix.

0 Likes

#34

Ugh - he does that with his PPA from time to time and I usually catch it, but not this time. I’ll update my package instruction thread too.

0 Likes

#35

@Vargink - you’ll need to update the Qt PPA to:

sudo add-apt-repository ppa:beineri/opt-qt591-xenial

0 Likes

#36

Okie dokie, I’ll throw it in in a moment!

0 Likes

#37

I’m the guy FlameSoulis was talking about. Here’s what I’ve got so far…

0 Likes