The last pushed git code for the stack references /home/user/.config/High Fidelity - dev/ for its configuration instead of the correct /home/user/.config/High Fidelity/ - red alert on this one, as it leads to the stack starting with a blank DS config, effectively breaking the domain.
When the new domain-server starts, it should copy the config file from the old location to the new one. Do you have your log from the run of the new domain-server?
Okay, I understand what happened here. You’re running a version of the domain-server you built yourself (hence the “dev” part of the path). That used to look in just High Fidelity, but we changed that to use a specific path for each release type.
The migration code for the domain-server migrates only from production to production. Can you please stop your domain-server, manually copy the file over, and start the domain-server again?
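For anyone else hitting this, the manual copy might look something like the sketch below. This is an assumption-heavy sketch, not an official migration script: the paths come from this thread, and it copies the whole settings directory rather than guessing individual filenames. Stop the domain-server first.

```shell
# Manual migration sketch for a self-compiled ("dev") build, assuming the
# paths mentioned in this thread. Adjust OLD_DIR/NEW_DIR for your setup.
OLD_DIR="$HOME/.config/High Fidelity"
NEW_DIR="$HOME/.config/High Fidelity - dev"

if [ -d "$OLD_DIR" ]; then
    # Copy the whole settings directory so nothing is left behind.
    mkdir -p "$NEW_DIR"
    cp -a "$OLD_DIR/." "$NEW_DIR/"
    echo "Copied settings from '$OLD_DIR' to '$NEW_DIR'"
else
    echo "Nothing to migrate: '$OLD_DIR' does not exist"
fi
```

Then restart the domain-server and check the web console to confirm your settings came back.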
Already copied and fixed - but that pretty much breaks all the Linux servers, since they’re all self-compiled, unless their operators are prepared to fix the issue manually.
Yes, I apologize. We deploy our domain-server and assignment-client instances tagged as “production” - I hadn’t considered the dev-to-production migration path for self-compiled servers.
Hopefully this thread is sufficient to let those users know they will need to copy domain-server settings manually.
Did your entities copy across properly? Those are also now at a new path.
My models.json.gz and ATP assets are fine, but that may be due to my having set an explicit full path to resources… others may not be as lucky, so - be ready.
In my case I build DS/AC on a Jenkins instance, and I do (or did) add a version string - i.e. my web console used to display something like version o2t-8192 instead of dev. Now it’s back to dev, so I’ll need to track down where that’s set now for my sed magic. That setup then auto-deploys new AC/DS builds to my domain servers at remote locations, which is why I pretty much instantly found Heron to be “gone”.
The models.json.gz and ATP assets would have been migrated correctly from beside the binary to their new path.
In your case, if you’ve changed that path via the domain-server settings, no migration would have been needed.
In SetPackagingParameters you could set a custom BUILD_VERSION if you want your binaries to have a custom version string.
If you set that custom version string, you may want to make sure it does not also apply to BUILD_ORGANIZATION (as it currently does), since that would change your domain-server config path and assignment-client data path.
Thanks - yes, I was setting a build version in a header file, which worked before (it just placed the current Jenkins build # in before compiling). I’ll probably just leave it as dev until I work through the info you gave - it was more a convenience thing, to see if my remotes actually matched my Jenkins builds. Thank you again for that!
I heard from Chris that more people than I expected are running compiled versions of the domain-server and assignment-client. I’d like to avoid having too many people run into this.
That pull, when merged, tweaks the migration code so it will do the right thing when you run the domain-server for the first time.
If you’ve already run it, you’ll need to do the manual copy process I described above.
Thanks again, @OmegaHeron!
Just checked, using the BUILD_VERSION environment variable setting from Jenkins in my build script. That works great for my dockerized everything-deploy system - it actually makes things far simpler in some ways. Doing that, I end up referencing the proper configuration location and get to see my ever-increasing build numbers in the web display again.
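For reference, a hypothetical sketch of that kind of Jenkins build step, exporting BUILD_VERSION before configuring the build. BUILD_NUMBER is Jenkins’ standard build counter, and the o2t- prefix is just the naming used earlier in this thread; the cmake/make lines are left commented since exact targets vary by checkout.

```shell
#!/bin/sh
# Hypothetical Jenkins build-step sketch: export BUILD_VERSION so the
# build picks it up from the environment. BUILD_NUMBER is provided by
# Jenkins; default it here so the script also runs standalone.
export BUILD_VERSION="o2t-${BUILD_NUMBER:-0}"

echo "Building domain-server/assignment-client as $BUILD_VERSION"
# mkdir -p build && cd build
# cmake .. -DCMAKE_BUILD_TYPE=Release
# make domain-server assignment-client
```

Per the note above about BUILD_ORGANIZATION, it’s worth double-checking after a build that the config path the binaries use is still the one you expect.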
Well, right now I have no content on my Ubuntu server at all.