Description
Is there an existing feature request for this?
- I have searched the existing issues
What is your feature request?
This is a cross-dependent request related to https://github.com/pixlcore/xysat
The issue/flow: I prefer to update infra using pinned Docker tags (rather than `latest`) to have a clearer view of what's actually running.
As much as I like the new ability to upgrade workers and conductors from the xyOps interface, it's prone to mismatched state. For example: running ghcr.io/pixlcore/xysat:v1.0.11 and then triggering an internal update that brings xySat to v1.0.12 leaves docker-tag != runner-version.
I'm using XYOPS_setup to provision the workers, and every time a container is recreated a new config.json is generated with a new server_id + auth_token.
Currently, as a stopgap, I use a custom start-script: I store the very first config.json and, whenever the container is recreated, swap server_id + auth_token into the newly pulled config.json so it still picks up the latest changes from xyOps.
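For context, the workaround above can be sketched roughly like this (a minimal sketch: the paths, function name, and use of python3 for JSON editing are my own illustrative assumptions, not part of xySat):

```shell
#!/bin/sh
# Sketch of the start-script workaround described above. Paths and the
# helper name are illustrative assumptions, not part of xySat itself.
#
# preserve_identity SEED FRESH
#   SEED  - copy of the very first config.json, kept on a persistent volume
#   FRESH - freshly generated config.json from the current container start
preserve_identity() {
  seed="$1"
  fresh="$2"
  if [ -f "$seed" ]; then
    # Container was recreated: carry the original server_id + auth_token
    # over into the new config, keeping every other fresh setting intact.
    python3 - "$seed" "$fresh" <<'PY'
import json, sys
with open(sys.argv[1]) as f:
    seed = json.load(f)
with open(sys.argv[2]) as f:
    fresh = json.load(f)
fresh["server_id"] = seed["server_id"]
fresh["auth_token"] = seed["auth_token"]
with open(sys.argv[2], "w") as f:
    json.dump(fresh, f, indent=1)
PY
  else
    # First ever start: stash the generated identity for later recreations.
    cp "$fresh" "$seed"
  fi
}
```

Run before the worker process starts, this keeps the server's identity stable in xyOps while still adopting any newly pushed config defaults.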
One possible solution on the xyOps side: accept an extra param in /api/app/satellite/config?t=KEY to "seed" the generation of server_id + auth_token, so the endpoint always returns the same values for the same seed.
Alternatively, on the xySat side, allow for a slightly more dynamic config.json (I can open an issue on the xySat side if either of these makes more sense there):
- ability to set a custom `config.json` path
- env variables for `server_id` + `auth_token`
- store `server_id` + `auth_token` in a Docker named volume, to be swapped in (if present) every time a `curl $XYOPS_setup` is executed.
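To illustrate the env-variable / named-volume variants, a hypothetical invocation could look like the fragment below (`XYSAT_SERVER_ID` and `XYSAT_AUTH_TOKEN` do not exist today; this is only the suggested shape, and the mount path is a placeholder):

```shell
# Hypothetical sketch only: xySat does not currently read these variables.
# The named volume would keep the identity across container recreation, and
# the env vars (or a file on that volume) would override generated values.
docker run -d \
  --name xysat-worker \
  -v xysat-identity:/opt/xysat/identity \
  -e XYSAT_SERVER_ID="$(cat /path/to/server_id)" \
  -e XYSAT_AUTH_TOKEN="$(cat /path/to/auth_token)" \
  ghcr.io/pixlcore/xysat:v1.0.11
```

With either mechanism, recreating the container from a new tag would keep the same server record in xyOps instead of registering a duplicate.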
Another quick option would be to define one server group on the xyOps side per target; newly created worker servers would then always be targeted properly after an update, but this leaves zombie servers that need constant cleanup.
As always, thanks for your work 🙏!
Code of Conduct
- I agree to follow this project's Code of Conduct