I have installed the YAKS operator using an image from my company's Docker image repository, utilising the `yaks install --operator-image <>` option. This works fine: I can see the correct image being used for the operator's k8s pod.
However, when I then run a YAKS test, the test job that gets created reverts to the default `docker.io/citrusframework/yaks:0.20.0` image for the test container.

While this is fine locally, it doesn't work for integration into our CI. Is there some way to ensure the same YAKS image specified at install time is used throughout, or is this a bug/enhancement request?
I am able to work around this by creating my own test job YAML with the correct image and applying it with `kubectl apply`, but this is more error-prone and requires a more complicated CI pipeline script. It would be nice if I could just use the `yaks run` command and let the operator handle everything.
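For reference, the workaround manifest looks roughly like this. This is a minimal sketch, not the exact spec the operator generates: the job name, registry host, and omitted test command/sources are all placeholders; the only point is the overridden `image` field pinning the test container to our internal registry instead of the `docker.io` default.

```yaml
# Hypothetical test job, applied with: kubectl apply -f test-job.yaml
# Name, registry, and container details are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-yaks-test
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: test
          # Override the default docker.io/citrusframework/yaks:0.20.0
          # with the image from the company registry:
          image: registry.mycompany.example/citrusframework/yaks:0.20.0
          # ... test sources, command, and environment as needed ...
```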
Thanks in advance.