
Conversation

@bhushanthakur93

What type of PR is this?

(documentation)

What this PR does / why we need it?

Updated development docs for better onboarding experience.

Which Jira/Github issue(s) this PR fixes?

OCM-19812

Special notes for your reviewer:

Pre-checks (if applicable):

  • Tested latest changes against a cluster
  • Included documentation changes with PR

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Nov 4, 2025
@openshift-ci-robot

openshift-ci-robot commented Nov 4, 2025

@bhushanthakur93: This pull request references OCM-19812 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.


In response to this:

What type of PR is this?

(documentation)

What this PR does / why we need it?

Updated development docs for better onboarding experience.

Which Jira/Github issue(s) this PR fixes?

OCM-19812

Special notes for your reviewer:

Pre-checks (if applicable):

  • Tested latest changes against a cluster
  • Included documentation changes with PR

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from devppratik and rbhilare November 4, 2025 15:22
@rawsyntax
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Nov 5, 2025
@openshift-ci
Contributor

openshift-ci bot commented Nov 5, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bhushanthakur93, rawsyntax
Once this PR has been reviewed and has the lgtm label, please assign clcollins for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@bhushanthakur93
Author

@clcollins could you please review this PR? Thanks!

### Run using cluster routes
#### Run using cluster routes

Run locally using standard namespace and cluster routes.


Below should be changed to `ROUTES=true make run`; the `run-standard-routes` target no longer exists.
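For reference, the suggested invocation would look like this (a sketch; it assumes the Makefile's run target honours a `ROUTES` environment variable, as the comment above describes):

```
# Run the operator locally against the cluster routes
# (assumes the Makefile reads the ROUTES variable, per the comment above)
ROUTES=true make run
```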

Author

done!

- Scale down existing MUO deployment
  ```
  oc scale deployment managed-upgrade-operator -n managed-upgrade-operator --replicas=0
  ```
@Alcamech Nov 14, 2025

When I did this, the namespace for the existing MUO deployment was `openshift-managed-upgrade-operator`:

```
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$ oc get deployment -A | grep managed-upgrade-operator
openshift-managed-upgrade-operator                 managed-upgrade-operator                    1/1     1            1           80m
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$ oc scale deployment managed-upgrade-operator -n managed-upgrade-operator --replicas=0
error: no objects passed to scale namespaces "managed-upgrade-operator" not found
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$ oc scale deployment managed-upgrade-operator -n openshift-managed-upgrade-operator --replicas=0
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: node-role.kubernetes.io/master is deprecated, use "node-role.kubernetes.io/control-plane" instead
deployment.apps/managed-upgrade-operator scaled
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$
```

Author

good catch! fixed it now.
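For clarity, the corrected snippet would point at the `openshift-managed-upgrade-operator` namespace, i.e. something like (a sketch of the fix described in this thread):

```
oc scale deployment managed-upgrade-operator -n openshift-managed-upgrade-operator --replicas=0
```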

- Once the cluster installs, create a user with `cluster-admin` role and log in using `oc` client.
- You will need to be logged in with an account that meets the [RBAC requirements](https://github.com/openshift/managed-upgrade-operator/blob/master/deploy/cluster_role.yaml) for the MUO service account. To do that run
```
oc login $(oc get infrastructures cluster -o json | jq -r '.status.apiServerURL') --token=$(oc create token managed-upgrade-operator -n openshift-managed-upgrade-operator)
```

I think the documentation is a bit misleading because if you follow these instructions and log in with a service account you run into permission errors trying to either create a project or scale down the existing MUO deployment.

I think it should be separated into running for local development vs production replication. AIUI, the service account is only for testing RBAC restrictions to verify the operator works with production permissions.

The setup steps should read something like:

1. Log in as a user with cluster-admin privileges:

   ```
   oc login --token=<your-admin-token> --server=https://api.your-cluster.example.com:6443
   ```

2. Scale down the existing MUO deployment to avoid conflicts:

   ```
   oc scale deployment managed-upgrade-operator -n openshift-managed-upgrade-operator --replicas=0
   ```

3. Choose how to run the operator locally:

   Option A: Run as your admin user (simpler for development)
   - You're already logged in; just proceed to run the operator.

   Option B: Run as the MUO service account (production-like)
   - Switch to the service account context:
     ```
     oc login $(oc get infrastructures cluster -o json | jq -r '.status.apiServerURL') --token=$(oc create token managed-upgrade-operator -n openshift-managed-upgrade-operator)
     ```
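To confirm which of the two contexts is active before running the operator, a quick check along these lines could help (a sketch; `oc whoami` and `oc auth can-i` are standard `oc` subcommands, and the namespace is the one used in the steps above):

```
# Show the identity the current kubeconfig context is using
oc whoami
# List what that identity is allowed to do in the operator's namespace
oc auth can-i --list -n openshift-managed-upgrade-operator
```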

Author

Good point.

I'd like to avoid running as administrator, since extra permissions can let bugs creep in silently later on. So I prefer explicit permissions, i.e. the production-like setup, as the only option for local development.

I updated the docs to clarify where we need cluster-admin privilege. Hope that helps avoid confusion.
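One way to spot-check that a local run stays within the operator's own RBAC rather than admin permissions (a sketch; `upgradeconfigs` is the MUO custom resource, and the exact verbs and resources here are illustrative, not taken from the PR):

```
# Run after switching to the service-account context shown earlier
# Expected to succeed: the MUO cluster role covers its own custom resources
oc auth can-i get upgradeconfigs -n openshift-managed-upgrade-operator
# Expected to fail: the service account is deliberately not cluster-admin
oc auth can-i create projectrequests
```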

@Tafhim
Contributor

Tafhim commented Nov 21, 2025

Hi @bhushanthakur93, could you please look at the latest comments?

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Nov 24, 2025
@openshift-ci
Contributor

openshift-ci bot commented Nov 24, 2025

New changes are detected. LGTM label has been removed.

@bhushanthakur93
Author

> Hi @bhushanthakur93, could you please look at the latest comments?

Yep. @Alcamech PTAL now.

@openshift-ci
Contributor

openshift-ci bot commented Nov 24, 2025

@bhushanthakur93: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
