fine grained rbac #7962


Open

wants to merge 28 commits into base: 2.14_stage

Conversation

swopebe
Contributor

@swopebe swopebe commented Jun 16, 2025

No description provided.


openshift-ci bot commented Jun 16, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: swopebe

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@@ -51,6 +51,8 @@ See link:../console/console_intro.adoc#web-console[Web console] for more informa

For new features that are related specifically to {mce-short}, see link:../clusters/release_notes/mce_whats_new.adoc#whats-new-mce[What's new for Cluster lifecycle with {mce-short}] in the _Cluster_ section of the documentation.

- Grant more specific permissions for your virtual machines with fine-grained role-based access control. As a cluster administrator, you can manage and control permissions at the namespace level on managed clusters rather than at the cluster level. See link:../secure_clusters/fine_grain_rbac.adoc#fine-grain-rbac[Managing fine-grained role-based access control] for more information.
Contributor Author

This is the release note.

Contributor Author

need to just add TP status here

swopebe added 4 commits June 16, 2025 15:00
…nternal doc that is close to what is in the demo, prepared steps based on demo.
…nternal doc that is close to what is in the demo, prepared steps based on demo.
…nternal doc that is close to what is in the demo, prepared steps based on demo.
@swopebe swopebe requested review from Ginxo and kurwang June 17, 2025 14:16
@mshort55

mshort55 commented Jun 17, 2025

From the End to End testing google doc, this pre-req part is missing. This is mandatory, or else the kubevirt roles will not be present on the hub cluster. Copying directly from the google doc:

  1. Install necessary policies on hub so that kubevirt roles exist on the hub cluster
  • Confirm that the policy named policy-virt-clusterroles is present in the open-cluster-management-global-set namespace

oc get policy -n open-cluster-management-global-set

  • Label the local-cluster with environment=virtualization

oc label managedclusters local-cluster environment=virtualization

  • Change the policy policy-virt-clusterroles to enforce, which will add the kubevirt clusterroles onto the hub cluster
    (change the remediationAction farthest to the bottom that has the least indentation; there are two, and only the bottom one needs to be changed)

oc edit policy -n open-cluster-management-global-set policy-virt-clusterroles

CHANGE THIS:
remediationAction: inform
TO THIS:
remediationAction: enforce

Note: this will get changed back to inform automatically after some time. This is expected, and we only need it set to enforce for a short time period just so the clusterroles can be added to the hub cluster.
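The inform-to-enforce flip above can also be scripted instead of done through `oc edit` (a sketch; it assumes the field to change is the Policy's top-level `spec.remediationAction`, per the indentation note above):

```shell
# Build the merge patch and sanity-check it before applying
# (python3 is used here only to validate the JSON).
cat > patch.json <<'EOF'
{"spec": {"remediationAction": "enforce"}}
EOF
python3 -c 'import json; print(json.load(open("patch.json"))["spec"]["remediationAction"])'
# prints: enforce

# Then, on the hub cluster (non-interactive alternative to `oc edit`):
#   oc patch policy policy-virt-clusterroles \
#     -n open-cluster-management-global-set \
#     --type merge -p "$(cat patch.json)"
```

Because the policy reverts to `inform` on its own, a scripted patch is also easier to re-run if the clusterroles have not yet appeared.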

@mshort55 mshort55 left a comment

Added multiple changes and considerations. Let me know if there is anything I can do to further assist!

. In your `MultiClusterHub` custom resource `spec.overrides.components` field, set `search` to `enabled` to retrieve a list of managed clusters namespaces that can represent virtual machines that are used for access control.
. Create the `ClusterRoles` resource. If you installed operators that create cluster roles for you, this requirement is already met.
. On the hub cluster `ClusterRole` resource, add the `rbac.open-cluster-management.io/filter: vm-clusterroles` label so that you see cluster roles in the console when you create or edit cluster permissions.
. Ensure you have virtual machines by creating or migrating them on the hub cluster, which gives the managed cluster namespaces.


Not sure what this means: "which gives the managed cluster namespaces.". Maybe we can remove this line, unless I am missing something.

Contributor Author

I was taking content from the issue and watching multiple demos, so at some point it likely was mentioned. Not sure. I can remove it.
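Step 3 of the quoted list above (labeling a hub `ClusterRole` so it appears in the console when you create or edit cluster permissions) can be sketched as follows. The role name and rules here are hypothetical placeholders; only the `rbac.open-cluster-management.io/filter: vm-clusterroles` label comes from the doc text above:

```shell
# Minimal labeled ClusterRole sketch; the name and rules are example values.
cat > vm-viewer-clusterrole.yml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vm-viewer # placeholder name
  labels:
    # This label is what makes the role visible in the cluster-permission console.
    rbac.open-cluster-management.io/filter: vm-clusterroles
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
    verbs: ["get", "list", "watch"]
EOF
grep -q 'vm-clusterroles' vm-viewer-clusterrole.yml && echo rendered
# Apply on the hub cluster: oc apply -f vm-viewer-clusterrole.yml
```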


. As a cluster administrator, click *AccessControlManagement* > *Add Access Control* in the console.
. *Optional:* Turn on the *YAML* option to see the metadata that you enter populate in the *Access Control YAML* window.
. In the _Basic information_ window, add the information for your cluster permission, such as the cluster and the user or group. Choose the cluster or virtual machine with the specific namespace that requires permission.


Suggested change
. In the _Basic information_ window, add the information for your cluster permission, such as the cluster and the user or group. Choose the cluster or virtual machine with the specific namespace that requires permission.
. In the _Basic information_ window, add the information for your cluster permission, such as the cluster and the user or group. Choose the cluster or specific virtual machine namespaces that the user requires permission for.

Contributor Author

Will work on this line, ideally so it does not end with the word "for" but keeps the same meaning.

Contributor Author

In formal technical docs, we do not end a sentence with a preposition, so see how that works. The line previously did say something similar, though.

. As a cluster administrator, click *AccessControlManagement* > *Add Access Control* in the console.
. *Optional:* Turn on the *YAML* option to see the metadata that you enter populate in the *Access Control YAML* window.
. In the _Basic information_ window, add the information for your cluster permission, such as the cluster and the user or group. Choose the cluster or virtual machine with the specific namespace that requires permission.
. Add the `Role Bindings` information, such as the namespaces in the cluster or virtual machine, users or groups, and roles, such as `kubevirt.io:view` for fine-grained role-based access control. You can choose a combination of `RoleBindings`.


Suggested change
. Add the `Role Bindings` information, such as the namespaces in the cluster or virtual machine, users or groups, and roles, such as `kubevirt.io:view` for fine-grained role-based access control. You can choose a combination of `RoleBindings`.
. Add the `Role Bindings` information, such as the namespaces in the cluster, users or groups, and roles, such as `kubevirt.io:view` for fine-grained role-based access control. You can choose a combination of `RoleBindings`.


Users choose namespaces, but not virtual machines

. Check for a `Ready` status in the console, though for the Technology Preview, you might not see a valid status for all possible `ClusterPermissions` combinations.
. Click *Edit access control* to edit the `Role Bindings` and `Cluster Role Binding`.
. *Optional:* Click *Export YAML* to use the resources
//why would they do this?


Just to answer your question: admins can export the YAML and use it as a template for creating Kubernetes resources through GitOps or the CLI.

//why would they do this?
. You can delete the `ClusterPermissions` resource when you are ready.

See the following example of the `ClusterRole` resource, with roles for {ocp-virt-short} and the `ClusterRoleBinding` resource:


This should be worded differently. It is not really meant to be shown as an example; these need to be applied on the hub cluster as workarounds to bugs and limitations. They need to be applied before the user who is being granted permissions logs into the hub, or else their user experience will be poor.

See the notes I added from the End to End Testing doc. Customers need to change cluster names and subjects based on the users/groups they are adding for RBAC through ClusterPermission.

Contributor Author

@swopebe swopebe Jun 18, 2025

We have tasks and concepts in the RBAC folder. We don't document known issues or workarounds within those procedures.

This content was taken from the template and the demos, but the content at the bottom in the google doc was not in the template, so I didn't see it.

This doc will look something like this formally:
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/mce-acm-integration#discover-hosted-acm

(this is why we provide links in the doc issue template so that everyone knows how we format the doc)

Notes are specific, steps are imperative for all users; if a workaround goes into the main doc, it goes in as a step because it is for all readers.

If this YAML is a workaround, that needs to be identified in a separate issue perhaps for troubleshooting. This issue is for the initial documentation procedure.

Limitations, known issues, and so on are release notes. Troubleshooting is also an option. We go into this in the process doc:
https://docs.google.com/document/d/1YTqpZRH54Bnn4WJ2nZmjaCoiRtqmrc2w6DdQxe_yLZ8/edit?tab=t.0#heading=h.9fvyr2rdriby


Sorry for the confusion. I called this a "workaround" because to me that is what it is. However, in reality, it is a setup requirement. The YAML I provided needs to be applied for the feature to work correctly.

= Implementing fine-grained role-based access control (Technology Preview)

{acm} supports fine-grained role-based access control (RBAC). As a cluster administrator, you can manage and control permissions at the namespace level on managed clusters, rather than at the cluster level. Grant permissions to a namespace without granting permission to the entire managed cluster, or virtual machine, to further secure your clusters.

Contributor

Suggestion - rephrasing the last sentence to something like -

Grant permission to virtual machines based on the namespaces they belong to in a managed cluster. You cannot grant permissions to individual virtual machines, but you can grant permissions to all virtual machines in a cluster (yes... we can still do this...)

@Ginxo Ginxo left a comment

Please check my comments. I'm concerned about https://issues.redhat.com/browse/ACM-21610 since we expect the ticket's PR to be merged by the end of the day 🤞 and to be included in 2.14.0.

@swopebe
Contributor Author

swopebe commented Jun 18, 2025

> From the End to End testing google doc, this pre-req part is missing. This is mandatory, or else the kubevirt roles will not be present on the hub cluster. [...]

The end-to-end testing doc is a bit hard for me to follow. We didn't get this info in our template, though. That is where I pulled prereqs and much of the content, from the information in the template.

From the testing doc, if there is something that needs to be documented, it's likely not in this first draft and needs to be added to the PR.

See that there are different prereqs in the issue template:

https://issues.redhat.com/browse/ACM-19799--new

We cannot have a mile-long list of prereqs; at that point we may want to consider reorganizing.

@mshort55

mshort55 commented Jun 23, 2025

Here are the final steps to enable this feature. These have been revised a lot from what we have currently, so please use these as the new default steps. Lots of previous steps have been removed because they are no longer needed. I will put the CLI steps first and the UI steps second, and then we can review them in our next meeting.

CLI:
(all commands to be done on the hub cluster)

  1. Enable fine-grained-rbac-preview in MultiClusterHub
oc edit mch -n open-cluster-management multiclusterhub

CHANGE THIS:
    - configOverrides: {}
      enabled: false
      name: fine-grained-rbac-preview
TO THIS:
    - configOverrides: {}
      enabled: true
      name: fine-grained-rbac-preview
  2. Label local-cluster with environment=virtualization
oc label managedclusters local-cluster environment=virtualization
  3. Change the policy policy-virt-clusterroles to enforce, which will add the kubevirt clusterroles onto the hub cluster
oc edit policy -n open-cluster-management-global-set policy-virt-clusterroles

CHANGE THIS:
  remediationAction: inform
TO THIS:
  remediationAction: enforce

(change the remediationAction farthest to the bottom that has the least indentation; there are two, and only the bottom one needs to be changed)

  4. Create this YAML file named acm-vm-rbac-required.yml to be used with the next CLI command
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: acm-vm-rbac-required # customer can use any name
rules:
  - apiGroups: ["clusterview.open-cluster-management.io"]
    resources: ["kubevirtprojects"]
    verbs: ["list"]
  - apiGroups: ["clusterview.open-cluster-management.io"]
    resources: ["managedclusters"]
    verbs: ["list","get","watch"]
  - apiGroups: ["cluster.open-cluster-management.io"]
    resources: ["managedclusters"]
    verbs: ["get"]
    resourceNames: ["cluster01", "cluster02", "cluster03"] # customer needs to add the managed clusters they want their users and groups to access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: acm-vm-rbac-required # customer can use any name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: acm-vm-rbac-required # this name has to match the above ClusterRole name
subjects:
  - kind: User # customer can choose User or Group
    apiGroup: rbac.authorization.k8s.io
    name: user1 # customer needs to specify user or group name
  5. Apply the above YAML
oc apply -f acm-vm-rbac-required.yml

OPTIONAL:

To enable Grafana access for a specific user and cluster (if observability is enabled):

  1. Create the file observability-grafana-access.yml:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: observability-grafana-access # customer can give this any name
  namespace: cluster01 # customer needs to specify a specific managed cluster name
subjects:
  - kind: User # customer can choose User or Group
    apiGroup: rbac.authorization.k8s.io
    name: user1 # customer needs to specify user or group name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
  2. Apply the YAML
oc apply -f observability-grafana-access.yml
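The optional Grafana binding above can also be rendered from variables so the two customer-specific values stand out (a sketch; `cluster01` and `user1` are the placeholders from the example):

```shell
# CLUSTER and SUBJECT are the two values the customer must replace.
CLUSTER=cluster01
SUBJECT=user1
cat > observability-grafana-access.yml <<EOF
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: observability-grafana-access # customer can give this any name
  namespace: ${CLUSTER} # managed cluster namespace on the hub
subjects:
  - kind: User # customer can choose User or Group
    apiGroup: rbac.authorization.k8s.io
    name: ${SUBJECT}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
EOF
grep -q "namespace: ${CLUSTER}" observability-grafana-access.yml && echo rendered
# oc apply -f observability-grafana-access.yml
```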




UI:

  1. Enable fine-grained-rbac-preview in MultiClusterHub
local-cluster > Operators > Installed Operators > Advanced Cluster Management for Kubernetes > MultiClusterHub tab > Edit MultiClusterHub > YAML tab

CHANGE THIS:
    - configOverrides: {}
      enabled: false
      name: fine-grained-rbac-preview
TO THIS:
    - configOverrides: {}
      enabled: true
      name: fine-grained-rbac-preview

Click Save
  2. Label local-cluster with environment=virtualization
All Clusters > Infrastructure > Clusters > local-cluster > Actions > Edit labels

Enter: environment=virtualization

Click Save
  3. Change the policy policy-virt-clusterroles to enforce, which will add the kubevirt clusterroles onto the hub cluster
Governance > Policies tab > policy-virt-clusterroles > Actions > Edit > Enable YAML editor

CHANGE THIS:
  remediationAction: inform
TO THIS:
  remediationAction: enforce

Click Save

(change the remediationAction farthest to the bottom that has the least indentation; there are two, and only the bottom one needs to be changed)

  4. Create ClusterRole
local-cluster > User Management > Roles > Create Role

Copy and paste in the below YAML into the YAML editor:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: acm-vm-rbac-required # customer can use any name
rules:
  - apiGroups: ["clusterview.open-cluster-management.io"]
    resources: ["kubevirtprojects"]
    verbs: ["list"]
  - apiGroups: ["clusterview.open-cluster-management.io"]
    resources: ["managedclusters"]
    verbs: ["list","get","watch"]
  - apiGroups: ["cluster.open-cluster-management.io"]
    resources: ["managedclusters"]
    verbs: ["get"]
    resourceNames: ["cluster01", "cluster02", "cluster03"] # customer needs to add the managed clusters they want their users and groups to access

Click Create
  5. Create ClusterRoleBinding
local-cluster > User Management > RoleBindings > Create binding

Binding type: Cluster-wide role binding
RoleBinding Name: acm-vm-rbac-required (can be any name)
Role name: acm-vm-rbac-required (HAS to match the name of previously created ClusterRole)
Subject: select User or Group and enter in User or Group name
Click Create

OPTIONAL:

To enable Grafana access for a specific user and cluster (if observability is enabled):

  1. Create RoleBinding:
local-cluster > User Management > RoleBindings > Create binding

Binding type: Namespace role binding (RoleBinding)
RoleBinding Name: observability-grafana-access (can be any name)
RoleBinding Namespace: (customer must choose managed cluster name from drop down menu)
Role name: view
Subject: (customer must select User or Group and enter in User or Group name)
Click Create

@mshort55

mshort55 commented Jun 23, 2025

Here are CLI instructions for adding access control:
(run commands from hub cluster)

  1. Create a YAML file named cluster01-prod-admin.yml with the desired RBAC permissions. Here is an example:
apiVersion: rbac.open-cluster-management.io/v1alpha1
kind: ClusterPermission
metadata:
  name: cluster01-prod-admin # customer can use any name
  namespace: cluster01 # customer must specify a specific managed cluster name
spec:
  roleBindings:
    - name: cluster01-prod-admin # customer can use any name
      namespace: prod # customer must specify the namespace in the managed cluster that they are granting access to
      roleRef:
        name: kubevirt.io:admin
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
      subjects:
        - kind: User # customer can choose User or Group
          apiGroup: rbac.authorization.k8s.io
          name: user1 # customer needs to specify user or group name
  2. Apply the YAML:
oc apply -f cluster01-prod-admin.yml
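The same ClusterPermission can be rendered from a few variables, which makes the customer-supplied values explicit (a sketch; `cluster01`, `prod`, and `user1` are the placeholders from the example above):

```shell
# CLUSTER, NAMESPACE, and SUBJECT are the values the customer must replace.
CLUSTER=cluster01
NAMESPACE=prod
SUBJECT=user1
cat > "${CLUSTER}-${NAMESPACE}-admin.yml" <<EOF
apiVersion: rbac.open-cluster-management.io/v1alpha1
kind: ClusterPermission
metadata:
  name: ${CLUSTER}-${NAMESPACE}-admin # customer can use any name
  namespace: ${CLUSTER} # must be a managed cluster namespace on the hub
spec:
  roleBindings:
    - name: ${CLUSTER}-${NAMESPACE}-admin
      namespace: ${NAMESPACE} # namespace on the managed cluster being granted
      roleRef:
        name: kubevirt.io:admin
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
      subjects:
        - kind: User # customer can choose User or Group
          apiGroup: rbac.authorization.k8s.io
          name: ${SUBJECT}
EOF
grep -q 'kind: ClusterPermission' "${CLUSTER}-${NAMESPACE}-admin.yml" && echo rendered
# oc apply -f "${CLUSTER}-${NAMESPACE}-admin.yml"
```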

. Review and click *Create permission*.
. Check for a `Ready` status in the console, though for the Technology Preview, you might not see a valid status for all possible `ClusterPermissions` combinations.
. Click *Edit permission* to edit the `Role bindings` and `Cluster role binding`.
. *Optional:* Click *Export YAML* to use the resources for GitOps or in the terminal.


Add ClusterPermission explanation to this

@swopebe
Contributor Author

swopebe commented Jun 25, 2025

Please check my comments. I'm concerned about https://issues.redhat.com/browse/ACM-21610 since we expect the ticket's PR to be merged by the end of the day 🤞 and to be included in 2.14.0.

Yes, but I could not move forward with the conflicting information I had and still create proper documentation, so we needed clarity and a cleaner draft. This content needs to look like the rest of the ACM content. After a couple of meetings, we have the steps ironed out.

In the doc process, you can close your issues, since the doc merge always comes after the code merge due to workload. This is why the doc team needs their own issues for the release work. You are free to close your issue knowing that we are working on the feature doc.

+
*Note:* Run `oc get mch -A` to get the name and namespace of the `MultiClusterHub` resource if you do not use the `open-cluster-management` namespace.

. Label your `local-cluster` with `environment=virtualization`. Run the following command:
Contributor Author

Why does the user do this?

----
remediationAction: enforce
----
//I think we should actually break this out a bit to show them.
Contributor Author

It may actually be better to break this out a bit to show them in the YAML. I didn't see an example of how we say this in the doc, and we don't want to use directional language.

resources: ["managedclusters"]
verbs: ["get"]
resourceNames: ["cluster01", "cluster02", "cluster03"] <2>
---
Contributor Author

@swopebe swopebe Jun 25, 2025

We originally had ClusterRole and ClusterRoleBinding in two different steps, before the meetings and such. We also have this separated in the console process, but all together in this process.

----
oc apply -f user-observability-grafana-access.yml
----

Contributor Author

The CLI procedure is ready for tech review; see the comments above.
