# enhancements/authentication: Pod Security Admission Enforcement #1747
## Conversation
> #### Set PSS Annotation: `security.openshift.io/MinimallySufficientPodSecurityStandard`
>
> The PSA label syncer must set the `security.openshift.io/MinimallySufficientPodSecurityStandard` annotation.
> Because users can modify `pod-security.kubernetes.io/warn` and `pod-security.kubernetes.io/audit`, these labels do not reliably indicate the minimal standard.

---

What prevents users from modifying the new annotation?

---

Modifying the annotations is a big no-no for our customers, at least if they don't own them.
As a user, you can only worsen your situation by modifying that annotation.

---

Could you add a VAP that prevents anyone other than a particular service account from modifying the value to the annotation?
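For illustration, a minimal sketch of what such a policy could look like. The policy name and the syncer's service-account username are placeholder assumptions, and a `ValidatingAdmissionPolicyBinding` would still be needed to put it into effect:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: protect-minimally-sufficient-pss # placeholder name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["UPDATE"]
        resources: ["namespaces"]
  variables:
    # The guarded annotation key.
    - name: key
      expression: '"security.openshift.io/MinimallySufficientPodSecurityStandard"'
    # Annotation maps of the old and new Namespace objects (empty map if unset).
    - name: oldAnn
      expression: "has(oldObject.metadata.annotations) ? oldObject.metadata.annotations : {}"
    - name: newAnn
      expression: "has(object.metadata.annotations) ? object.metadata.annotations : {}"
  validations:
    # Accept the request if the annotation value is unchanged, or if it comes
    # from the syncer's service account (username below is an assumption).
    - expression: >-
        ((variables.key in variables.newAnn ? variables.newAnn[variables.key] : "") ==
        (variables.key in variables.oldAnn ? variables.oldAnn[variables.key] : "")) ||
        request.userInfo.username == "system:serviceaccount:openshift-infra:psa-label-syncer"
      message: Only the PSA label syncer may change this annotation.
```

CREATE operations would also need covering if the annotation must only ever be set by the syncer.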

---

```go
// TargetModeConditional indicates that the user is willing to let the cluster
// automatically enforce a stricter enforcement once there are no violating Namespaces.
// If violations exist, the cluster stays in its previous state until those are resolved.
// This allows a gradual move towards label and global config enforcement without
// immediately breaking workloads that are not yet compliant.
TargetModeConditional PSATargetMode = "Conditional"
```

---

Is there a particular reason why we keep the cluster in the previous state? Is there another middle ground we can use?
The automatic enforcement thing is something that gives me pause and could be prone to some confusion. What if we had something like:
- Privileged - no pod security restrictions
- RestrictedExceptViolating - strictest possible enforcement, automatically exempting namespaces that would violate
- Restricted - strictest possible configuration. Does not roll out if there are violating namespaces.
Also, do we already have an API mapping to the PodSecurity Admission Controller configuration? If not, would having an API surface that maps to that help us come up with something that isn't an all-or-nothing type transition?

---

The mapping to the PodSecurity Admission configuration is linked in the enhancement.
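For readers following along, that upstream plugin configuration has roughly this shape (field names per the Kubernetes PodSecurity admission docs; the values shown are just an example):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      # Defaults apply only to Namespaces without the corresponding
      # pod-security.kubernetes.io/* label.
      defaults:
        enforce: "privileged"
        enforce-version: "latest"
        audit: "restricted"
        audit-version: "latest"
        warn: "restricted"
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: []
```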

---

It is an interesting idea, but how would such a thing work?
It would be pointless if violating namespaces were added dynamically to such a list, wouldn't it?
So would it be a list that is set initially and could only be reduced? What if there are, like, 100 namespaces out of 102 in there? Does it signal to the customer that they are "enforcing restricted" (a false sense of security)?

---

Giving the customer the capability to exclude namespaces upfront in a central place (through an API that maps to the PodSecurity configuration) would be an interesting concept, but it would mean that the kube-apiserver needs to roll out a new version. It would be easier to label the particular namespace by hand with `privileged`.

---

> The mapping to the PodSecurity Admission configuration is linked in the enhancement.

Ah, it looks like we don't allow for any configuration from the customer side to control the granularity of this beyond labeling the namespaces. I do wonder if it would make sense to provide the customer more nuanced control through an API that maps to this with some of our own defaulting logic? Why don't we do this today?
> It is an interesting idea, but how would such a thing work?
I haven't gone into detailed thinking of how, but my general line of thinking is that instead of all-or-nothing, something like `RestrictedExceptViolating` could set the default enforcement to restricted with some kind of configuration to not enforce that mode on Namespaces that would violate the restricted enforcement mode. I have to spend some more time thinking on what approach would be best, but I think that could be one of (a sketch follows the list):

- configure the `exemptions.namespaces` list in the PodSecurity admission plugin configuration with the violating namespaces, or
- a controller adds a label to the violating namespaces to specify that the namespace should be evaluated with the privileged mode.
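A rough sketch of the first option, assuming some controller keeps the exemption list in sync (namespace names are placeholders):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "restricted"
        enforce-version: "latest"
      exemptions:
        # Violating Namespaces discovered at rollout time (placeholders).
        namespaces:
          - legacy-app
          - third-party-agent
```

The second option would instead leave the global default at restricted and have a controller label only the violating Namespaces with `pod-security.kubernetes.io/enforce: privileged`.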
> It would be pointless if violating namespaces were added dynamically to such a list, wouldn't it?
> So would it be a list that is set initially and could only be reduced? What if there are, like, 100 namespaces out of 102 in there?

From the motivation section of this enhancement, it sounded like we are pretty confident that the number of violating namespaces is generally pretty low. The scenario you point out is definitely something that could happen, but how likely is it?
> Does it signal to the customer that they are "enforcing restricted" (a false sense of security)?

Maybe, but I would expect that we should still have some mechanism in place to bring violating namespaces to the attention of cluster administrators. Is it better to be "secure by default, except for these cases we have made you aware of" or "not secure at all"?

---

I think we discussed this in one of the recent Arch Calls, right?
It is nearly impossible for automated auditing tools to recognize, which would make us incompatible with some auditing tools, and that is an important thing for us to avoid.
The OCP Arch Call - 4th of February 2025

---

I think that was specifically related to the `exemptions` piece. I believe a labeling/annotation approach would be auditable because it would be making an update call to the kube-apiserver to update the namespace labels/annotations.

---

We could modify the enhancement such that the PSA label syncer enforces labels where it works and doesn't where it does not.
Currently, I propose that if not all of those namespaces would work, we don't proceed to label them at all and wait for those violations to be resolved.

---

Did another pass focusing on the new API. Left a couple more ideas for how we may be able to get away from an "automated" transition thing.
The automated transitioning piece is still giving me pause, and if we can come up with a solution that achieves the same goal with a fully declarative approach, it might result in a better user and maintainer experience.

---

> In addition to adjusting how the `OpenShiftPodSecurityAdmission` `FeatureGate` behaves, administrators need visibility and control throughout this transition.
> A new API is necessary to provide this flexibility.
>
> #### New API

---

@sjenning asked in Slack which group this API would be added to. It would be good to include that information here.

---

According to the very first pitch in the Arch Call and @JoelSpeed's comment, I would put it into `config/v1alpha1`. I think this is the right package for configs that should be modifiable by users.

---

// to "Restricted" on the kube-apiserver. | ||
// This represents full enforcement, where both Namespace labels and the global config | ||
// enforce Pod Security Admission restrictions. | ||
EnforcmentModeFull PSAEnforcementMode = "FullEnforcement" |

---

Is this just `Restricted`?

---

It is the "LabelEnforcement" mode, which tells the PSA label syncer to set the enforce label AND on top of that it modifies the PodSecurity configuration of the kube apiserver to restricted by default.
Should I revisit the comment above to make it more explicit?
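For concreteness, a sketch of the combined end state this describes; the namespace name and label value are illustrative:

```yaml
# A Namespace labeled by the PSA label syncer...
apiVersion: v1
kind: Namespace
metadata:
  name: example-app # placeholder
  labels:
    pod-security.kubernetes.io/enforce: restricted
# ...combined with the kube-apiserver's PodSecurity default set to
# "restricted" (see the AdmissionConfiguration sketch earlier in this thread).
```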

---

```go
// in a fully privileged (pre-rollout) state, ignoring any label enforcement
// or global config changes.
```

---

Would this mode actually ignore label enforcement?
If I am following https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller correctly, it seems like the defaults only apply when there are no labels. Is `Privileged` effectively the same as disabling the `PodSecurity` admission plugin?

---

If `TargetModePrivileged` is set, we effectively disable any restrictions, which means the cluster should behave as all clusters did before.
In detail, we could argue about how we would like to do it. I would just revert everything, like removing the enforce label set by the PSA label syncer and setting the global config back to privileged.
Logically (based on the name) it would be more correct to set the enforce label to privileged. But as mentioned above, I would like to keep the permutations of possible states as minimal as possible.

---

I'm trying to understand the expected behavior from a user perspective. If I, as a user, were to specify `Privileged` as the target mode, what happens? If in this state I add a PSA label to a namespace to enforce a different mode like `restricted` or `baseline` for that namespace, will it be enforced?
I think there is a distinct difference between "I don't want Pod Security at all" and "I want the most privileged Pod Security default, but still respect anything that would override the default" and I'm trying to make sure I understand which one of those this value is supposed to represent.

---

This value currently represents (currently, as in this enhancement) the intent "I don't want Pod Security at all".
From a user perspective it might be more logical to have it actually mean "I want the most privileged Pod Security default"...
...until something goes wrong and they want "the previous state, before it started to fall apart".
I am fine with both ways.

---

For the "I don't want Pod Security at all", maybe something like None
is more reflective of that and in that case should disable the Pod Security admission plugin all together?

---

```go
TargetModeConditional PSATargetMode = "Conditional"

// TargetModeRestricted indicates that the user wants the strictest possible
// enforcement, causing the cluster to ignore any existing violations and
```

---

Instead of ignoring existing violations, should the cluster operator go degraded with any existing violations being the reason?
Maybe a knob for users to control this would be useful here to support use cases of:

- Forcefully enable `restricted` PSA configuration
- Enable `restricted` PSA configuration only if no violations (I realize this is what the `Conditional` enum type is for, but maybe that shouldn't be a separate option?)

---

Note: I elaborated a bit more on my thinking here in #1747 (comment)

---

```go
// PSATargetMode reflects the user’s chosen (“target”) enforcement level.
type PSATargetMode string

const (
```

---

Should there be an option that is reflective of the `LabelEnforcement` enforcement mode that would be specified in the status?

---

We could, but I mentioned it in the Non-Goals.
The reason is that I want to keep the permutations of possible states as minimal as possible.

---

I don't know that this question relates to the non-goals you outlined. What target mode(s) might result in the `LabelEnforcement` enforcement mode being populated in the status?

---

It is part of this non-goal.
I am not eager to maintain too many states. Either you are privileged, restricted, or on the way progressing forward through "LabelOnly" to "FullEnforcement".

---

```go
type PSAEnforcementConfigSpec struct {
	// targetMode is the user-selected Pod Security Admission enforcement level.
	// Valid values are:
	// - "Privileged": ensures the cluster runs with no restrictions
	// - "Conditional": defers the decision to cluster-based evaluation
	// - "Restricted": enforces the strictest Pod Security admission
	//
	// If this field is not set, it defaults to "Conditional".
	//
	// +kubebuilder:default=Conditional
	TargetMode PSATargetMode `json:"targetMode"`
}
```
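For reference, a minimal custom resource using this spec might look like the following; the group/version and kind are assumptions based on the `config/v1alpha1` discussion above:

```yaml
apiVersion: config.openshift.io/v1alpha1 # assumed group/version
kind: PSAEnforcementConfig               # assumed kind name
metadata:
  name: cluster
spec:
  targetMode: Conditional # or Privileged / Restricted
```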

---

Continuing from my comment on the `Restricted` target mode, maybe instead of only being able to specify a single enum value, this should be a discriminated union that enables more flexibility in configuration?

To further show my line of thinking, to represent the `Privileged` option a user would still do:

```yaml
targetMode: Privileged
```

To represent a forceful enablement of `Restricted`:

```yaml
targetMode: Restricted
restricted:
  violationPolicy: Ignore
  # room for future configuration options
```

To represent an enablement of `Restricted` only if there are no violations (would go degraded on failure to enable):

```yaml
targetMode: Restricted
restricted:
  violationPolicy: Block
```

To facilitate the transition case that the proposed `Conditional` targetMode seems to be targeting, maybe the default can be something like:

```yaml
targetMode: Restricted
restricted:
  violationPolicy: LabelMinimallyRestrictive
```

Where any violating namespaces encountered will be labeled with the appropriate minimally restrictive PSS mode (which might be privileged), prior to attempting to set the default enforced by the `PodSecurity` admission controller to `Restricted`.

---

It is slightly more expressive. Maybe something like:

```yaml
targetMode:
  level: restricted
  violationPolicy: Block
```

or

```yaml
targetMode: restricted
violationPolicy: Block
```

The default could be `Block` then, which would also work for `targetMode: privileged` 😄

With `LabelMinimallyRestrictive`... the label syncer isn't able to determine this yet, and it changes its viewpoint. Currently it determines the future outlook based on the SA-based RBAC. Now it would need to have another level of concern: testing individual workloads against up to two levels (do you satisfy restricted, do you satisfy privileged?).

---

> Needs to be evaluated.
>
> ### Baseline Clusters

---

Moved here:

> Is there any particular reason there isn't an option for the Baseline enforcement level? Is this enforcement level one that users may want to configure?

https://github.com/openshift/enhancements/pull/1747/files#r1941774952

---

Yeah, we could do that. It technically doubles the effort, but `O(n) == O(2n)`, right? :D
We would check the cluster for namespaces that have no enforce label and try restricted on them. If it fails, we would try baseline. If all of the remaining namespaces support at least baseline, we could move to baseline.
I am hesitant, as across the whole reviewing section, to add different states to reason about, but this is minimal overhead for a significant gain.

---

A lot of my questions probably result from a lack of knowledge about PSA and how it actually works/what configurations exist today. Where can I learn more about the various options for its configuration so that I can better review this EP?

---

> This enhancement introduces a **new cluster-scoped API** and changes to the relevant controllers to roll out [Pod Security Admission (PSA)](https://kubernetes.io/docs/concepts/security/pod-security-admission/) enforcement [in OpenShift](https://www.redhat.com/en/blog/pod-security-admission-in-openshift-4.11).
> Enforcement means that the `PodSecurityAdmissionLabelSynchronizationController` sets the `pod-security.kubernetes.io/enforce` label on Namespaces, and the PodSecurityAdmission plugin enforces the `Restricted` [Pod Security Standard (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/) globally on Namespaces without any label.
>
> The new API allows users to either enforce the `Restricted` PSS or maintain `Privileged` PSS for several releases. Eventually, all clusters will be required to use `Restricted` PSS.

---

Is there a way to add an exception for a certain namespace by labelling it in some way to remain privileged for now?
E.g. if I had one violating namespace, could I just excuse that one for now and then have the rest of the cluster be restricted?

---

Yes, a user is able to set the Pod Security Admission labels for a namespace.
The `PodSecurityAdmissionLabelSynchronizationController` won't interfere with such labels once it has lost ownership of them to the user.
This is even necessary for a user who doesn't want to set up a ServiceAccount properly and instead relies on user-based SCCs, as the `PodSecurityAdmissionLabelSynchronizationController` doesn't track user-based SCCs and isn't able to assess the appropriate labels correctly.
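For example, excusing a single violating namespace might look like this (namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: legacy-app # placeholder
  labels:
    # User-owned enforce label; the PSA label syncer will not fight over it.
    pod-security.kubernetes.io/enforce: privileged
```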

---

> Eventually, all clusters will be required to use `Restricted` PSS.
How and why? There will be no get-out-of-jail card at all eventually? Why is this not a cluster admin's choice?

---

Secure by default? 😄
We would like to nudge our users to move to the most secure setup at some point. @deads2k suggested that there could be a period where we tolerate being `Privileged` and start blocking upgrades after x releases.
There is no reason to run with a `Privileged` global setup. A user can set namespaces individually to match the required PSS if they are not properly labeled by the `PodSecurityAdmissionLabelSynchronizationController` (which should only happen in the case that the SCCs are user-based).

---

I will add a section on this to the Motivation.

---

> Although these numbers are now quite low, it is essential to avoid any scenario where users end up with failing workloads.
>
> To ensure a safe transition, this proposal suggests that if a potential failure of workloads is detected in release `n`, the operator moves into `Upgradeable=false`.
> The user would need to either resolve the potential failures or set the enforcing mode to `Privileged` for now in order to be able to upgrade.

---

Or perhaps they label the violating namespaces as privileged and allow the rest of the cluster to be restricted?

---

Yes, I will add this option.
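For illustration, the `Upgradeable=false` signal described above might surface as a ClusterOperator condition roughly like this; the reason and message wording are assumptions:

```yaml
status:
  conditions:
    - type: Upgradeable
      status: "False"
      reason: PSAViolatingNamespaces # assumed reason string
      message: >-
        Namespaces with workloads that would violate the restricted Pod
        Security Standard were detected; resolve them, label them privileged,
        or set the enforcement mode to Privileged before upgrading.
```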

---

> 1. Rolling out Pod Security Admission enforcement.
> 2. Minimize the risk of breakage for existing workloads.
> 3. Allow users to remain in “privileged” mode for a couple of releases.

---

Why are we putting an end date on this? That should be explained in the EP as this is not otherwise obvious (or link to somewhere where this has already been documented?)

---

> If a user encounters `status.violatingNamespaces` it is expected to:
>
> - resolve the violating Namespaces in order to be able to `Upgrade` or

---

A workflow description of how to resolve violating namespaces would be really useful. What are the options for getting a namespace out of violation?

---

> If a user encounters `status.violatingNamespaces` it is expected to:
>
> - resolve the violating Namespaces in order to be able to `Upgrade` or
> - set the `spec.enforcementMode=Privileged` and solve the violating Namespaces later.

---

When a user defers this, how do we know that they will fix it later? What guidance can we provide them?

---

Yes, we need to provide guidance to the customer; the question is what the most effective way would be. Blog entries? This enhancement?
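Whichever channel is chosen, `status.violatingNamespaces` would be the machine-readable starting point; a hypothetical shape, since the exact field layout isn't pinned down in this thread:

```yaml
status:
  enforcementMode: Privileged
  violatingNamespaces:
    - legacy-app        # placeholder names
    - third-party-agent
```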

---

> #### PodSecurityAdmissionLabelSynchronizationController
>
> The [PodSecurityAdmissionLabelSynchronizationController (PSA label syncer)](https://github.com/openshift/cluster-policy-controller/blob/master/pkg/psalabelsyncer/podsecurity_label_sync_controller.go) must watch the `status.enforcementMode` and the `OpenShiftPodSecurityAdmission` `FeatureGate`.
> If `spec.enforcementMode` is `Restricted` and the `FeatureGate` `OpenShiftPodSecurityAdmission` is enabled, the syncer will set the `pod-security.kubernetes.io/enforce` label.

---

This label goes on each namespace? Or?

---

On each namespace, except if it is:

- a run-level 0 namespace (`default`, `kube-system`, ...),
- an `openshift`-prefixed namespace, or
- a namespace which disabled the PSA label syncer (see the sketch below).

---

I will list those Namespaces.

---

> ### Fresh Installs
>
> Needs to be evaluated. The System Administrator needs to pre-configure the new API’s `spec.enforcementMode`, choosing whether the cluster will be `Privileged` or `Restricted` during a fresh install.

---

Why not have the installer generate the correct (`Restricted`) configuration to be installed? Then most people won't care, but if someone really wanted to, they could pause at the manifests stage and edit the resource to `Privileged` instead.

---

I think, if the default will be `Restricted`, it would default to enforcement, right? The question that is left open for me is: if a customer wants to default to `Privileged` on fresh installs, is there an option to do it? Is there some documentation with regard to that? Or is it common to let the SysAdmin change that config post-installation when setting up a cluster?
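For the manifests-stage idea above, a sketch of how that could look; the file name and the resource shape (matching the API sketch earlier in this thread) are assumptions:

```yaml
# After `openshift-install create manifests`, add a file such as
# manifests/99_psa-enforcement.yaml (name is an assumption):
apiVersion: config.openshift.io/v1alpha1 # assumed group/version
kind: PSAEnforcementConfig               # assumed kind name
metadata:
  name: cluster
spec:
  targetMode: Privileged
# ...then continue with `openshift-install create cluster`.
```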

---

> ### Enforce PSA label syncer, fine-grained
>
> It would be possible to enforce only the `pod-security.kubernetes.io/enforce` labels on Namespaces without enforcing it globally through the `PodSecurity` configuration given to the kube-apiserver.

---

How would this look in reality? Are there race issues with a solution that leverages this?

---

Oh, yes. We need to actually do this. We need a signal from the PSA label syncer that it is done before we start enforcing the global configuration. The question is whether it is better to spread this out over another release or whether we would like to introduce a signal of state from the PSA label syncer.
Why we need to coordinate this:

- Workloads would fail (if they need baseline/privileged) for the duration between the global configuration enforcing restricted and the label syncer setting an enforce label that is less restrictive than `restricted`.

---

> While the root causes need to be identified in some cases, the result of identifying a violating Namespace is understood.
>
> #### New SCC Annotation: `security.openshift.io/ValidatedSCCSubjectType`

---

TODO: fix casing to match change

---

- List which namespaces aren't managed by the PSA label syncer.
- Explain why we want to force restricted PSA enforcement eventually.
- Add a guide on how to handle violations.

---

**What**

A proposal for how to safely start enforcing Pod Security Admission.

**Why**

It isn't as trivial as we hoped for in 4.11 :)