
Conversation

@p0lyn0mial
Contributor

@p0lyn0mial p0lyn0mial commented Aug 29, 2025

  This test runs only the controller, which produces the oauth-server deployment.

  The purpose of this test is to show which input resources are required to run the operator and the controller.

  input-dir:
  - config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)
  - config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced
  - config.openshift.io/authentications: required by the controller, specifically the "cluster" object
  - operator.openshift.io/authentications: required by the controller, specifically the "cluster" object
  - openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer
  - core/nodes: (I think) required; otherwise the replica count on the deployment will be 0
  dependencies specified directly on the deployment:
  configmaps:
  - core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start
  - core/configmaps/openshift-authentication/v4-0-config-system-cliconfig: required, otherwise the pod won't start
  - core/configmaps/openshift-authentication/v4-0-config-system-trusted-ca-bundle: this one is marked as optional, the pod will start when it is missing
  secrets:
  - core/secrets/openshift-authentication/v4-0-config-system-session: required, otherwise the pod won't start
  - core/secrets/openshift-authentication/v4-0-config-system-serving-cert: required, otherwise the pod won't start
  - core/secrets/openshift-authentication/v4-0-config-system-router-certs: required, otherwise the pod won't start
  - core/secrets/openshift-authentication/v4-0-config-system-ocp-branding-template: required, otherwise the pod won't start
  
  
  optional resources:
  specified directly on the deployment:
  secrets:
  - core/secrets/openshift-authentication/v4-0-config-user-template-login: optional, custom template provided by a user
  - core/secrets/openshift-authentication/v4-0-config-user-template-provider-selection: as above
  - core/secrets/openshift-authentication/v4-0-config-user-template-error: as above
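
Laid out on disk, the input resources above correspond to a fixture tree roughly like the following (paths taken from the file list in the review summary; exact layout may differ):

```
input-dir/
├── cluster-scoped-resources/
│   ├── config.openshift.io/
│   │   ├── authentications.yaml
│   │   ├── clusterversions.yaml
│   │   └── infrastructures.yaml
│   ├── core/nodes/                     # three Node manifests
│   └── operator.openshift.io/authentications/cluster.yaml
└── namespaces/
    └── openshift-authentication/
        ├── core/secrets.yaml
        └── route.openshift.io/routes/oauth-openshift.yaml
```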

@coderabbitai

coderabbitai bot commented Aug 29, 2025

Walkthrough

Adds a new “oauth-server-creation-minimal” apply-configuration test scenario with input fixtures (cluster-scoped and namespaced), expected outputs (ApplyStatus, Create Event, Deployment), and controller results. Also updates minimal-cluster event metadata names and adjusts a CSR payload. Most changes are YAML test data additions.

Changes

Cohort / File(s) Summary
Scenario harness
test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml, .../expected-output/controller-results.yaml
Introduces a new test definition and expected controller results for the oauth-server creation minimal scenario.
Operator status ApplyStatus
.../expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml, .../5749-metadata-cluster.yaml, .../5749-options-cluster.yaml
Adds ApplyStatus action metadata, options, and status body for cluster-scoped Authentication (operator.openshift.io/v1).
Deployment expected output
.../expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/a3d6-metadata-oauth-openshift.yaml, .../a3d6-body-oauth-openshift.yaml
Adds expected Create action and Deployment manifest for oauth-openshift.
Operator event expected output
.../expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-metadata-*.yaml, .../6672-body-*.yaml
Adds expected Create event metadata and body for DeploymentCreated in openshift-authentication-operator namespace.
Scenario inputs: config.openshift.io
.../input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml, .../clusterversions.yaml, .../infrastructures.yaml
Adds AuthenticationList, ClusterVersionList, and InfrastructureList fixtures.
Scenario inputs: core nodes
.../input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231....yaml, .../ip-10-0-34-16....yaml, .../ip-10-0-62-215....yaml
Adds three Node manifests with full metadata, spec, and status.
Scenario inputs: namespace resources
.../input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml, .../core/secrets.yaml
Adds Route for oauth-openshift and SecretList (session, serving cert, router certs, branding templates).
Operator config (cluster-scoped)
.../input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml
Adds operator Authentication resource with observedConfig, conditions, generations, and status.
Minimal-cluster event tweaks
test-data/apply-configuration/overall/minimal-cluster/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/2e46-metadata-*.yaml, .../86b1-metadata-*.yaml
Updates event metadata.name values (IDs only) in minimal-cluster expected outputs.
CSR payload update
test-data/apply-configuration/overall/minimal-cluster/expected-output/UserWorkload/Create/cluster-scoped-resources/certificates.k8s.io/certificatesigningrequests/ffbf-body-system-COLON-openshift-COLON-openshift-authenticator-.yaml
Replaces spec.request base64 content; structure unchanged.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed The title clearly indicates the addition of an integration test for the oauth-server deployment controller, matching the main objective of the pull request without extraneous details, making it concise and specific to the primary change.
Description Check ✅ Passed The description outlines the purpose and scope of the integration test, enumerating required input resources, deployment dependencies, and optional resources, directly reflecting the changes introduced in the pull request.
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.
✨ Finishing touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment


@openshift-ci openshift-ci bot requested review from ibihim and liouk August 29, 2025 12:04
@p0lyn0mial
Contributor Author

/assign @benluddy @bertinatto


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-metadata-cluster.yaml (1)

1-10: Typo: "mame" should be "name".

YAML key should be name to match other metadata files and any schema consumers.

Apply:

 action: ApplyStatus
 controllerInstanceName: TODO-deploymentController
 fieldManager: OAuthServer-Workload
 generateName: ""
-mame: cluster
+name: cluster
 resourceType:
   Group: operator.openshift.io
   Resource: authentications
   Version: v1
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-metadata-oauth-openshift.yaml (1)

1-10: Typo: "mame" should be "name".

Fix to ensure consumers resolve metadata correctly.

 action: Create
 controllerInstanceName: TODO-deploymentController
 generateName: ""
-mame: oauth-openshift
+name: oauth-openshift
 namespace: openshift-authentication
 resourceType:
   Group: apps
   Resource: deployments
   Version: v1
🧹 Nitpick comments (8)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/infrastructures.yaml (1)

1-76: Not strictly required—consider dropping or minimizing for “minimal” scenario.

PR objectives note Infrastructure isn’t required for this controller; including it adds noise. If you keep it, prune managedFields/resourceVersion/uid/status to stable essentials (metadata.name + spec.platformSpec only).

Minimal variant:

 apiVersion: config.openshift.io/v1
 kind: InfrastructureList
 items:
 - apiVersion: config.openshift.io/v1
   kind: Infrastructure
   metadata:
-    creationTimestamp: "2025-08-01T18:29:04Z"
-    managedFields: {}
     name: cluster
-    resourceVersion: "536"
-    uid: 6acf325c-8f35-4030-83a2-24d69d22a0ad
   spec:
     platformSpec:
       type: AWS
-  status: {}
-metadata:
-  continue: ""
-  resourceVersion: "269027"
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/clusterversions.yaml (1)

1-169: Slim ClusterVersion to fields actually read by the operator.

This payload is very heavy (conditions/history/managedFields/versionHash/etc.). Trim to the minimal subset your controller needs (likely metadata.name and status.desired.version/image). Reduces churn and review noise.

Example minimal:

 apiVersion: config.openshift.io/v1
 kind: ClusterVersionList
 items:
 - apiVersion: config.openshift.io/v1
   kind: ClusterVersion
   metadata:
-    creationTimestamp: "2025-08-01T18:29:12Z"
-    managedFields: {}
     name: version
   status:
-    conditions: []
     desired:
       version: 4.20.0-0.ci-2025-08-01-181332-test-ci-op-gn2pz6q7-latest
-      image: registry.build06.ci.openshift.org/ci-op-gn2pz6q7/release@sha256:4cb12a9c6...
-  ...
-metadata:
-  continue: ""
-  resourceVersion: "269027"

If you’re unsure which fields are read, I can scan the repo for field accessors and generate a minimal schema.

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/controller-results.yaml (1)

1-82: Stabilize expectations to avoid drift as controllers evolve.

Instead of asserting a full list with many Skipped entries, consider checking that TODO-deploymentController is Succeeded and that all other controllers are not run (or match a small allowlist). This will reduce churn when new controllers are added.

test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml (1)

1-82: Trim mutable server fields in test inputs to reduce flakiness.

Unless the harness depends on them, drop managedFields, resourceVersion, uid, and timestamps from inputs; also consider omitting status (server-owned) from inputs. Keep only fields the controller actually reads. This will make fixtures smaller and more stable over time.

Would you like me to propose a minimized version of this fixture that retains only fields referenced by the controller?
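
The pruning suggested here can be automated before fixtures are committed. A minimal sketch (hypothetical helper, not part of this PR) that strips the server-managed fields from a parsed manifest:

```python
# Hypothetical fixture-minimizing helper: drops fields the API server owns
# so test inputs stay stable across regenerations.
SERVER_FIELDS = ("creationTimestamp", "managedFields", "resourceVersion", "uid", "generation")


def prune(manifest: dict, drop_status: bool = True) -> dict:
    """Return a copy of a manifest without volatile server-managed fields."""
    out = {k: v for k, v in manifest.items() if k != "status" or not drop_status}
    if "metadata" in out:
        meta = dict(out["metadata"])
        for field in SERVER_FIELDS:
            meta.pop(field, None)
        out["metadata"] = meta
    if "items" in out:  # recurse into *List kinds so item manifests are pruned too
        out["items"] = [prune(item, drop_status) for item in out["items"]]
    return out
```

Run over each YAML document before adding it under input-dir; what remains is exactly the set of fields the controller can observe deterministically.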

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml (1)

8-8: Grammar nit: add “are”.

Minor, but user-facing condition messages should be polished.

-    message: no oauth-openshift.openshift-authentication pods available on any node.
+    message: no oauth-openshift.openshift-authentication pods are available on any node.
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml (2)

786-787: Avoid server-populated fields in input fixtures.

resourceVersion and uid are server-managed and make fixtures less portable/deterministic. Prefer omitting them in inputs.

-  resourceVersion: "30999"
-  uid: fa91c2c0-b4a8-44f5-b2bf-1d34f1ffba2d

856-858: Use an empty object, not a stringified JSON, for identityProviders.

If the field is intended to be a map, keep it typed as such to avoid downstream unmarshalling surprises.

-      volumesToMount:
-        identityProviders: '{}'
+      volumesToMount:
+        identityProviders: {}
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-body-oauth-openshift.yaml (1)

95-99: Security context intentionally privileged — document or waive.

Static analysis flags privileged root. If this mirrors real product output, add a local Checkov/linters waiver for testdata with justification to avoid recurring noise.

I can draft a .checkov.yml to exclude test-data/** with a rationale.
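
Such a waiver could look roughly like this, assuming Checkov's config-file form of the `--skip-path` CLI option (verify the key name against the Checkov version used in CI):

```yaml
# .checkov.yml — suppress K8s hardening findings for recorded product output.
# test-data/** mirrors manifests the operator actually produces; findings
# there are not actionable in this repository.
skip-path:
  - test-data
```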

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 11e57aa and da84587.

📒 Files selected for processing (14)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-metadata-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-options-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-body-authentication-operator.18599d2230299800.64fe3b99.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-metadata-authentication-operator.18599d2230299800.64fe3b99.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-body-oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-metadata-oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/controller-results.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/clusterversions.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/infrastructures.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml (1 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-body-oauth-openshift.yaml

[MEDIUM] 1-193: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[HIGH] 1-193: Container should not be privileged

(CKV_K8S_16)


[MEDIUM] 1-193: Minimize the admission of root containers

(CKV_K8S_23)

🔇 Additional comments (7)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml (2)

46-57: Route spec looks correct for passthrough to oauth on 6443.


47-47: No expected-output contains the host string; no suffix adjustments needed.

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-body-authentication-operator.18599d2230299800.64fe3b99.yaml (1)

5-12: Event target vs message mismatch: confirm intended involvedObject.

Message says the oauth-openshift Deployment in openshift-authentication was created, but involvedObject points to Deployment/authentication-operator in openshift-authentication-operator. If the intent is to record on the created object, switch involvedObject accordingly; if you intentionally attach events to the operator Deployment, ignore this.

Option A (attach to created deployment):

 involvedObject:
-  kind: Deployment
-  name: authentication-operator
-  namespace: openshift-authentication-operator
+  kind: Deployment
+  name: oauth-openshift
+  namespace: openshift-authentication

Also applies to: 15-17

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-options-cluster.yaml (1)

1-2: Options look consistent; no issues.

fieldManager: OAuthServer-Workload and force: true align with SSA expectations and with the related metadata/body artifacts.

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml (1)

26-30: Validate lastGeneration value.

lastGeneration: 0 for oauth-openshift looks suspicious after a Create; typically you’d see 1+. Please confirm the controller populates 0 here; otherwise adjust to the Deployment’s observed generation to avoid false failures.

test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml (1)

782-785: OwnerReference UID must match input ClusterVersion.

Ensure the referenced ClusterVersion in input data uses uid fd412cff-9592-4cb6-b0e9-97a5c376f29e, or drop UIDs from inputs to avoid tight coupling.

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-body-oauth-openshift.yaml (1)

13-21: Replicas 0 conflicts with status “0/1” elsewhere.

Either set replicas: 1 here or fix the ApplyStatus message to 0/0 so the scenario is consistent.

-  replicas: 0
+  replicas: 1

If zero replicas are required for the harness, update the status text instead (see related comment).
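
The invariant being checked can be stated mechanically: the x/y numbers in the Progressing message derive from the Deployment's updated/available counts over spec.replicas. A simplified sketch (function name and exact wording are illustrative, not the operator's actual code):

```python
def progressing_message(name: str, namespace: str, updated: int, available: int, replicas: int) -> str:
    """Mimic the shape of the workload Progressing condition message."""
    return (
        f"deployment/{name}.{namespace}: {updated}/{replicas} pods have been "
        f"updated to the latest generation and {available}/{replicas} pods are available"
    )
```

With replicas set to 0 the denominators must read 0/0; the expected-output files pair `replicas: 0` with "0/1", which is the inconsistency flagged here.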

Comment on lines +17 to +21
message: 'deployment/oauth-openshift.openshift-authentication: 0/1 pods have been
updated to the latest generation and 0/1 pods are available'
reason: PodsUpdating
status: "True"
type: OAuthServerDeploymentProgressing

⚠️ Potential issue

Fix replica math in condition message (0/1 vs replicas: 0).

This status says “0/1 … 0/1 available” while the expected Deployment sets replicas to 0. Align this to 0/0 or change the Deployment to 1 replica; otherwise the test is self-inconsistent and brittle.

Apply one of:

-    message: 'deployment/oauth-openshift.openshift-authentication: 0/1 pods have been
-      updated to the latest generation and 0/1 pods are available'
+    message: 'deployment/oauth-openshift.openshift-authentication: 0/0 pods have been
+      updated to the latest generation and 0/0 pods are available'

…or switch the Deployment to replicas: 1 to match “0/1”.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
message: 'deployment/oauth-openshift.openshift-authentication: 0/1 pods have been
updated to the latest generation and 0/1 pods are available'
reason: PodsUpdating
status: "True"
type: OAuthServerDeploymentProgressing
message: 'deployment/oauth-openshift.openshift-authentication: 0/0 pods have been
updated to the latest generation and 0/0 pods are available'
reason: PodsUpdating
status: "True"
type: OAuthServerDeploymentProgressing
🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml
around lines 17-21, the condition message reports "0/1 pods" while the actual
Deployment replica count is 0; make them consistent by either (A) updating the
condition message to "0/0 pods have been updated to the latest generation and
0/0 pods are available" to reflect replicas: 0, or (B) change the expected
Deployment spec to replicas: 1 so the message "0/1 ... 0/1 available" is
correct—choose one approach and apply that single, consistent fix throughout the
expected output and any related test fixtures.

Comment on lines 32 to 33
name: oauth-openshift
namespace: openshift-authentication

⚠️ Potential issue

Remove name/namespace from Pod template metadata.

PodTemplateSpec metadata should not set name/namespace; many API servers reject or ignore them. Keeping these can cause drift vs. real controller output.

-      name: oauth-openshift
-      namespace: openshift-authentication
🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/e654-body-oauth-openshift.yaml
around lines 32-33, the PodTemplateSpec metadata contains name and namespace
fields which should be removed; edit the Deployment so spec.template.metadata
does not include name or namespace (leave only labels/annotations necessary for
selector matching), ensuring the top-level metadata still retains the
Deployment's name/namespace and selectors remain consistent.

Comment on lines +5 to +41
creationTimestamp: "2025-08-01T18:44:36Z"
labels:
app: oauth-openshift
managedFields:
- apiVersion: route.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:host: {}
f:port:
.: {}
f:targetPort: {}
f:tls:
.: {}
f:insecureEdgeTerminationPolicy: {}
f:termination: {}
f:to:
f:kind: {}
f:name: {}
f:weight: {}
f:wildcardPolicy: {}
manager: authentication-operator
operation: Update
time: "2025-08-01T18:44:36Z"
- apiVersion: route.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:ingress: {}
manager: openshift-router
operation: Update
subresource: status
time: "2025-08-01T18:44:36Z"

🛠️ Refactor suggestion

Prune server-populated/unstable fields to keep the input deterministic.

creationTimestamp/managedFields/resourceVersion/uid/status will churn and add noise to reviews.

Minimal edit:

 metadata:
-  creationTimestamp: "2025-08-01T18:44:36Z"
   labels:
     app: oauth-openshift
-  managedFields:
-  - apiVersion: route.openshift.io/v1
-    fieldsType: FieldsV1
-    fieldsV1: {}
-    manager: authentication-operator
-    operation: Update
-    time: "2025-08-01T18:44:36Z"
-  - apiVersion: route.openshift.io/v1
-    fieldsType: FieldsV1
-    fieldsV1: {}
-    manager: openshift-router
-    operation: Update
-    subresource: status
-    time: "2025-08-01T18:44:36Z"
   name: oauth-openshift
   namespace: openshift-authentication
-  resourceVersion: "20297"
-  uid: f3652ccc-9da2-49a3-8819-cf9b3cbc1fb2
 ...
-status:
-  ingress:
-  - conditions:
-    - lastTransitionTime: "2025-08-01T18:44:36Z"
-      status: "True"
-      type: Admitted
-    host: oauth-openshift.apps.ci-op-gn2pz6q7-69aee.XXXXXXXXXXXXXXXXXXXXXX
-    routerCanonicalHostname: router-default.apps.ci-op-gn2pz6q7-69aee.XXXXXXXXXXXXXXXXXXXXXX
-    routerName: default
-    wildcardPolicy: None

Also applies to: 44-45, 58-67

🤖 Prompt for AI Agents
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml
lines 5-41: remove server-populated/unstable fields to make the fixture
deterministic — specifically delete creationTimestamp, managedFields,
resourceVersion, uid, and status entries (the same pruning should be applied
around lines 44-45 and 58-67 as noted); keep only the static metadata (name,
labels, annotations you control) and spec fields required for tests so the file
no longer contains runtime-churn fields.

binaryName: ./authentication-operator
testName: creation of the oauth-server
controllers:
- "TODO-deploymentController"

⚠️ Potential issue

Controller key looks placeholder.

“TODO-deploymentController” reads like a stub. Ensure this matches the actual controller entry name used by the harness; otherwise the test won’t execute anything.

-  - "TODO-deploymentController"
+  - "oauth-server-deployment-controller"  # example; replace with real key used by the binary
🤖 Prompt for AI Agents
In test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml
around line 4, the controller key "TODO-deploymentController" is a placeholder
and must be replaced with the actual controller entry name used by the test
harness; update this value to the real controller identifier (exact string used
in the harness/config), verify that name exists in the harness configuration or
registry, and run the test to confirm the harness picks up and executes the
controller.

Comment on lines 10 to 15
input-dir:
- config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)

- config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced

- config.openshift.io/authentications: required by the controller, specifically the "cluster" object

- operator.openshift.io/authentications: required by the controller, specifically the "cluster" object

- openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer

🛠️ Refactor suggestion

input-dir is descriptive text, not paths.

The harness typically expects a directory path or list of paths; these bullets will parse as a map of strings and likely break discovery. Move the prose to a notes field and point input-dir at the real fixture root.

-  input-dir:
-  - config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)
-  - config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced
-  - config.openshift.io/authentications: required by the controller, specifically the "cluster" object
-  - operator.openshift.io/authentications: required by the controller, specifically the "cluster" object
-  - openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer
+  input-dir: ./input-dir
+  notes:
+  - config.openshift.io/clusterversions: required to start the operator
+  - config.openshift.io/infrastructures: optional; if missing SNO is assumed (warning)
+  - config.openshift.io/authentications: required ("cluster")
+  - operator.openshift.io/authentications: required ("cluster")
+  - openshift-authentication/route.openshift.io/routes/oauth-openshift: required (single-item informer)
📝 Committable suggestion


Suggested change
input-dir:
- config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)
- config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced
- config.openshift.io/authentications: required by the controller, specifically the "cluster" object
- operator.openshift.io/authentications: required by the controller, specifically the "cluster" object
- openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer
input-dir: ./input-dir
notes:
- config.openshift.io/clusterversions: required to start the operator
- config.openshift.io/infrastructures: optional; if missing, SNO is assumed (warning)
- config.openshift.io/authentications: required ("cluster")
- operator.openshift.io/authentications: required ("cluster")
- openshift-authentication/route.openshift.io/routes/oauth-openshift: required (single-item informer)
🤖 Prompt for AI Agents
In test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml
around lines 10–19, the input-dir field contains descriptive bullets instead of
actual filesystem path(s); move that prose into a new notes field and replace
input-dir with the real fixture directory path or list of paths that the test
harness should discover (i.e., point input-dir at the test fixture root(s));
ensure the notes field preserves the original explanatory bullets and that
input-dir contains only valid path strings.

@p0lyn0mial p0lyn0mial force-pushed the integration-test-creation-oauth-server branch 3 times, most recently from 3f6961e to f288aed Compare August 29, 2025 15:11

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml (1)

219-501: Strip status block from node fixture
Remove the entire status: section (including addresses, allocatable, capacity, conditions, daemonEndpoints, images, nodeInfo, runtimeHandlers, etc.) from this test fixture; client‐created YAML should only define metadata and spec to avoid brittle, status‐dependent tests.

♻️ Duplicate comments (2)
test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml (2)

4-4: Replace placeholder controller key.

“TODO-deploymentController” must match the real controller identifier registered in the binary.

-  - "TODO-deploymentController"
+  - "oauth-server-deployment-controller"  # exact string as in the controller registry

10-16: “input-dir” is prose, not a path; move to notes and point to the real directory.

As written it won’t be parsed by the harness.

-description: >
+description: >
   This test runs only the controller, which produces the oauth-server deployment.

   The purpose of this test is to show which input resources are required to run the operator and the controller.

-  input-dir:
-  - config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)
-  - config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced
-  - config.openshift.io/authentications: required by the controller, specifically the "cluster" object
-  - operator.openshift.io/authentications: required by the controller, specifically the "cluster" object
-  - openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer
-  - core/nodes: (i think) required, otherwise the replica count on the deployment will be 0
+input-dir: ./input-dir
+notes:
+  - config.openshift.io/clusterversions: required to start the operator (CreateOperatorStarter/prepareOauthOperator)
+  - config.openshift.io/infrastructures: optional; if missing, SNO is assumed (warning)
+  - config.openshift.io/authentications: required ("cluster" object)
+  - operator.openshift.io/authentications: required ("cluster" object)
+  - openshift-authentication/route.openshift.io/routes/oauth-openshift: required (single-item informer)
+  - core/nodes: required; otherwise the deployment replica count will be 0
🧹 Nitpick comments (10)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml (2)

36-52: Avoid deprecated label keys in fixture unless explicitly required.

beta.kubernetes.io/* and failure-domain.beta.kubernetes.io/* are deprecated. Keep only labels the controller reads (master/control-plane role).

-    beta.kubernetes.io/arch: amd64
-    beta.kubernetes.io/instance-type: m6a.xlarge
-    beta.kubernetes.io/os: linux
-    failure-domain.beta.kubernetes.io/region: us-west-2
-    failure-domain.beta.kubernetes.io/zone: us-west-2d
+    # minimal labels needed by the controller
     kubernetes.io/arch: amd64
     kubernetes.io/hostname: ip-10-0-114-231.us-west-2.compute.internal
     kubernetes.io/os: linux
     node-role.kubernetes.io/control-plane: ""
     node-role.kubernetes.io/master: ""

486-498: Empty runtime handler name looks odd.

A runtimeHandlers entry with name: "" may be legit for “default” but is unnecessary here; recommend removing all runtimeHandlers from the fixture.

test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml (2)

36-52: Minimize labels to what the controller consumes.

Keep role labels; drop deprecated beta/failure-domain labels unless required by the test.


489-497: Remove empty-name runtime handler.

Same rationale as the other node file.

test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml (1)

19-19: Remove trailing whitespace.

YAMLlint flags trailing spaces on these lines.

-  - core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start␠
+  - core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start
...
-
-
+
+

Also applies to: 27-28
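
As an alternative to hand-editing each flagged line, trailing whitespace can be stripped in one pass. A sketch (the `/tmp/test.yaml` path is illustrative; point the command at the real fixture):

```shell
# Create a sample file with trailing spaces/tabs, then strip them in place.
# Note: this is GNU sed syntax; BSD/macOS sed needs `sed -i ''`.
printf 'key: value   \nother: 2\t\n' > /tmp/test.yaml
sed -i 's/[[:space:]]*$//' /tmp/test.yaml
cat /tmp/test.yaml
```

Running yamllint afterwards should confirm the trailing-spaces errors are gone.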

test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/d461-body-oauth-openshift.yaml (1)

67-76: Optional: add startupProbe to reduce false positives on cold starts.

Only if the rendered object already includes it in current releases; otherwise keep as-is to match reality.

Also applies to: 82-91

test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml (4)

1-3: Mark fixtures as non-production and test-only.

Add a brief YAML comment header clarifying these are synthetic test fixtures and contain no live credentials to preempt secret scanners and reviewers’ concerns.

Apply at file top:

 --- 
+## TEST FIXTURES ONLY
+# Synthetic secrets for integration testing the oauth-server Deployment controller.
+# Contains no production credentials. Safe to publish.
 apiVersion: v1
 items:

35-35: Use a tiny, clearly fake PEM for tls.key (optional).

Current value decodes to “1679 bytes long”. Prefer a minimal fake PEM to avoid confusion and satisfy any downstream readers expecting PEM structure.

If helpful, I can provide pre-encoded minimal PEM stubs for tls.crt/tls.key.
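
If it helps, a placeholder with real PEM structure can be generated mechanically rather than hand-crafted. A sketch using only the standard library (the `fake_pem` helper and its payload text are hypothetical, not part of the fixtures):

```python
import base64
import textwrap

def fake_pem(kind: str, payload: str = "TEST FIXTURE ONLY - NOT A REAL KEY") -> str:
    """Build an obviously fake but structurally valid PEM block."""
    body = base64.b64encode(payload.encode()).decode()
    wrapped = "\n".join(textwrap.wrap(body, 64))  # PEM wraps at 64 chars
    return f"-----BEGIN {kind}-----\n{wrapped}\n-----END {kind}-----\n"

# Secret `data` values are base64 of the file contents, so encode once more:
tls_key_b64 = base64.b64encode(fake_pem("RSA PRIVATE KEY").encode()).decode()
print(tls_key_b64)
```

The resulting base64 string can be dropped into the Secret's `data.tls.key`, keeping the fixture readable while still decoding to something PEM-shaped.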


85-105: Avoid environment-specific router cert key name.

Key “apps.ci-op-gn2pz6q7-69aee.XXXXXXXXXXXXXXXXXXXXXX” looks CI/environment-bound. Use a generic, deterministic SNI key to keep fixtures portable (e.g., router-default.apps.example.test).

Proposed change:

   data:
-    apps.ci-op-gn2pz6q7-69aee.XXXXXXXXXXXXXXXXXXXXXX: NDEwNCBieXRlcyBsb25n
+    router-default.apps.example.test: NDEwNCBieXRlcyBsb25n
   kind: Secret
   metadata:
-    creationTimestamp: "2025-08-01T18:34:45Z"
-    managedFields: ...
     name: v4-0-config-system-router-certs
     namespace: openshift-authentication
-    resourceVersion: "8802"
-    uid: da379869-e11c-4267-a98d-26a516b0c511
   type: Opaque

Also add labels for consistency:

   metadata:
+    labels:
+      app: oauth-openshift

1-135: Silence secret scanners for known-safe test data.

Gitleaks flags are expected here. Prefer a repo-level allowlist for this path to avoid alert fatigue.

Add or extend a .gitleaks.toml allowlist:

[[allowlist.paths]]
description = "test fixtures: oauth-server minimal input secrets"
regex = '''test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets\.yaml'''

I can draft the PR snippet if you want.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3f6961e and f288aed.

📒 Files selected for processing (18)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-metadata-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-options-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-body-authentication-operator.18599d2230299800.64fe3b99.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-metadata-authentication-operator.18599d2230299800.64fe3b99.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/d461-body-oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/d461-metadata-oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/controller-results.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/clusterversions.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/infrastructures.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-62-215.us-west-2.compute.internal.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-metadata-authentication-operator.18599d2230299800.64fe3b99.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-62-215.us-west-2.compute.internal.yaml
🚧 Files skipped from review as they are similar to previous changes (10)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-body-authentication-operator.18599d2230299800.64fe3b99.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-options-cluster.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/clusterversions.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/controller-results.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/infrastructures.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-metadata-cluster.yaml
🧰 Additional context used
🪛 Checkov (3.2.334)
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/d461-body-oauth-openshift.yaml

[MEDIUM] 1-193: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[HIGH] 1-193: Container should not be privileged

(CKV_K8S_16)


[MEDIUM] 1-193: Minimize the admission of root containers

(CKV_K8S_23)

🪛 Gitleaks (8.27.2)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml

5-7: Possible Kubernetes Secret detected, posing a risk of leaking credentials/tokens from your deployments

(kubernetes-secret-yaml)


33-36: Possible Kubernetes Secret detected, posing a risk of leaking credentials/tokens from your deployments

(kubernetes-secret-yaml)


85-87: Possible Kubernetes Secret detected, posing a risk of leaking credentials/tokens from your deployments

(kubernetes-secret-yaml)


107-111: Possible Kubernetes Secret detected, posing a risk of leaking credentials/tokens from your deployments

(kubernetes-secret-yaml)


35-35: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🪛 YAMLlint (1.37.1)
test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml

[error] 19-19: trailing spaces

(trailing-spaces)


[error] 27-27: trailing spaces

(trailing-spaces)


[error] 28-28: trailing spaces

(trailing-spaces)

🔇 Additional comments (5)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml (2)

212-217: Keep taint minimal and accurate.

The NoSchedule master taint is sufficient for scheduling semantics in this test. Confirm no additional taints are required by the controller logic.


217-499: Trim status for client-create compatibility
Status is a server-only subresource in Kubernetes—remove the entire status block (addresses, conditions, images, runtimeHandlers, etc.) unless your test harness actually loads persisted objects via storage. If you need any fields for replica calculations, only keep metadata.labels and spec.taints.

-status:
-  addresses:
-    ...
-  runtimeHandlers:
-    ...
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/d461-body-oauth-openshift.yaml (3)

135-143: Node selector uses master label.

If the controller renders control-plane on newer clusters, ensure the expected output matches actual rendering; otherwise the test will fail across versions.


152-193: Secrets vs ConfigMaps alignment looks correct.

Optional user template secrets are marked optional; trusted CA bundle ConfigMap is optional; matches the description.


24-27: Ignore operator.openshift.io hash annotations in test assertions
Hash values (rvs-hash, spec-hash) vary between runs. Add an options file next to your expected-output YAML to skip these annotations in comparisons:

# d461-options-oauth-openshift.yaml
ignorePaths:
  - /metadata/annotations/operator.openshift.io/rvs-hash
  - /metadata/annotations/operator.openshift.io/spec-hash
  - /spec/template/metadata/annotations/operator.openshift.io/rvs-hash

Comment on lines 2 to 4
controllerInstanceName: TODO-deploymentController
generateName: ""
mame: oauth-openshift

⚠️ Potential issue

Fix controller name and metadata key typo.

  • controllerInstanceName is a placeholder.
  • mame is misspelled; should be name.
-action: Create
-controllerInstanceName: TODO-deploymentController
-generateName: ""
-mame: oauth-openshift
+action: Create
+controllerInstanceName: oauth-server-deployment-controller  # replace with actual identifier used by the harness
+generateName: ""
+name: oauth-openshift

Also ensure this controller identifier matches test.yaml.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-controllerInstanceName: TODO-deploymentController
-generateName: ""
-mame: oauth-openshift
+action: Create
+controllerInstanceName: oauth-server-deployment-controller  # replace with actual identifier used by the harness
+generateName: ""
+name: oauth-openshift
🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/d461-metadata-oauth-openshift.yaml
lines 2-4, replace the placeholder controllerInstanceName
TODO-deploymentController with the actual controller identifier used in
test.yaml, correct the misspelled metadata key "mame" to "name", and verify the
controllerInstanceName value exactly matches the identifier in test.yaml so
tests reference the same controller.

machineconfiguration.openshift.io/reason: ""
machineconfiguration.openshift.io/state: Done
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2025-08-01T18:32:06Z"

⚠️ Potential issue

Drop server-populated metadata for fixture stability.

Remove creationTimestamp, resourceVersion, and uid to avoid apply failures and reduce flakiness.

-  creationTimestamp: "2025-08-01T18:32:06Z"
...
-  resourceVersion: "268883"
-  uid: f40050b6-5eb7-411f-84bf-bf8b5ad0c66f

Also applies to: 210-211

🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml
around line 35 (and also remove at lines 210-211), the fixture contains
server-populated fields that cause flakiness—remove the creationTimestamp,
resourceVersion, and uid keys from the YAML so the fixture only contains
user-specified metadata; update the file by deleting those three fields wherever
they appear (including lines 210-211) to ensure stable, idempotent applies.

Comment on lines +53 to +181
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:volumes.kubernetes.io/controller-managed-attach-detach: {}
f:labels:
.: {}
f:beta.kubernetes.io/arch: {}
f:beta.kubernetes.io/os: {}
f:kubernetes.io/arch: {}
f:kubernetes.io/hostname: {}
f:kubernetes.io/os: {}
f:node-role.kubernetes.io/control-plane: {}
f:node-role.kubernetes.io/master: {}
f:node.openshift.io/os_id: {}
f:spec:
f:providerID: {}
f:status:
f:addresses:
k:{"type":"InternalDNS"}:
.: {}
f:address: {}
f:type: {}
manager: ancient-changes
operation: Update
time: "2025-08-01T18:32:37Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:beta.kubernetes.io/instance-type: {}
f:failure-domain.beta.kubernetes.io/region: {}
f:failure-domain.beta.kubernetes.io/zone: {}
f:node.kubernetes.io/instance-type: {}
f:topology.k8s.aws/zone-id: {}
f:topology.kubernetes.io/region: {}
f:topology.kubernetes.io/zone: {}
manager: aws-cloud-controller-manager
operation: Update
time: "2025-08-01T18:32:37Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cluster.x-k8s.io/cluster-name: {}
f:cluster.x-k8s.io/cluster-namespace: {}
f:cluster.x-k8s.io/labels-from-machine: {}
f:cluster.x-k8s.io/machine: {}
manager: cluster-api
operation: Update
time: "2025-08-01T18:33:29Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:taints: {}
manager: kube-controller-manager
operation: Update
time: "2025-08-01T18:33:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.network.openshift.io/egress-ipconfig: {}
manager: cloud-network-config-controller
operation: Update
time: "2025-08-01T18:35:06Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:machineconfiguration.openshift.io/controlPlaneTopology: {}
f:machineconfiguration.openshift.io/lastAppliedDrain: {}
manager: machine-config-controller
operation: Update
time: "2025-08-01T18:35:47Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:machine.openshift.io/machine: {}
manager: nodelink-controller
operation: Update
time: "2025-08-01T18:36:16Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:k8s.ovn.org/host-cidrs: {}
f:k8s.ovn.org/l3-gateway-config: {}
f:k8s.ovn.org/node-chassis-id: {}
f:k8s.ovn.org/node-encap-ips: {}
f:k8s.ovn.org/node-gateway-router-lrp-ifaddrs: {}
f:k8s.ovn.org/node-id: {}
f:k8s.ovn.org/node-masquerade-subnet: {}
f:k8s.ovn.org/node-primary-ifaddr: {}
f:k8s.ovn.org/node-subnets: {}
f:k8s.ovn.org/node-transit-switch-port-ifaddr: {}
f:k8s.ovn.org/remote-zone-migrated: {}
f:k8s.ovn.org/zone-name: {}
manager: ip-10-0-114-231
operation: Update
subresource: status
time: "2025-08-01T19:56:47Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:machineconfiguration.openshift.io/currentConfig: {}
f:machineconfiguration.openshift.io/desiredConfig: {}
f:machineconfiguration.openshift.io/desiredDrain: {}
f:machineconfiguration.openshift.io/lastObservedServerCAAnnotation: {}
f:machineconfiguration.openshift.io/lastSyncedControllerConfigResourceVersion: {}
f:machineconfiguration.openshift.io/reason: {}
f:machineconfiguration.openshift.io/state: {}
manager: machine-config-daemon
operation: Update
time: "2025-08-01T19:58:10Z"
- apiVersion: v1

🛠️ Refactor suggestion

Remove managedFields.

managedFields are server-maintained; keeping them in input can cause rejection or noise.

-  managedFields:
-  - apiVersion: v1
-    fieldsType: FieldsV1
-    fieldsV1:
-      ...
-    time: "2025-08-01T19:58:10Z"
📝 Committable suggestion


Suggested change
# (lines 53–181—the entire managedFields section—have been removed)
🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml
(lines 53-181) the managedFields block is present but must be removed because it
is server-maintained; delete the entire managedFields key and its nested entries
from the node YAML so only user-controlled fields remain (do not preserve
manager/operation/time/subresource entries), and ensure any automated test
fixtures or apply tooling that generates this file strips server-side fields
before committing.

machineconfiguration.openshift.io/reason: ""
machineconfiguration.openshift.io/state: Done
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2025-08-01T18:32:30Z"

⚠️ Potential issue

Drop server-populated metadata for fixture stability.

Remove creationTimestamp, resourceVersion, and uid.

-  creationTimestamp: "2025-08-01T18:32:30Z"
...
-  resourceVersion: "269278"
-  uid: 2fea5bbd-9f90-474e-a2fd-c8b76fdb60f5

Also applies to: 211-212

🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml
around line 35 (and also lines 211-212), remove server-populated metadata fields
to stabilize fixtures: delete creationTimestamp, resourceVersion, and uid
entries from the YAML so the file contains only client-managed fields; ensure
indentation/format remains valid after removal and run the test/fixture
validation to confirm no other server-only fields remain.

Comment on lines +53 to +209
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:volumes.kubernetes.io/controller-managed-attach-detach: {}
f:labels:
.: {}
f:beta.kubernetes.io/arch: {}
f:beta.kubernetes.io/instance-type: {}
f:beta.kubernetes.io/os: {}
f:failure-domain.beta.kubernetes.io/region: {}
f:failure-domain.beta.kubernetes.io/zone: {}
f:kubernetes.io/arch: {}
f:kubernetes.io/hostname: {}
f:kubernetes.io/os: {}
f:node-role.kubernetes.io/control-plane: {}
f:node-role.kubernetes.io/master: {}
f:node.kubernetes.io/instance-type: {}
f:node.openshift.io/os_id: {}
f:topology.k8s.aws/zone-id: {}
f:topology.kubernetes.io/region: {}
f:topology.kubernetes.io/zone: {}
f:spec:
f:providerID: {}
f:status:
f:addresses:
k:{"type":"InternalDNS"}:
.: {}
f:address: {}
f:type: {}
manager: ancient-changes
operation: Update
time: "2025-08-01T18:32:37Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cluster.x-k8s.io/cluster-name: {}
f:cluster.x-k8s.io/cluster-namespace: {}
f:cluster.x-k8s.io/labels-from-machine: {}
f:cluster.x-k8s.io/machine: {}
manager: cluster-api
operation: Update
time: "2025-08-01T18:33:29Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:k8s.ovn.org/host-cidrs: {}
f:k8s.ovn.org/l3-gateway-config: {}
f:k8s.ovn.org/node-chassis-id: {}
f:k8s.ovn.org/node-encap-ips: {}
f:k8s.ovn.org/node-masquerade-subnet: {}
f:k8s.ovn.org/node-primary-ifaddr: {}
f:k8s.ovn.org/remote-zone-migrated: {}
f:k8s.ovn.org/zone-name: {}
manager: ip-10-0-34-16
operation: Update
subresource: status
time: "2025-08-01T18:33:55Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:taints: {}
manager: kube-controller-manager
operation: Update
time: "2025-08-01T18:33:58Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:cloud.network.openshift.io/egress-ipconfig: {}
manager: cloud-network-config-controller
operation: Update
time: "2025-08-01T18:35:05Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:machine.openshift.io/machine: {}
manager: nodelink-controller
operation: Update
time: "2025-08-01T18:36:16Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:machineconfiguration.openshift.io/controlPlaneTopology: {}
f:machineconfiguration.openshift.io/lastAppliedDrain: {}
manager: machine-config-controller
operation: Update
time: "2025-08-01T18:36:44Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:k8s.ovn.org/node-gateway-router-lrp-ifaddrs: {}
f:k8s.ovn.org/node-id: {}
f:k8s.ovn.org/node-subnets: {}
f:k8s.ovn.org/node-transit-switch-port-ifaddr: {}
manager: ip-10-0-114-231
operation: Update
subresource: status
time: "2025-08-01T19:56:47Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:machineconfiguration.openshift.io/currentConfig: {}
f:machineconfiguration.openshift.io/desiredConfig: {}
f:machineconfiguration.openshift.io/desiredDrain: {}
f:machineconfiguration.openshift.io/lastObservedServerCAAnnotation: {}
f:machineconfiguration.openshift.io/lastSyncedControllerConfigResourceVersion: {}
f:machineconfiguration.openshift.io/reason: {}
f:machineconfiguration.openshift.io/state: {}
manager: machine-config-daemon
operation: Update
time: "2025-08-01T19:58:10Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:csi.volume.kubernetes.io/nodeid: {}
f:labels:
f:topology.ebs.csi.aws.com/zone: {}
f:status:
f:conditions:
k:{"type":"DiskPressure"}:
f:lastHeartbeatTime: {}
k:{"type":"MemoryPressure"}:
f:lastHeartbeatTime: {}
k:{"type":"PIDPressure"}:
f:lastHeartbeatTime: {}
k:{"type":"Ready"}:
f:lastHeartbeatTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:images: {}
f:runtimeHandlers: {}
manager: kubelet
operation: Update
subresource: status
time: "2025-08-01T20:05:08Z"

🛠️ Refactor suggestion

Remove managedFields blocks.

Server-owned; not needed in inputs.

-  managedFields:
-  - apiVersion: v1
-    ...
-  - apiVersion: v1
-    ...

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml
around lines 53-209, the file contains server-owned managedFields blocks that
should be removed; delete the top-level managedFields key and its entire array
(all listed manager entries and nested fields) from the YAML document, ensure
the resulting YAML remains valid (correct indentation and no leftover keys), and
run a YAML/linter validation to confirm the file parses cleanly.
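The cleanup the prompt describes can be sketched in plain Python, operating on the object as a dict (what a YAML parser would return). This is a minimal illustration, not the actual tooling used in the repository; the field list is the set of server-populated keys named in the review.

```python
# Drop server-owned metadata (managedFields, creationTimestamp,
# resourceVersion, uid) from a parsed Kubernetes object so only
# intent-bearing fields remain in a test fixture.

SERVER_MANAGED = ("managedFields", "creationTimestamp", "resourceVersion", "uid")


def strip_server_metadata(obj: dict) -> dict:
    """Return a copy of a Kubernetes object without server-populated metadata."""
    cleaned = dict(obj)
    metadata = dict(cleaned.get("metadata", {}))
    for key in SERVER_MANAGED:
        metadata.pop(key, None)  # absent keys are simply skipped
    cleaned["metadata"] = metadata
    return cleaned


if __name__ == "__main__":
    secret = {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {
            "name": "v4-0-config-system-session",
            "namespace": "openshift-authentication",
            "labels": {"app": "oauth-openshift"},
            "creationTimestamp": "2025-08-01T18:44:36Z",
            "managedFields": [{"manager": "authentication-operator"}],
            "resourceVersion": "20293",
            "uid": "1ab2c769-1f5b-4442-8652-ed0ebbded1a7",
        },
        "type": "Opaque",
    }
    print(sorted(strip_server_metadata(secret)["metadata"]))
    # ['labels', 'name', 'namespace']
```

Run across every document in the fixture directory, this leaves only stable fields, so re-captured fixtures do not churn on timestamps or UIDs.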

Comment on lines +8 to +31
metadata:
  creationTimestamp: "2025-08-01T18:44:36Z"
  labels:
    app: oauth-openshift
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:v4-0-config-system-session: {}
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:type: {}
    manager: authentication-operator
    operation: Update
    time: "2025-08-01T18:44:36Z"
  name: v4-0-config-system-session
  namespace: openshift-authentication
  resourceVersion: "20293"
  uid: 1ab2c769-1f5b-4442-8652-ed0ebbded1a7
type: Opaque

🛠️ Refactor suggestion

Strip server-managed metadata from fixtures for stability.

creationTimestamp, managedFields, resourceVersion, and uid are server-populated and create churn without value in fixtures. Remove them to keep inputs minimal and reduce flakiness.

Apply this pattern (example for the first Secret):

   kind: Secret
   metadata:
-    creationTimestamp: "2025-08-01T18:44:36Z"
     labels:
       app: oauth-openshift
-    managedFields:
-      - apiVersion: v1
-        fieldsType: FieldsV1
-        fieldsV1:
-          f:data:
-            .: {}
-            f:v4-0-config-system-session: {}
-          f:metadata:
-            f:labels:
-              .: {}
-              f:app: {}
-          f:type: {}
-        manager: authentication-operator
-        operation: Update
-        time: "2025-08-01T18:44:36Z"
     name: v4-0-config-system-session
     namespace: openshift-authentication
-    resourceVersion: "20293"
-    uid: 1ab2c769-1f5b-4442-8652-ed0ebbded1a7
   type: Opaque

Please apply similarly to the remaining three Secrets.

🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml
around lines 8 to 31, the Secret metadata contains server-managed fields
(creationTimestamp, managedFields, resourceVersion, uid) that should be removed
from fixtures; delete those fields from the metadata block of this Secret and
apply the same removal to the other three Secret objects in this file so only
stable, intent-bearing fields (name, namespace, labels, type, data) remain.

Comment on lines +33 to +83
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVGRENDQXZ5Z0F3SUJBZ0lJZUIwRFlEaVV4Qm93RFFZSktvWklodmNOQVFFTEJRQXdOakUwTURJR0ExVUUKQXd3cmIzQmxibk5vYVdaMExYTmxjblpwWTJVdGMyVnlkbWx1WnkxemFXZHVaWEpBTVRjMU5EQTNNekkxTlRBZQpGdzB5TlRBNE1ERXhPRE0wTXpkYUZ3MHlOekE0TURFeE9ETTBNemhhTURjeE5UQXpCZ05WQkFNVExHOWhkWFJvCkxXOXdaVzV6YUdsbWRDNXZjR1Z1YzJocFpuUXRZWFYwYUdWdWRHbGpZWFJwYjI0dWMzWmpNSUlCSWpBTkJna3EKaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUE0T1dwQXowL015aGh2NC9OMytmeXcwNklES0FBZ3V3UgpnOTBoZ0NxMFpjWTJPdW12SXlxdDFoM0t2ZlFWdkdwMU54L0JCQkc2SzYwRE1WRUJqRnhNZlhLVTM1UTR5OURtCnRCVTF5eHJnVFR3bDNEajZ2cm9vTGx5dUFKRG1xQW9CbG1LeHFSS3NTVnEwa3FmTDRVT0UrZHljWUJJSUZhVWoKRnBIK203VHNSYUhBa2ljNzFoTDBSeklsMUNWVmZsTUppNUFkYitkY2c3TzQ2VFVYYmE3S0o0TThzeFhvUzA5cgpEZ1NtZWpCUHBhc1pIVnM2UloyOVFBeUpSeDQ2Q0pMWWNncHlkakp2cjZoSnljM09GZXQ3ZUNzVVRvZjUyeGlSCmt0aUhidGF1cWVKTklQY2J3czg0UytOb3h0aEY1TXp4QlJFTGc4Y1F5eEF3eXp3RkVsYkR1UUlEQVFBQm80SUIKSXpDQ0FSOHdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUJNQXdHQTFVZApFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTWJSVzRURUpvejMyYkdDQlFlZDlZNTZQZTJwTUI4R0ExVWRJd1FZCk1CYUFGTjgzR253MEZCcHRqakdSblpDNW1CRmpiVFpXTUhNR0ExVWRFUVJzTUdxQ0xHOWhkWFJvTFc5d1pXNXoKYUdsbWRDNXZjR1Z1YzJocFpuUXRZWFYwYUdWdWRHbGpZWFJwYjI0dWMzWmpnanB2WVhWMGFDMXZjR1Z1YzJocApablF1YjNCbGJuTm9hV1owTFdGMWRHaGxiblJwWTJGMGFXOXVMbk4yWXk1amJIVnpkR1Z5TG14dlkyRnNNRFVHCkN5c0dBUVFCa2dnUlpBSUJCQ1lUSkRjMU1XWmtOVEF3TFRWak4yRXROR0kxT1MwNFl6WTBMV1ZoT0dGaFpEVTQKTnprMU5EQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHclpQMGR2L1Y0QlVVSkErU2NrVTBFNittQ0tiQ01oTwpvY2IvYTV6S3hkQkVsL0pSWUg0UFNmRFpPUmppNDVTQTBBZisvaXlZVWdaaXdBb3V6VjZYOC9NMFZUNTIwNUlMCmczcExaSjY3aGhQUytNdG1nU0xROW9JV0gwcHMrQVFpc2krTzFVeHVTMFVaNGwwU3dObFJDYk5Lclc2NTdLU0EKTWI3S1BNaG90VXRRcjdiZXZKS3k3MHFsNzhDZmMrSjcvdngySlhsbThZSE9mMUxCNERGMnBSRW02THJpVGhLMQpkazZQaWNicHZLYXZuRWJJeDhlQ3lyTDdBNFg0ZHFpTXBYalU5VE5qZFVjWUN3eGtncEJ3dys3Yk5DSDhLTlFaCk9oWkF2NzNMUGI5R0JyS1RadGxma2dkM01tUWNBMWVHRW4zb0RIWjd0US9lUFdjOXYzMG8wdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZ
JQ0FURS0tLS0tCk1JSURVVENDQWptZ0F3SUJBZ0lJUElnQk9nNnlLSFl3RFFZSktvWklodmNOQVFFTEJRQXdOakUwTURJR0ExVUUKQXd3cmIzQmxibk5vYVdaMExYTmxjblpwWTJVdGMyVnlkbWx1WnkxemFXZHVaWEpBTVRjMU5EQTNNekkxTlRBZQpGdzB5TlRBNE1ERXhPRE0wTVRWYUZ3MHlOekE1TXpBeE9ETTBNVFphTURZeE5EQXlCZ05WQkFNTUsyOXdaVzV6CmFHbG1kQzF6WlhKMmFXTmxMWE5sY25acGJtY3RjMmxuYm1WeVFERTNOVFF3TnpNeU5UVXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRGhCTHM4aHN0ckgxYkVWOFZUK1Z3L2hKT1RUay9SUmhOZQprM2ptMUR5elFJenZIZllVS1lpZzBJeThHTUZ6a0c1SDdDQUllK2VaN1pWcmZWUTlPV2FIUWJHK0xsTDZkaDFNCnlnc1l6RlhDbTZ3c0tndHlVMG9XblBVYUk0VE15RWI0bVhXVU95TGZtSWJiWVA2RU9EQXdsbUVkZG1wdnVQVkYKNXVFek1LSGVtRDY3enE4Z3Fsc0JGSytLOXV3bmpNWHhWdHBzRUhpK3locEUwbVNxRW1mbHl2b1VMc1l1KzFjRQpTZi94Y2E4NUEvNTJtNTBYTWl1TWZJV1VYbk81djA2c2NmM3pwZkdWWm9FVk5TaWxNOUdJdTIvbXhWWm9DUHM1CkdRY3dJTTFwdENBVGtEVGo3MGNuNWp4Z1hQMloyT093SldDakNtQUVmM1JLejJrWXY5N0RBZ01CQUFHall6QmgKTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlRmTnhwOApOQlFhYlk0eGtaMlF1WmdSWTIwMlZqQWZCZ05WSFNNRUdEQVdnQlRmTnhwOE5CUWFiWTR4a1oyUXVaZ1JZMjAyClZqQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFvUitGMWlTMnBySk5yT2ZtRm91TXdlQjlLUVU0dWpqWEV4TUYKakF4NEorZVl4SVdIY3ozRnNGYy9YOTZZRm9zQ1JUVjFya0x5TEJ6MXlzbTNHeUFJYnVoSzBzWDFzbEtxUDYrMgpzdDBmeWNuZkxQdXVoa3dxU05SQUd5eEdjVTA1cjhlak9WMnlTcTRQUW5aR0pFc1g1Q25wS3BKZ3UybUhGSnpBCnJHREs2UzdXaFI1Vml4RjYxeFE0OXdUem96VzY2UHVxd2lrSGcrTDliY1VQcitIR3RveDRya1BGdWQ1a0tpZXoKSFhLZWVtc1M3Y1pqTlFwZkp3MW1qcDZPSnBJblEyaEc3TDV5eXd1NDAvMTNGbjlTV2JUdWNCVThzZXZLdWtEeQp3TFd2Um85Nlg5TkFIL0c2WG5aS2FVRU1wTFhyYXBJeW8rNFB5cTl5U0pGMzNaTTgxUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: MTY3OSBieXRlcyBsb25n
kind: Secret
metadata:
  annotations:
    openshift.io/description: 'Secret contains a pair signed serving certificate/key
      that is generated by Service CA operator for service/oauth-openshift with
      hostname oauth-openshift.openshift-authentication.svc and is annotated to
      the service with annotating a service resource with ''service.beta.openshift.io/serving-cert-secret-name:
      v4-0-config-system-serving-cert''. The certificate is valid for 2 years.'
    openshift.io/owning-component: service-ca
    service.alpha.openshift.io/expiry: "2027-08-01T18:34:38Z"
    service.alpha.openshift.io/originating-service-name: oauth-openshift
    service.alpha.openshift.io/originating-service-uid: 751fd500-5c7a-4b59-8c64-ea8aad587954
    service.beta.openshift.io/expiry: "2027-08-01T18:34:38Z"
  creationTimestamp: "2025-08-01T18:34:39Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:tls.crt: {}
        f:tls.key: {}
      f:metadata:
        f:annotations:
          .: {}
          f:openshift.io/description: {}
          f:openshift.io/owning-component: {}
          f:service.alpha.openshift.io/expiry: {}
          f:service.alpha.openshift.io/originating-service-name: {}
          f:service.alpha.openshift.io/originating-service-uid: {}
          f:service.beta.openshift.io/expiry: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"751fd500-5c7a-4b59-8c64-ea8aad587954"}: {}
      f:type: {}
    manager: service-ca-operator
    operation: Update
    time: "2025-08-01T18:34:39Z"
  name: v4-0-config-system-serving-cert
  namespace: openshift-authentication
  ownerReferences:
  - apiVersion: v1
    kind: Service
    name: oauth-openshift
    uid: 751fd500-5c7a-4b59-8c64-ea8aad587954
  resourceVersion: "8578"
  uid: ad75ddb5-40ed-4a32-bc8d-41c0fd4a0dad
type: kubernetes.io/tls

🛠️ Refactor suggestion

Minimize TLS Secret noise; avoid environment-coupled fields.

  • Drop server-managed metadata and time-based annotations that don’t affect the controller logic.
  • Remove ownerReferences/UID to a Service that isn’t part of this input set (keeps the fixture self-contained).
  • Optionally add labels for consistency (app: oauth-openshift).

Apply targeted changes:

   kind: Secret
   metadata:
-    annotations:
-      openshift.io/description: 'Secret contains a pair signed serving certificate/key
-        that is generated by Service CA operator for service/oauth-openshift with
-        hostname oauth-openshift.openshift-authentication.svc and is annotated to
-        the service with annotating a service resource with ''service.beta.openshift.io/serving-cert-secret-name:
-        v4-0-config-system-serving-cert''. The certificate is valid for 2 years.'
-      openshift.io/owning-component: service-ca
-      service.alpha.openshift.io/expiry: "2027-08-01T18:34:38Z"
-      service.alpha.openshift.io/originating-service-name: oauth-openshift
-      service.alpha.openshift.io/originating-service-uid: 751fd500-5c7a-4b59-8c64-ea8aad587954
-      service.beta.openshift.io/expiry: "2027-08-01T18:34:38Z"
-    creationTimestamp: "2025-08-01T18:34:39Z"
-    managedFields:
-      - apiVersion: v1
-        fieldsType: FieldsV1
-        fieldsV1: {}
-        manager: service-ca-operator
-        operation: Update
-        time: "2025-08-01T18:34:39Z"
     name: v4-0-config-system-serving-cert
     namespace: openshift-authentication
-    ownerReferences:
-      - apiVersion: v1
-        kind: Service
-        name: oauth-openshift
-        uid: 751fd500-5c7a-4b59-8c64-ea8aad587954
-    resourceVersion: "8578"
-    uid: ad75ddb5-40ed-4a32-bc8d-41c0fd4a0dad
+    labels:
+      app: oauth-openshift
   type: kubernetes.io/tls
🧰 Tools
🪛 Gitleaks (8.27.2)

33-36: Possible Kubernetes Secret detected, posing a risk of leaking credentials/tokens from your deployments

(kubernetes-secret-yaml)


35-35: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

Comment on lines +107 to +131
data:
  errors.html: MTM4ODYyIGJ5dGVzIGxvbmc=
  login.html: MTcyOTExIGJ5dGVzIGxvbmc=
  providers.html: MTQ3NjYwIGJ5dGVzIGxvbmc=
kind: Secret
metadata:
  creationTimestamp: "2025-08-01T18:34:24Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:errors.html: {}
        f:login.html: {}
        f:providers.html: {}
      f:type: {}
    manager: authentication-operator
    operation: Update
    time: "2025-08-01T18:34:24Z"
  name: v4-0-config-system-ocp-branding-template
  namespace: openshift-authentication
  resourceVersion: "6993"
  uid: ae89f1f0-cafc-4bee-8605-13145129825d
type: Opaque

🛠️ Refactor suggestion

Reduce server-managed metadata; consider labeling for traceability.

Same cleanup as other secrets. Optional: add app label for easier debugging during test runs.

Apply:

   metadata:
-    creationTimestamp: "2025-08-01T18:34:24Z"
-    managedFields: ...
     name: v4-0-config-system-ocp-branding-template
     namespace: openshift-authentication
-    resourceVersion: "6993"
-    uid: ae89f1f0-cafc-4bee-8605-13145129825d
+    labels:
+      app: oauth-openshift
🧰 Tools
🪛 Gitleaks (8.27.2)

107-111: Possible Kubernetes Secret detected, posing a risk of leaking credentials/tokens from your deployments

(kubernetes-secret-yaml)

🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml
around lines 107 to 131, remove server-managed metadata (creationTimestamp,
managedFields, resourceVersion, uid) so the Secret only includes apiVersion,
kind, metadata.name, metadata.namespace and data/type, and add a minimal
traceability label (e.g. metadata.labels: app: openshift-authentication or app:
test) to aid debugging; ensure no other server-populated fields remain in
metadata.

Comment on lines +17 to +26
dependencies specified directly on the deployment:
configmaps:
- core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start
- core/configmaps/openshift-authentication/v4-0-config-system-cliconfig: required, otherwise the pod wont' start
- core/configmaps/openshift-authentication/v4-0-config-system-trusted-ca-bundle: this one is marked as optional, the pod will start when it is missing
secrets:
- core/configmaps/openshift-authentication/v4-0-config-system-session: required, otherwise the pod won't start
- core/configmaps/openshift-authentication/v4-0-config-system-serving-cert: required, otherwise the pod won't start
- core/configmaps/openshift-authentication/v4-0-config-system-router-certs: required, otherwise the pod won't start
- core/configmaps/openshift-authentication/v4-0-config-system-ocp-branding-template: required, otherwise the pod won't start

⚠️ Potential issue

Fix resource kinds in the dependency list.

These are Secrets in the deployment, not ConfigMaps. Also fix typos.

-  configmaps:
+  configmaps:
   - core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start 
   - core/configmaps/openshift-authentication/v4-0-config-system-cliconfig: required, otherwise the pod wont' start
   - core/configmaps/openshift-authentication/v4-0-config-system-trusted-ca-bundle: this one is marked as optional, the pod will start when it is missing
-  secrets:
-  - core/configmaps/openshift-authentication/v4-0-config-system-session: required, otherwise the pod won't start
-  - core/configmaps/openshift-authentication/v4-0-config-system-serving-cert: required, otherwise the pod won't start
-  - core/configmaps/openshift-authentication/v4-0-config-system-router-certs: required, otherwise the pod won't start
-  - core/configmaps/openshift-authentication/v4-0-config-system-ocp-branding-template: required, otherwise the pod won't start
+  secrets:
+  - core/secrets/openshift-authentication/v4-0-config-system-session: required, otherwise the pod won't start
+  - core/secrets/openshift-authentication/v4-0-config-system-serving-cert: required, otherwise the pod won't start
+  - core/secrets/openshift-authentication/v4-0-config-system-router-certs: required, otherwise the pod won't start
+  - core/secrets/openshift-authentication/v4-0-config-system-ocp-branding-template: required, otherwise the pod won't start
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 19-19: trailing spaces

(trailing-spaces)

🤖 Prompt for AI Agents
In test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml
around lines 17 to 26, the dependency list incorrectly labels several entries as
ConfigMaps when they are Secrets and contains typos; update the resource kinds
from core/configmaps/... to core/secrets/... for the listed secret dependencies
(session, serving-cert, router-certs, ocp-branding-template) and correct typos
(e.g., "wont'" → "won't") so the list accurately reflects Secrets and proper
wording.
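The configMap/secret mix-up the review flags is easy to catch mechanically: the Deployment's pod spec declares each volume as either a `configMap` or a `secret` source, so the dependency list can be derived instead of hand-written. A small illustrative helper (the deployment dict below is a made-up fragment, not the controller's actual output):

```python
# Classify a Deployment's volume sources so configMap-backed and
# secret-backed dependencies are listed under the right kind.

def volume_sources(deployment: dict) -> dict:
    """Map 'configmaps' / 'secrets' to the names of the volume sources they mount."""
    out = {"configmaps": [], "secrets": []}
    volumes = deployment["spec"]["template"]["spec"].get("volumes", [])
    for vol in volumes:
        if "configMap" in vol:
            out["configmaps"].append(vol["configMap"]["name"])
        elif "secret" in vol:
            out["secrets"].append(vol["secret"]["secretName"])
    return out


if __name__ == "__main__":
    deployment = {"spec": {"template": {"spec": {"volumes": [
        {"name": "audit-policies", "configMap": {"name": "audit"}},
        {"name": "v4-0-config-system-session",
         "secret": {"secretName": "v4-0-config-system-session"}},
    ]}}}}
    print(volume_sources(deployment))
    # {'configmaps': ['audit'], 'secrets': ['v4-0-config-system-session']}
```

Comparing this derived map against the test spec's dependency list would have surfaced the wrong `core/configmaps/...` prefixes on the secret entries.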

Comment on lines +29 to +34
optional resources:
specified directly on the deployment:
secrets:
- core/configmaps/openshift-authentication/v4-0-config-user-template-login: optional, custom template provided by a user
- core/configmaps/openshift-authentication/v4-0-config-user-template-provider-selection: as above
- core/configmaps/openshift-authentication/v4-0-config-user-template-error: as above

⚠️ Potential issue

Optional user templates are Secrets, not ConfigMaps.

Align names with Deployment volumes.

-  secrets:
-  - core/configmaps/openshift-authentication/v4-0-config-user-template-login: optional, custom template provided by a user
-  - core/configmaps/openshift-authentication/v4-0-config-user-template-provider-selection: as above
-  - core/configmaps/openshift-authentication/v4-0-config-user-template-error: as above
+  secrets:
+  - core/secrets/openshift-authentication/v4-0-config-user-template-login: optional, custom template
+  - core/secrets/openshift-authentication/v4-0-config-user-template-provider-selection: optional
+  - core/secrets/openshift-authentication/v4-0-config-user-template-error: optional
🤖 Prompt for AI Agents
In test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml
around lines 29 to 34 the review notes that the listed optional user templates
are Secrets not ConfigMaps; update the resource references from
core/configmaps/openshift-authentication/... to
core/secrets/openshift-authentication/... for the three template entries so they
match the Deployment volume names, ensuring the paths and resource kind (secret)
exactly align with how the Deployment mounts them.
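The "optional" distinction above maps directly onto the Kubernetes secret volume source, which carries an `optional` flag (defaulting to false): a pod starts without an optional Secret but not without a required one. A small sketch, with illustrative volume names:

```python
# Build pod volumes backed by Secrets; optional volumes tolerate the
# Secret being absent, which is how the user-template Secrets differ
# from the required system Secrets.

def secret_volume(name: str, secret_name: str, optional: bool = False) -> dict:
    """Return a pod spec volume entry for a Secret-backed volume."""
    return {"name": name,
            "secret": {"secretName": secret_name, "optional": optional}}


if __name__ == "__main__":
    # Required: the pod will not start if this Secret is missing.
    session = secret_volume("v4-0-config-system-session",
                            "v4-0-config-system-session")
    # Optional: a user-provided template; the pod starts without it.
    login_template = secret_volume("v4-0-config-user-template-login",
                                   "v4-0-config-user-template-login",
                                   optional=True)
    print(login_template["secret"]["optional"])  # True
```

This is why the review asks for the entries to be listed under `core/secrets/...`: the mount kind and the dependency kind must agree for the optionality to behave as described.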

@p0lyn0mial p0lyn0mial changed the title adds an integration test for the oauth-server deployment controller API-1835: adds an integration test for the oauth-server deployment controller Sep 17, 2025
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Sep 17, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Sep 17, 2025

@p0lyn0mial: This pull request references API-1835 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.21.0" version, but no target version was set.

In response to this:

 This test runs only the controller, which produces the oauth-server deployment.

 The purpose of this test is to show which input resources are required to run the operator and the controller.

 input-dir:
 - config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)
 - config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced
 - config.openshift.io/authentications: required by the controller, specifically the "cluster" object
 - operator.openshift.io/authentications: required by the controller, specifically the "cluster" object
 - openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer
 - core/nodes: (I think) required, otherwise the replica count on the deployment will be 0
 dependencies specified directly on the deployment:
 configmaps:
 - core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start 
 - core/configmaps/openshift-authentication/v4-0-config-system-cliconfig: required, otherwise the pod won't start
 - core/configmaps/openshift-authentication/v4-0-config-system-trusted-ca-bundle: this one is marked as optional, the pod will start when it is missing
 secrets:
 - core/configmaps/openshift-authentication/v4-0-config-system-session: required, otherwise the pod won't start
 - core/configmaps/openshift-authentication/v4-0-config-system-serving-cert: required, otherwise the pod won't start
 - core/configmaps/openshift-authentication/v4-0-config-system-router-certs: required, otherwise the pod won't start
 - core/configmaps/openshift-authentication/v4-0-config-system-ocp-branding-template: required, otherwise the pod won't start
 
 
 optional resources:
 specified directly on the deployment:
 secrets:
 - core/configmaps/openshift-authentication/v4-0-config-user-template-login: optional, custom template provided by a user
 - core/configmaps/openshift-authentication/v4-0-config-user-template-provider-selection: as above
 - core/configmaps/openshift-authentication/v4-0-config-user-template-error: as above

Summary by CodeRabbit

  • Tests
  • Added an end-to-end apply-configuration scenario to validate minimal OAuth server creation, covering cluster and namespace resources, nodes, route, secrets, deployment, and expected controller outcomes.
  • Introduced expected outputs for status updates and event generation to verify controller behavior.
  • Documentation
  • Included a descriptive test spec outlining required inputs, dependencies, and execution details for the minimal OAuth server scenario.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@p0lyn0mial
Contributor Author

/test test-operator-integration

Member

@bertinatto bertinatto left a comment


/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Sep 17, 2025
@liouk
Member

liouk commented Sep 19, 2025

/approve

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 19, 2025
@liouk
Member

liouk commented Sep 19, 2025

/hold

@p0lyn0mial coderabbit has detected some typos, namely mame instead of name -- I think these should be fixed, right?

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 19, 2025
@benluddy
Contributor

@p0lyn0mial coderabbit has detected some typos, namely mame instead of name -- I think these should be fixed, right?

https://en.wikipedia.org/wiki/Mame_(musical)?

…urces are required to run the operator and the controller that creates oauth-server deployment
@p0lyn0mial p0lyn0mial force-pushed the integration-test-creation-oauth-server branch from f288aed to ee16ec3 Compare October 1, 2025 11:52
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Oct 1, 2025
@p0lyn0mial
Contributor Author

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Oct 1, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 1, 2025

@p0lyn0mial: This pull request references API-1835 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.21.0" version, but no target version was set.

In response to this:

 This test runs only the controller, which produces the oauth-server deployment.

 The purpose of this test is to show which input resources are required to run the operator and the controller.

 input-dir:
 - config.openshift.io/clusterversions: required to "start the operator" (CreateOperatorStarter/prepareOauthOperator)
 - config.openshift.io/infrastructures: not strictly required (CreateOperatorStarter/prepareOauthAPIServerOperator), but if missing, SNO is assumed and a warning is produced
 - config.openshift.io/authentications: required by the controller, specifically the "cluster" object
 - operator.openshift.io/authentications: required by the controller, specifically the "cluster" object
 - openshift-authentication/route.openshift.io/routes/oauth-openshift: required by the controller, single-item informer
 - core/nodes: likely required; otherwise the replica count on the deployment will be 0
 dependencies specified directly on the deployment:
 configmaps:
 - core/configmaps/openshift-authentication/audit: required, otherwise the pod won't start
 - core/configmaps/openshift-authentication/v4-0-config-system-cliconfig: required, otherwise the pod won't start
 - core/configmaps/openshift-authentication/v4-0-config-system-trusted-ca-bundle: this one is marked as optional, the pod will start when it is missing
 secrets:
 - core/secrets/openshift-authentication/v4-0-config-system-session: required, otherwise the pod won't start
 - core/secrets/openshift-authentication/v4-0-config-system-serving-cert: required, otherwise the pod won't start
 - core/secrets/openshift-authentication/v4-0-config-system-router-certs: required, otherwise the pod won't start
 - core/secrets/openshift-authentication/v4-0-config-system-ocp-branding-template: required, otherwise the pod won't start
 
 
 optional resources:
 specified directly on the deployment:
 secrets:
 - core/secrets/openshift-authentication/v4-0-config-user-template-login: optional, custom template provided by a user
 - core/secrets/openshift-authentication/v4-0-config-user-template-provider-selection: as above
 - core/secrets/openshift-authentication/v4-0-config-user-template-error: as above


@bertinatto
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Oct 1, 2025
@openshift-ci
Contributor

openshift-ci bot commented Oct 1, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bertinatto, liouk, p0lyn0mial

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (2)
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml (1)

35-212: Strip server-managed metadata from Node fixture.

Line 35 and onward reintroduce metadata.creationTimestamp, metadata.managedFields, metadata.resourceVersion, and metadata.uid. These fields are server-populated, drift between clusters, and are rejected/ignored by the API when sent by clients. Keeping them in the fixture makes the integration test brittle and misrepresents what the controller must provide. Please drop these keys so the input contains only client-authoritative data.

@@
-  creationTimestamp: "2025-08-01T18:32:30Z"
@@
-  managedFields:
-  - apiVersion: v1
-    fieldsType: FieldsV1
-    fieldsV1:
-      ...
-    time: "2025-08-01T20:05:08Z"
@@
-  resourceVersion: "269278"
-  uid: 2fea5bbd-9f90-474e-a2fd-c8b76fdb60f5
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml (1)

9-130: Strip server-managed metadata from Secrets to keep fixtures creatable.

All four Secret manifests still carry creationTimestamp, managedFields, resourceVersion, and uid. Those fields are server-owned; attempting to create/apply these Secrets with them present will fail, so the integration test cannot seed the namespace yet. Please drop those fields (the earlier bot comment already outlined the cleanup).

-    creationTimestamp: "2025-08-01T18:44:36Z"
-    managedFields:
-
-    resourceVersion: "20293"
-    uid: 1ab2c769-1f5b-4442-8652-ed0ebbded1a7
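
The cleanup the bot asks for can be scripted once and applied to every fixture; a hedged sketch in Python (function name and field list are mine, operating on an already-parsed manifest dict rather than raw YAML):

```python
# Sketch: strip server-managed metadata fields from a parsed Kubernetes
# manifest so it can be used as a creatable test fixture.
SERVER_MANAGED = (
    "creationTimestamp",
    "managedFields",
    "resourceVersion",
    "uid",
    "generation",
)

def strip_server_metadata(manifest: dict) -> dict:
    """Return a copy of the manifest without server-populated metadata keys."""
    cleaned = dict(manifest)
    meta = dict(cleaned.get("metadata", {}))
    for key in SERVER_MANAGED:
        meta.pop(key, None)  # drop the key if present, ignore if absent
    cleaned["metadata"] = meta
    return cleaned

if __name__ == "__main__":
    fixture = {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {
            "name": "v4-0-config-system-session",
            "namespace": "openshift-authentication",
            "creationTimestamp": "2025-08-01T18:44:36Z",
            "resourceVersion": "20293",
            "uid": "1ab2c769-1f5b-4442-8652-ed0ebbded1a7",
        },
    }
    print(sorted(strip_server_metadata(fixture)["metadata"]))  # → ['name', 'namespace']
```

Depending on the fixture, status subresources may also need dropping (as the later bot prompt notes); this sketch only covers the metadata block.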
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between f288aed and ee16ec3.

📒 Files selected for processing (21)
  • test-data/apply-configuration/overall/minimal-cluster/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/2e46-metadata-authentication-operator.17fe72c59b829800.b2cdb588.yaml (1 hunks)
  • test-data/apply-configuration/overall/minimal-cluster/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/86b1-metadata-authentication-operator.17fe72c59b829800.5aa4f723.yaml (1 hunks)
  • test-data/apply-configuration/overall/minimal-cluster/expected-output/UserWorkload/Create/cluster-scoped-resources/certificates.k8s.io/certificatesigningrequests/ffbf-body-system-COLON-openshift-COLON-openshift-authenticator-.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-metadata-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-options-cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-body-authentication-operator.18599d2230299800.64fe3b99.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-metadata-authentication-operator.18599d2230299800.64fe3b99.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/a3d6-body-oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/a3d6-metadata-oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/controller-results.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/clusterversions.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/infrastructures.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-34-16.us-west-2.compute.internal.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-62-215.us-west-2.compute.internal.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/core/secrets.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml (1 hunks)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/controller-results.yaml
🚧 Files skipped from review as they are similar to previous changes (10)
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-options-cluster.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/test.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/namespaces/openshift-authentication/route.openshift.io/routes/oauth-openshift.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-114-231.us-west-2.compute.internal.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-body-cluster.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/clusterversions.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-metadata-authentication-operator.18599d2230299800.64fe3b99.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/core/nodes/ip-10-0-62-215.us-west-2.compute.internal.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication-operator/core/events/6672-body-authentication-operator.18599d2230299800.64fe3b99.yaml
  • test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/ApplyStatus/cluster-scoped-resources/operator.openshift.io/authentications/5749-metadata-cluster.yaml
🧰 Additional context used
🪛 Checkov (3.2.334)
test-data/apply-configuration/overall/oauth-server-creation-minimal/expected-output/Management/Create/namespaces/openshift-authentication/apps/deployments/a3d6-body-oauth-openshift.yaml

[medium] 1-193: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[high] 1-193: Container should not be privileged

(CKV_K8S_16)


[medium] 1-193: Minimize the admission of root containers

(CKV_K8S_23)

🔇 Additional comments (1)
test-data/apply-configuration/overall/minimal-cluster/expected-output/UserWorkload/Create/cluster-scoped-resources/certificates.k8s.io/certificatesigningrequests/ffbf-body-system-COLON-openshift-COLON-openshift-authenticator-.yaml (1)

9-9: CSR payload refresh looks good.

Structure stays intact and the new PEM payload lines up with the expected fixture output.

Comment on lines +9 to +67
include.release.openshift.io/self-managed-high-availability: "true"
release.openshift.io/create-only: "true"
creationTimestamp: "2025-08-01T18:29:45Z"
generation: 2
managedFields:
- apiVersion: config.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:include.release.openshift.io/ibm-cloud-managed: {}
f:include.release.openshift.io/self-managed-high-availability: {}
f:release.openshift.io/create-only: {}
f:ownerReferences:
.: {}
k:{"uid":"fd412cff-9592-4cb6-b0e9-97a5c376f29e"}: {}
f:spec: {}
manager: cluster-version-operator
operation: Update
time: "2025-08-01T18:29:45Z"
- apiVersion: config.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:oauthMetadata:
.: {}
f:name: {}
f:serviceAccountIssuer: {}
f:type: {}
f:webhookTokenAuthenticator:
.: {}
f:kubeConfig:
.: {}
f:name: {}
manager: authentication-operator
operation: Update
time: "2025-08-01T18:35:35Z"
- apiVersion: config.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:integratedOAuthMetadata:
.: {}
f:name: {}
manager: authentication-operator
operation: Update
subresource: status
time: "2025-08-01T18:44:36Z"
name: cluster
ownerReferences:
- apiVersion: config.openshift.io/v1
kind: ClusterVersion
name: version
uid: fd412cff-9592-4cb6-b0e9-97a5c376f29e
resourceVersion: "20310"
uid: 7adc5a7e-47eb-41c7-af2e-9faa138dccc5
spec:

⚠️ Potential issue | 🟠 Major

Remove server-managed metadata; API will reject the fixture.

We can’t seed this Authentication because creationTimestamp, managedFields, resourceVersion, and uid are all server-managed fields. The kube-apiserver (and envtest) will reject a create/apply that includes them, which blocks the new integration test. Please strip those fields (and any other server-populated metadata) before loading the object.

-    creationTimestamp: "2025-08-01T18:29:45Z"
-    managedFields:
-
-    resourceVersion: "20310"
-    uid: 7adc5a7e-47eb-41c7-af2e-9faa138dccc5
🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/authentications.yaml
around lines 9-67, remove server-managed metadata that will cause API rejection:
delete creationTimestamp, managedFields, resourceVersion, uid, generation, and
any server-populated status fields (including status.integratedOAuthMetadata)
and other server-only annotations/ownerReferences tied to cluster-version; leave
only valid metadata (name, namespace if needed, labels/annotations that are not
server-managed) and spec. Ensure the fixture contains no status subresource or
server-populated fields before loading in envtest.

Comment on lines +7 to +75
creationTimestamp: "2025-08-01T18:29:04Z"
generation: 1
managedFields:
- apiVersion: config.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
.: {}
f:cloudConfig:
.: {}
f:key: {}
f:name: {}
f:platformSpec:
.: {}
f:aws: {}
f:type: {}
manager: cluster-bootstrap
operation: Update
time: "2025-08-01T18:29:04Z"
- apiVersion: config.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:apiServerInternalURI: {}
f:apiServerURL: {}
f:controlPlaneTopology: {}
f:cpuPartitioning: {}
f:etcdDiscoveryDomain: {}
f:infrastructureName: {}
f:infrastructureTopology: {}
f:platform: {}
f:platformStatus:
.: {}
f:aws:
.: {}
f:region: {}
f:type: {}
manager: cluster-bootstrap
operation: Update
subresource: status
time: "2025-08-01T18:29:04Z"
name: cluster
resourceVersion: "536"
uid: 6acf325c-8f35-4030-83a2-24d69d22a0ad
spec:
cloudConfig:
key: config
name: cloud-provider-config
platformSpec:
aws: {}
type: AWS
status:
apiServerInternalURI: https://api-int.ci-op-gn2pz6q7-69aee.XXXXXXXXXXXXXXXXXXXXXX:6443
apiServerURL: https://api.ci-op-gn2pz6q7-69aee.XXXXXXXXXXXXXXXXXXXXXX:6443
controlPlaneTopology: HighlyAvailable
cpuPartitioning: None
etcdDiscoveryDomain: ""
infrastructureName: ci-op-gn2pz6q7-69aee-7ggdj
infrastructureTopology: HighlyAvailable
platform: AWS
platformStatus:
aws:
region: us-west-2
type: AWS
kind: InfrastructureList
metadata:
continue: ""
resourceVersion: "269027"

⚠️ Potential issue | 🟠 Major

Drop server-managed metadata from Infrastructure fixture.

creationTimestamp, managedFields, resourceVersion, and uid make this object impossible to create in a real API server or envtest run—the request will be rejected. Please remove those server-populated fields so the test harness can seed the Infrastructure resource.

-    creationTimestamp: "2025-08-01T18:29:04Z"
-    managedFields:
-
-    resourceVersion: "536"
-    uid: 6acf325c-8f35-4030-83a2-24d69d22a0ad
🤖 Prompt for AI Agents
In
test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/config.openshift.io/infrastructures.yaml
around lines 7 to 75, the fixture contains server-populated metadata
(creationTimestamp, managedFields, resourceVersion, uid) which prevents creating
the Infrastructure in a real API server or envtest; remove the creationTimestamp
entry, the entire managedFields array, the resourceVersion field, and the uid
field from the metadata block (leave name and other user-specified fields
intact) so the test harness can seed the resource successfully.

Comment on lines +9 to +787
creationTimestamp: "2025-08-01T18:29:46Z"
generation: 9
managedFields:
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"WebhookAuthenticatorCertApprover_OpenShiftAuthenticatorDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: WebhookAuthenticatorCertApprover_OpenShiftAuthenticator-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:34:07Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ManagementStateDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: authentication-ManagementState
operation: Apply
subresource: status
time: "2025-08-01T18:34:07Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"UnsupportedConfigOverridesUpgradeable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-UnsupportedConfigOverrides
operation: Apply
subresource: status
time: "2025-08-01T18:34:07Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"Encrypted"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-oauth-apiserver-EncryptionCondition
operation: Apply
subresource: status
time: "2025-08-01T18:34:09Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"EncryptionKeyControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-oauth-apiserver-EncryptionKey
operation: Apply
subresource: status
time: "2025-08-01T18:34:09Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"EncryptionMigrationControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"EncryptionMigrationControllerProgressing"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-oauth-apiserver-EncryptionMigration
operation: Apply
subresource: status
time: "2025-08-01T18:34:09Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"EncryptionPruneControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-oauth-apiserver-EncryptionPrune
operation: Apply
subresource: status
time: "2025-08-01T18:34:09Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"EncryptionStateControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-oauth-apiserver-EncryptionState
operation: Apply
subresource: status
time: "2025-08-01T18:34:09Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthAPIServerConfigObservationDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: oauth-apiserver-ConfigObserver
operation: Apply
subresource: status
time: "2025-08-01T18:34:11Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ResourceSyncControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: oauth-server-ResourceSync
operation: Apply
subresource: status
time: "2025-08-01T18:34:13Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"AuditPolicyDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: oauth-apiserver-AuditPolicy
operation: Apply
subresource: status
time: "2025-08-01T18:34:14Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"RouterCertsDomainValidationControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: RouterCertsDomainValidationController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:34:15Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"AuthenticatorCertKeyProgressing"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-WebhookAuthenticator
operation: Apply
subresource: status
time: "2025-08-01T18:34:19Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"APIServerStaticResourcesDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: APIServerStaticResources-StaticResources
operation: Apply
subresource: status
time: "2025-08-01T18:34:24Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:latestAvailableRevision: {}
manager: oauth-apiserver-RevisionController
operation: Apply
subresource: status
time: "2025-08-01T18:34:24Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"RevisionControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: RevisionController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:34:25Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OpenshiftAuthenticationStaticResourcesDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OpenshiftAuthenticationStaticResources-StaticResources
operation: Apply
subresource: status
time: "2025-08-01T18:34:39Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServiceDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"SystemServiceCAConfigDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-ServiceCA
operation: Apply
subresource: status
time: "2025-08-01T18:34:41Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerConfigObservationDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: oauth-server-ConfigObserver
operation: Apply
subresource: status
time: "2025-08-01T18:34:45Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"RouterCertsDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-RouterCertsDomainValidation
operation: Apply
subresource: status
time: "2025-08-01T18:34:45Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"WebhookAuthenticatorControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: WebhookAuthenticatorController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:35:35Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ReadyIngressNodesAvailable"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-IngressNodesAvailable
operation: Apply
subresource: status
time: "2025-08-01T18:41:27Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"APIServicesAvailable"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"APIServicesDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-apiserver-APIService
operation: Apply
subresource: status
time: "2025-08-01T18:43:54Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ProxyConfigControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: ProxyConfigController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:45:32Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"IngressStateEndpointsDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"IngressStatePodsDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-IngressState
operation: Apply
subresource: status
time: "2025-08-01T18:45:39Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthConfigDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthConfigIngressDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthConfigRouteDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthConfigServiceDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthSessionSecretDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-PayloadConfig
operation: Apply
subresource: status
time: "2025-08-01T18:45:40Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerServiceEndpointAccessibleControllerAvailable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthServerService-EndpointAccessible
operation: Apply
subresource: status
time: "2025-08-01T18:45:51Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerServiceEndpointAccessibleControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthServerServiceEndpointAccessibleController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:45:51Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"CustomRouteControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: CustomRouteController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:45:52Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerRouteEndpointAccessibleControllerAvailable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthServerRoute-EndpointAccessible
operation: Apply
subresource: status
time: "2025-08-01T18:45:53Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthServerServiceEndpoints-EndpointAccessible
operation: Apply
subresource: status
time: "2025-08-01T18:45:53Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"AuthConfigDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"IngressConfigDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthSystemMetadataDegraded"}:
.: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
manager: openshift-authentication-Metadata
operation: Apply
subresource: status
time: "2025-08-01T18:45:54Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthClientsControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthClientsController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:45:55Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthServerServiceEndpointsEndpointAccessibleController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:45:55Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerRouteEndpointAccessibleControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: OAuthServerRouteEndpointAccessibleController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:45:56Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"OAuthServerDeploymentAvailable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthServerDeploymentDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthServerDeploymentProgressing"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"OAuthServerWorkloadDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
f:generations:
k:{"group":"apps","name":"oauth-openshift","namespace":"openshift-authentication","resource":"deployments"}:
.: {}
f:group: {}
f:lastGeneration: {}
f:name: {}
f:namespace: {}
f:resource: {}
manager: OAuthServer-Workload
operation: Apply
subresource: status
time: "2025-08-01T18:47:47Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"APIServerDeploymentAvailable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"APIServerDeploymentDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"APIServerDeploymentProgressing"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"APIServerWorkloadDegraded"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
f:generations:
k:{"group":"apps","name":"apiserver","namespace":"openshift-oauth-apiserver","resource":"deployments"}:
.: {}
f:group: {}
f:lastGeneration: {}
f:name: {}
f:namespace: {}
f:resource: {}
manager: OAuthAPIServerController-Workload
operation: Apply
subresource: status
time: "2025-08-01T18:49:38Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"WellKnownReadyControllerDegraded"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
manager: WellKnownReadyController-reportDegraded
operation: Apply
subresource: status
time: "2025-08-01T18:56:32Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"WellKnownAvailable"}:
.: {}
f:lastTransitionTime: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"WellKnownReadyControllerProgressing"}:
.: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
manager: openshift-authentication-WellKnownReady
operation: Apply
subresource: status
time: "2025-08-01T18:56:32Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:include.release.openshift.io/self-managed-high-availability: {}
f:include.release.openshift.io/single-node-developer: {}
f:release.openshift.io/create-only: {}
f:ownerReferences:
.: {}
k:{"uid":"fd412cff-9592-4cb6-b0e9-97a5c376f29e"}: {}
f:spec:
.: {}
f:logLevel: {}
f:managementState: {}
f:operatorLogLevel: {}
manager: cluster-version-operator
operation: Update
time: "2025-08-01T18:29:46Z"
- apiVersion: operator.openshift.io/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:observedConfig:
.: {}
f:oauthAPIServer:
.: {}
f:apiServerArguments:
.: {}
f:api-audiences: {}
f:cors-allowed-origins: {}
f:etcd-servers: {}
f:tls-cipher-suites: {}
f:tls-min-version: {}
f:oauthServer:
.: {}
f:corsAllowedOrigins: {}
f:oauthConfig:
.: {}
f:assetPublicURL: {}
f:loginURL: {}
f:templates:
.: {}
f:error: {}
f:login: {}
f:providerSelection: {}
f:tokenConfig:
.: {}
f:accessTokenMaxAgeSeconds: {}
f:authorizeTokenMaxAgeSeconds: {}
f:serverArguments:
.: {}
f:audit-log-format: {}
f:audit-log-maxbackup: {}
f:audit-log-maxsize: {}
f:audit-log-path: {}
f:audit-policy-file: {}
f:servingInfo:
.: {}
f:cipherSuites: {}
f:minTLSVersion: {}
f:namedCertificates: {}
f:volumesToMount:
.: {}
f:identityProviders: {}
f:unsupportedConfigOverrides: {}
manager: authentication-operator
operation: Update
time: "2025-08-01T18:46:38Z"
name: cluster
ownerReferences:
- apiVersion: config.openshift.io/v1
kind: ClusterVersion
name: version
uid: fd412cff-9592-4cb6-b0e9-97a5c376f29e
resourceVersion: "30999"
uid: fa91c2c0-b4a8-44f5-b2bf-1d34f1ffba2d

⚠️ Potential issue | 🟠 Major

Server-managed metadata must be removed before this Authentication manifest can be seeded.

This manifest is packed with creationTimestamp, managedFields, resourceVersion, and uid. Those fields cannot be set by clients, so the API (or envtest) will reject the create/apply call and the integration test will fail during setup. Please strip every server-managed field from metadata before loading the resource.

-  creationTimestamp: "2025-08-01T18:29:46Z"
-  managedFields:
-
-  resourceVersion: "30999"
-  uid: fa91c2c0-b4a8-44f5-b2bf-1d34f1ffba2d
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# … previous context omitted …
metadata:
  name: cluster
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: fd412cff-9592-4cb6-b0e9-97a5c376f29e
# … following spec/stanza unchanged …
🤖 Prompt for AI Agents
In test-data/apply-configuration/overall/oauth-server-creation-minimal/input-dir/cluster-scoped-resources/operator.openshift.io/authentications/cluster.yaml around lines 9 to 787, the manifest contains server-managed metadata (creationTimestamp, managedFields, resourceVersion, uid, and other status fields) that must not be present when seeding; remove all server-managed fields from metadata (delete creationTimestamp, managedFields, resourceVersion, uid, and any owner/managed status subresources) and remove the top-level status section before loading the resource so that the API/envtest accepts the create/apply.
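The cleanup the review comment asks for boils down to one transformation on the loaded manifest before the test seeds it. A minimal stdlib-only Python sketch of that idea (the `sanitize` helper and the inline example manifest are illustrative, not part of this repository):

```python
# Illustrative sketch: strip server-managed metadata from a manifest (loaded
# as a dict) before seeding it into the API server / envtest. The field list
# mirrors the review comment above; `sanitize` is a hypothetical helper.

SERVER_MANAGED = ("creationTimestamp", "generation", "managedFields",
                  "resourceVersion", "uid")

def sanitize(manifest: dict) -> dict:
    """Drop fields the API server owns so a create/apply call is accepted."""
    metadata = manifest.get("metadata", {})
    for field in SERVER_MANAGED:
        metadata.pop(field, None)       # silently skip fields that are absent
    manifest.pop("status", None)        # status is a subresource, not seed input
    return manifest

# Small stand-in for the cluster.yaml contents discussed in the review.
manifest = {
    "apiVersion": "operator.openshift.io/v1",
    "kind": "Authentication",
    "metadata": {
        "name": "cluster",
        "creationTimestamp": "2025-08-01T18:29:46Z",
        "resourceVersion": "30999",
        "uid": "fa91c2c0-b4a8-44f5-b2bf-1d34f1ffba2d",
        "managedFields": [],
    },
    "spec": {"managementState": "Managed"},
    "status": {"conditions": []},
}

clean = sanitize(manifest)
# metadata now carries only the client-settable name; spec is untouched
```

After this pass, only client-settable fields (name, labels, annotations, spec, and the ownerReferences the suggestion keeps) remain, which is exactly the shape the committable suggestion below produces.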

@p0lyn0mial
Contributor Author

/verified by @p0lyn0mial

@openshift-ci-robot added the verified label ("Signifies that the PR passed pre-merge verification criteria") on Oct 1, 2025
@openshift-ci-robot
Contributor

@p0lyn0mial: This PR has been marked as verified by @p0lyn0mial.

In response to this:

/verified by @p0lyn0mial

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD cb20eca and 2 for PR HEAD ee16ec3 in total

@p0lyn0mial
Contributor Author

/retest-required

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD e31142a and 1 for PR HEAD ee16ec3 in total

1 similar comment
@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD e31142a and 1 for PR HEAD ee16ec3 in total

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 215805c and 0 for PR HEAD ee16ec3 in total

@openshift-ci-robot
Contributor

/hold

Revision ee16ec3 was retested 3 times: holding

@openshift-ci bot added the do-not-merge/hold label ("Indicates that a PR should not merge because someone has issued a /hold command.") on Oct 15, 2025
@p0lyn0mial
Contributor Author

/hold cancel
/retest-required

@openshift-ci bot removed the do-not-merge/hold label ("Indicates that a PR should not merge because someone has issued a /hold command.") on Oct 27, 2025
@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD a0db9c2 and 2 for PR HEAD ee16ec3 in total

@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2025

@p0lyn0mial: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-aws-single-node | ee16ec3 | link | false | /test e2e-aws-single-node
ci/prow/e2e-agnostic-ipv6 | ee16ec3 | link | false | /test e2e-agnostic-ipv6
ci/prow/e2e-oidc | ee16ec3 | link | true | /test e2e-oidc
Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
lgtm: Indicates that a PR is ready to be merged.
verified: Signifies that the PR passed pre-merge verification criteria

5 participants