OCPBUGS-62325: Updates InfraMachine watch_filters for MachineSync controller #371
Conversation
Walkthrough

Adds terminal infra-reference errors and their handling in reconciliations; introduces ResolveCAPIMachineFromInfraMachine to enqueue CAPI Machines from InfraMachine ownerRefs; replaces klog with controller-runtime/GinkgoLogr logging in utilities and tests; updates reconciler signatures to return explicit requeue flags; adjusts tests for ownerReference wiring.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Infra as InfraMachine (event)
    participant Resolver as ResolveCAPIMachineFromInfraMachine
    participant Reconciler as MachineSyncReconciler
    participant API as Kubernetes API
    Infra->>Resolver: event (create/update/delete)
    Resolver->>Reconciler: enqueue reconcile for owning CAPI Machine
    Reconciler->>API: Get CAPI Machine
    Reconciler->>API: Get MAPI Machine
    Reconciler->>API: Get InfraMachine reference
    alt terminal invalid infra refs
        Reconciler->>Reconciler: detect errInvalidInfraClusterReference / errInvalidInfraMachineReference
        Reconciler-->>Reconciler: log terminal error, emit warning event, do not requeue
    else normal sync flow
        Reconciler->>Reconciler: ensureSyncFinalizer(...) → (shouldRequeue)
        alt deletion
            Reconciler->>Reconciler: reconcileMAPItoCAPIMachineDeletion(...) → (shouldRequeue, err)
        else update/sync
            Reconciler->>API: Patch status / conditions
        end
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)

📜 Recent review details

Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
🔇 Additional comments (10)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)
Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Skipping CI for Draft Pull Request.
@theobarberbany: This pull request references Jira Issue OCPBUGS-62325, which is invalid:
The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@coderabbitai full review
✅ Actions performed: Full review triggered.
@theobarberbany: This pull request references Jira Issue OCPBUGS-62325, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
Requesting review from QA contact: The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pkg/controllers/machinesync/machine_sync_controller.go (2)
1039-1046: Do not wrap a nil error when removing the MAPI sync finalizer.

This path now always returns an error: fmt.Errorf(... %w, err) wraps even when err is nil, breaking the deletion flow. Use this fix:

```diff
- _, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
-
- return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ changed, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
+ if err != nil {
+     return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ }
+ return changed, nil
```
1161-1167: Likewise, avoid wrapping a nil error when pruning the CAPI sync finalizer.

Here too, fmt.Errorf(... %w, err) returns a non-nil error even when err is nil, so reconciliation always fails instead of continuing. Patch it like this:

```diff
- _, err := util.RemoveFinalizer(ctx, r.Client, capiMachine, SyncFinalizer)
-
- return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ changed, err := util.RemoveFinalizer(ctx, r.Client, capiMachine, SyncFinalizer)
+ if err != nil {
+     return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ }
+ return changed, nil
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- pkg/controllers/machinesync/machine_sync_controller.go (8 hunks)
- pkg/controllers/machinesync/machine_sync_controller_test.go (14 hunks)
- pkg/controllers/machinesync/suite_test.go (2 hunks)
- pkg/util/watch_filters.go (4 hunks)
/test unit
Actionable comments posted: 0
♻️ Duplicate comments (2)
pkg/controllers/machinesync/machine_sync_controller_test.go (1)
193-194: Verify the GinkgoLogr reference.

A past review flagged this line as requiring komega.GinkgoLogr instead of GinkgoLogr. The current code shows GinkgoLogr without a package qualifier; confirm this resolves correctly or update it to komega.GinkgoLogr. Run this check to verify the symbol resolves:

```shell
#!/bin/bash
# Description: Check if GinkgoLogr is properly defined or if komega.GinkgoLogr is needed

# Search for GinkgoLogr declarations in the test file and komega package usage
rg -n 'GinkgoLogr' pkg/controllers/machinesync/machine_sync_controller_test.go

# Check if there's a local GinkgoLogr variable defined in the test
ast-grep --pattern $'var GinkgoLogr = $$$'

# Verify komega import and GinkgoLogr export
rg -n 'komega\.' pkg/controllers/machinesync/
```

pkg/controllers/machinesync/machine_sync_controller.go (1)
1032-1034: Critical: past review issue not addressed; incorrect finalizer removal handling.

The past review correctly identified that lines 1032-1034 have critical bugs:

- The boolean return from RemoveFinalizer is ignored with _, so the requeue signal is lost.
- fmt.Errorf wraps the error even when err is nil, so a non-nil error is returned instead of the intended result.
- The function should return (changed, nil) when removal succeeds.

Apply this fix:

```diff
- _, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
-
- return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ changed, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
+ if err != nil {
+     return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ }
+ return changed, nil
```
🧹 Nitpick comments (1)
pkg/util/watch_filters.go (1)
84-114: Consider logging successful matches and clarify multi-owner behavior.

The implementation correctly parses ownerReferences and matches CAPI Machines, but consider:

- Add logging when a match is found (similar to lines 72-73 in ResolveCAPIMachineSetFromInfraMachineTemplate).
- The function appends all matching owners; is this intentional? If an InfraMachine has multiple Machine owners, multiple reconcile requests will be enqueued.
- Should the function filter for Controller: true owners only?

Example enhancement:

```diff
  if ref.Kind == "Machine" && gv.Group == clusterv1.GroupVersion.Group {
+     logger.V(4).Info("InfraMachine owned by CAPI Machine, enqueueing request",
+         "Machine", ref.Name)
      requests = append(requests, reconcile.Request{
          NamespacedName: client.ObjectKey{Namespace: namespace, Name: ref.Name},
      })
  }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- pkg/controllers/machinesync/machine_sync_controller.go (9 hunks)
- pkg/controllers/machinesync/machine_sync_controller_test.go (12 hunks)
- pkg/controllers/machinesync/suite_test.go (2 hunks)
- pkg/util/watch_filters.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/controllers/machinesync/suite_test.go
🔇 Additional comments (18)
pkg/util/watch_filters.go (3)
24-27: LGTM! Imports support new functionality.

The new imports correctly support:

- schema for parsing API versions
- clusterv1 for CAPI Machine identification
- logf for structured logging (addressing past review feedback)
39-51: LGTM! Structured logging properly implemented.

Successfully addresses past review feedback by using logf.FromContext with structured key-value pairs instead of klog. Based on learnings.
58-82: LGTM! Consistent structured logging.

The logging follows the same pattern as RewriteNamespace and properly logs the label-based reconciliation trigger.

pkg/controllers/machinesync/machine_sync_controller_test.go (9)
271-292: LGTM! Owner reference correctly wired for reconciliation.The owner reference from CAPI Machine to CAPA Machine ensures the sync controller watches infra machine changes. The setup correctly precedes the Create call.
307-341: LGTM! Test properly validates infra machine recreation on providerSpec update.The test correctly verifies that updating the MAPI machine providerSpec triggers recreation of the CAPI infra machine with the new instance type.
356-375: LGTM! Builder pattern properly maintains owner references.

Updating capaMachineBuilder with owner references ensures consistency across subsequent test uses without repetition.
465-490: LGTM! Properly handles naming and owner reference verification.The test correctly addresses the generateName issue by rebuilding the CAPA machine with the actual MAPI machine name and verifying owner references are set.
564-589: LGTM! Correct creation order for owner references.The test properly sequences resource creation:
- Build CAPA machine template
- Create CAPI Machine (gets UID from API server)
- Add owner reference with UID to CAPA machine
- Create CAPA machine
This ensures valid owner references.
607-607: LGTM! Explicit name instead of generateName.Clearing generateName and using an explicit name is clearer and avoids having both set simultaneously.
650-661: LGTM! Consistent owner reference setup for MachineSet-owned machines.Ensures CAPA machine ownership is properly established even when the CAPI Machine has a MachineSet owner.
963-975: LGTM! VAP tests maintain owner reference consistency.The admission policy tests now include proper owner reference setup, maintaining consistency with the rest of the test suite.
1032-1044: LGTM! Simplified sentinel machine creation.Using explicit names instead of generateName for sentinel/throwaway machines makes test behavior more predictable while still serving the VAP verification purpose.
Also applies to: 1232-1238
pkg/controllers/machinesync/machine_sync_controller.go (6)
126-130: LGTM! Clear terminal error definitions.The new error types correctly identify configuration issues that require user intervention rather than retry.
172-172: LGTM! More precise InfraMachine event handling.

Switching from RewriteNamespace to ResolveCAPIMachineFromInfraMachine ensures only InfraMachines with CAPI Machine owners trigger reconciliation, reducing unnecessary reconcile requests.
290-317: LGTM! Terminal error handling prevents infinite retries.

The code correctly identifies and handles terminal configuration errors by logging and not requeuing, which is appropriate for issues requiring user intervention.

Minor observation: line 316's return ctrl.Result{}, fetchErr is only reached for non-terminal errors (since terminal errors return at line 313), so the logic is correct but the flow could be slightly clearer.
431-437: LGTM! Consistent terminal error handling.

Applies the same terminal error handling pattern as in reconcileCAPIMachinetoMAPIMachine, maintaining consistency across reconciliation paths.
976-986: LGTM! Pre-flight validation prevents downstream nil pointer issues.The validation checks catch malformed machine references early and return descriptive terminal errors, preventing nil pointer dereferences in subsequent code.
1007-1008: Good: named returns improve clarity.

The function signature updates use named return values, which improves readability. The logger initialization on line 1008 is appropriate.
Also applies to: 1224-1224
Actionable comments posted: 0
🧹 Nitpick comments (1)
pkg/controllers/machinesync/machine_sync_controller.go (1)
290-316: Proper terminal error handling prevents infinite requeues.The logic correctly identifies terminal configuration errors (empty cluster name or infrastructure machine reference) and exits without requeuing, while allowing transient errors to propagate for retry. This prevents wasted reconciliation cycles for user configuration mistakes.
Note: Lines 297-300 and 311-314 contain very similar terminal error checks. Consider extracting this into a helper function to reduce duplication.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- pkg/controllers/machinesync/machine_sync_controller.go (9 hunks)
- pkg/controllers/machinesync/machine_sync_controller_test.go (12 hunks)
- pkg/controllers/machinesync/suite_test.go (2 hunks)
- pkg/util/watch_filters.go (4 hunks)
🔇 Additional comments (10)
pkg/util/watch_filters.go (1)
84-114: LGTM! Well-structured ownership-based reconciliation resolver.

The new ResolveCAPIMachineFromInfraMachine function correctly inspects owner references, parses the API group/version, and enqueues reconcile requests for the owning CAPI Machine in the MAPI namespace. The structured logging and error handling are appropriate.

pkg/controllers/machinesync/machine_sync_controller_test.go (4)
193-194: Logger setup is correct.

The use of GinkgoLogr without a package qualifier is valid because github.com/onsi/ginkgo/v2 is dot-imported (line 24), making GinkgoLogr directly accessible. Based on learnings.
271-292: Proper owner reference wiring for testing the new reconciliation flow.

The test now correctly establishes the CAPI Machine as an owner of the CAPA machine before creating it. This aligns with the updated controller behavior, where ResolveCAPIMachineFromInfraMachine uses owner references to trigger reconciliation, ensuring the watch/event triggering logic is properly tested.
473-490: Good test coverage for owner reference verification.

This test block properly verifies that the CAPA machine is created with the correct owner references pointing to the CAPI machine, which is critical for the new ownership-based reconciliation resolver introduced in pkg/util/watch_filters.go.
1032-1046: Sentinel machine pattern correctly implements VAP test verification.The use of a throwaway sentinel machine to verify that the ValidatingAdmissionPolicy is active and blocking forbidden operations is appropriate test design. The pattern ensures that subsequent test assertions can rely on the policy being enforced.
pkg/controllers/machinesync/machine_sync_controller.go (4)
126-130: Good addition of terminal error sentinels.Introducing explicit error variables for invalid infrastructure references allows the reconciler to distinguish terminal configuration errors (which should not requeue) from transient failures. This improves observability and prevents infinite requeue loops for user configuration mistakes.
171-173: Critical fix: InfraMachine watch now uses ownership-based resolution.

Replacing util.RewriteNamespace with util.ResolveCAPIMachineFromInfraMachine ensures that reconciliation is triggered correctly based on the actual owning CAPI Machine (via owner references) rather than just name-based namespace rewriting. This is essential for proper event handling when InfraMachines are updated.
976-986: Early validation prevents nil pointer dereferences.Adding pre-validation for empty cluster names and infrastructure machine references before attempting API calls is the right approach. The wrapped error messages include the machine namespace/name for debugging, which improves observability when these configuration errors occur.
1007-1008: Function signature improvements enhance clarity.

The updated signature with an explicit shouldRequeue bool return value and structured logging via logf.FromContext(ctx) improves code clarity and aligns with controller-runtime best practices. The named return parameters make the intent clear at the call sites.

pkg/controllers/machinesync/suite_test.go (1)
67-68: Test logging correctly configured with controller-runtime.

Setting both logf.SetLogger and ctrl.SetLogger to GinkgoLogr ensures that all controller-runtime logging (including manager and reconciler logs) is properly captured by Ginkgo's test output. This aligns with the structured logging migration across the PR.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pkg/controllers/machinesync/machine_sync_controller.go (2)
1028-1035: Critical: finalizer removal return-path bug; lost 'changed' flag and nil-wrapping risk.

Capture the bool from RemoveFinalizer and only wrap non-nil errors; return the requeue signal when the object changed.

```diff
- _, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
-
- return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ changed, err := util.RemoveFinalizer(ctx, r.Client, mapiMachine, SyncFinalizer)
+ if err != nil {
+     return false, fmt.Errorf("failed to remove finalizer: %w", err)
+ }
+ return changed, nil
```
1045-1051: Fix the error message on infra Delete.

This path deletes the InfraMachine, but the message says "failed to remove finalizer".

```diff
  if err := r.Client.Delete(ctx, infraMachine); err != nil {
-     return false, fmt.Errorf("failed to remove finalizer: %w", err)
+     return false, fmt.Errorf("failed to delete Cluster API infra machine: %w", err)
  }
```
♻️ Duplicate comments (6)
pkg/controllers/machinesync/machine_sync_controller_test.go (5)
357-375: Same Controller flag concern as above.Mirror the earlier suggestion: set Controller to true for InfraMachine ownerRef.
565-590: OwnerReference Controller flag — align with production.Same recommendation: Controller should be true.
651-662: Repeat: InfraMachine ownerRef Controller flag.Prefer Controller: true.
963-976: Repeat: InfraMachine ownerRef Controller flag.Prefer Controller: true.
192-195: Fix the Logger reference to komega.GinkgoLogr.

The manager's Logger must be komega.GinkgoLogr; unqualified GinkgoLogr won't compile.

```diff
- Logger: GinkgoLogr,
+ Logger: komega.GinkgoLogr,
```

pkg/controllers/machinesync/machine_sync_controller.go (1)
431-436: Duplicate: terminal error handling mirrors the CAPI->MAPI path.

Consistent behaviour; no requeue on configuration errors.
🧹 Nitpick comments (2)
pkg/controllers/machinesync/machine_sync_controller_test.go (1)
271-292: The OwnerReference Controller flag likely should be true for the InfraMachine.

The CAPI Machine is the controller of the InfraMachine; use Controller: ptr.To(true) for realism and closer parity with production.

```diff
- Controller: ptr.To(false),
+ Controller: ptr.To(true),
```
976-986: Pre-validate references before GETs; minor tidy.

The validation is correct. Optionally move the key construction below the validation for clarity:

```diff
- infraMachineKey := client.ObjectKey{Namespace: infraMachineRef.Namespace, Name: infraMachineRef.Name}
- // Validate...
+ // Validate...
  if capiMachine.Spec.ClusterName == "" { ... }
  if infraMachineRef.Name == "" || infraMachineRef.Namespace == "" { ... }
+ infraMachineKey := client.ObjectKey{Namespace: infraMachineRef.Namespace, Name: infraMachineRef.Name}
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- pkg/controllers/machinesync/machine_sync_controller.go (9 hunks)
- pkg/controllers/machinesync/machine_sync_controller_test.go (12 hunks)
- pkg/controllers/machinesync/suite_test.go (2 hunks)
- pkg/util/watch_filters.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/controllers/machinesync/suite_test.go
- pkg/util/watch_filters.go
🔇 Additional comments (12)
pkg/controllers/machinesync/machine_sync_controller_test.go (7)
307-341: LGTM: providerSpec update flow and assertions. Update -> recreate infra machine and condition expectations look correct.

466-471: LGTM: name alignment before get. Rebuilding capiMachine with the MAPI name avoids 404s.

473-490: LGTM: verifies the InfraMachine ownerRef points at the created CAPI Machine. Good UID-based assertion.

547-551: No action. Message-only change; fine to keep.

1032-1046: LGTM: VAP sentinel flow. State transition + VAP denial check looks correct.

1232-1239: LGTM: CAPI sentinel + VAP check. Covers namespace-scoped policy behaviour.
607-607: Verify whether builder state persists across test contexts.

The review comment flags a valid concern: line 607 clears GenerateName and sets Name on the suite-level mapiMachineBuilder, but line 1032 (in a different test context) calls WithGenerateName("sentinel-machine") on the same builder instance without clearing Name.

Standard Go builders typically accumulate state. If the builder retains Name from line 607, then sentinelMachine at line 1032 would have both Name and GenerateName set, which violates Kubernetes object naming rules.

Confirm:

- Whether the builder implementation resets state after .Build() or retains it
- Whether Ginkgo test context isolation resets the builder between test executions
- That sentinelMachine is created with only GenerateName and not an inherited Name

pkg/controllers/machinesync/machine_sync_controller.go (5)
126-131: Good: explicit terminal error types for invalid references. Clear sentinel errors simplify control flow and user-facing messages.

291-315: Good: do-not-requeue on terminal configuration errors. The early exit for an empty cluster name/infraRef prevents hot loops and surfaces a clear error.

1007-1015: Signature change is good; deletion path readability improved. The named return shouldRequeue clarifies intent.

1224-1264: LGTM: ensureSyncFinalizer returns shouldRequeue and aggregates errors. Correctly sets requeue when any finalizer is added; safe nil-object checks.
171-174: Mapping implementation verified as correct and robust.

The ResolveCAPIMachineFromInfraMachine function properly:

- Resolves CAPI Machines via owner references with explicit Kind and APIVersion validation
- Handles nil/empty owner references and non-Machine owners safely through iteration and filtering
- Works generically across all providers (AWS/OpenStack/PowerVS) with no provider-specific logic

The controller correctly injects r.MAPINamespace and uses FilterNamespace(r.CAPINamespace) to ensure InfraMachine watches trigger reconciliation of the corresponding MAPI Machine mirror.
/lgtm
/approve
Scheduling tests matching the
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: chrischdi

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
```go
Eventually(k.Get(capiMachine)).Should(Succeed())
})
```
I don't think this is testing anything. We already asserted that Create() succeeded in BeforeEach. Are we wondering if it was deleted since? I would just delete this test.
```go
Eventually(k.Get(
    awsv1resourcebuilder.AWSMachine().WithName(mapiMachine.Name).WithNamespace(capiNamespace.Name).Build(),
)).Should(Succeed())

It("should successfully create the CAPA machine with correct owner references", func() {
```
Ditto. I would delete this test. All it's doing is testing that we created the test objects in BeforeEach, but BeforeEach already asserted that.
A valid test would be asserting something else related to the machine sync controller, like some status.
```go
requests = append(requests, reconcile.Request{
    NamespacedName: client.ObjectKey{Namespace: namespace, Name: ref.Name},
})
}
```
Do we expect multiple Machine owner references? You should probably just return at this point.
This also means you don't need to append, you can return a slice you instantiate here, initialised with a single element.
```go
    }
}

return requests
```
Which also means you can return nil here instead of an empty slice.
/retest
Thanks, left a couple of Qs.
Also, do we need to do this for MachineSets/InfraMachineTemplates in any form?
```go
logf.SetLogger(GinkgoLogr)
ctrl.SetLogger(GinkgoLogr)
```
I worry that the other controllers suites will drift away from this change if we don't do that in the others too. Would you be able to follow up with a PR and do the same where we do the "old approach"? TY
Let's not forget about this :)
```go
if existingMAPIMachine == nil {
    // Don't requeue for terminal configuration errors
    if errors.Is(err, errInvalidInfraClusterReference) || errors.Is(err, errInvalidInfraMachineReference) {
```
You are repeating this check quite a lot, how about having a small function for this, so the configuration errors set can also be extended in a single point in the future.
e.g. something along the lines of
isTerminalConfigurationError()
I still see these repeated, would it make sense to add this util func?
```go
fetchErr := fmt.Errorf("failed to fetch Cluster API infra resources: %w", err)

if existingMAPIMachine == nil {
    // Don't requeue for terminal configuration errors
```
Do we need to do the same for MachineSets?
```go
By("Creating the CAPI infra machine")
// we must set the capi machine as an owner of the capa machine
// in order to ensure we reconcile capa changes in our sync controller.

// Updates the capaMachineBuilder with the correct owner ref,
// so when we use it later on, we don't need to repeat ourselves.
capaMachineBuilder = capaMachineBuilder.WithOwnerReferences([]metav1.OwnerReference{
    {
        Kind:               machineKind,
        APIVersion:         clusterv1.GroupVersion.String(),
        Name:               capiMachine.Name,
        UID:                capiMachine.UID,
        BlockOwnerDeletion: ptr.To(true),
        Controller:         ptr.To(false),
    },
})

capaMachine = capaMachineBuilder.Build()
Expect(k8sClient.Create(ctx, capaMachine)).To(Succeed(), "capa machine should be able to be created")
```
Same
```go
    Name:      capaMachine.GetName(),
    Namespace: capaMachine.GetNamespace(),
}).Build()
Expect(k8sClient.Create(ctx, capiMachine)).Should(Succeed())
```
I'd rather use Eventuallys instead of Expect all over the place for these k8s api calls
```go
capiMachine = capiMachineBuilder.Build()
Eventually(k8sClient.Create(ctx, capiMachine)).Should(Succeed(), "capi machine should be able to be created")

By("Updating the CAPA machine adding the CAPI machine as an owner")
```
Could we retain the By()s?
/approve cancel
There's a bunch of outstanding feedback here
Let's get that resolved.
New changes are detected. LGTM label has been removed.
@theobarberbany: This pull request references Jira Issue OCPBUGS-62325, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@theobarberbany: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Summary by CodeRabbit
New Features
Refactor
Tests
Chores