🌱 Switch e2e to kind #2209
Conversation
Skipping CI for Draft Pull Request.
Required tests have passed locally for redfish, redfish-virtualmedia and IPMI. Let's see what the CI thinks.
Cleanup suggestion. Less work to update those.
ironic-deployment/overlays/e2e-release-24.0-with-inspector/ironic_bmo_configmap.env
Force-pushed from 6480c59 to 8e3272e
Force-pushed from 8e3272e to 543ed55
Remove BMO overlays no longer used in the e2e tests. Drop the upgrade from ironic-24.0 as it is out of support and no longer needs to be tested.

Signed-off-by: Lennart Jern <[email protected]>
Minikube sometimes has trouble starting. It was nice to work with since the VM could easily be attached to the same network as the BMH VMs, but it is possible to work around that with kind as well.

Signed-off-by: Lennart Jern <[email protected]>
Signed-off-by: Lennart Jern <[email protected]>
Force-pushed from 543ed55 to 192fc86
Sometimes the fixture tests hit the timeout for namespace deletion. The BMO logs indicate that BMO is trying to create new objects, for example HardwareDetails, while the namespace is terminating. To avoid this, we can trigger deletion of the BMHs before we delete the namespace.

We are running a bit close to the 1m deadline on successful runs of the re-inspection test. I believe this is explained by an extra reconcile loop when the HardwareDetails are updated because of the inspection. No other fixture test is normally close to this deadline.

Signed-off-by: Lennart Jern <[email protected]>
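As an illustration of the cleanup ordering described in this commit message, here is a minimal sketch of deleting the BMHs and waiting for them to be gone before deleting the namespace. It assumes a controller-runtime client with the metal3 API scheme registered; the helper name, poll interval, and timeout are illustrative assumptions, not the PR's actual code.

```go
// A minimal sketch (assumption: not the PR's actual code) of ordered cleanup:
// delete all BMHs and wait for them to be gone before deleting the namespace.
package cleanup

import (
	"context"
	"time"

	metal3api "github.com/metal3-io/baremetal-operator/apis/metal3.io/v1alpha1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// cleanupNamespace (hypothetical helper) deletes BMHs first so that BMO
// observes the deletion instead of creating objects such as HardwareDetails
// in an already-terminating namespace.
func cleanupNamespace(ctx context.Context, c client.Client, namespace string) error {
	// Trigger deletion of all BMHs in the namespace.
	if err := c.DeleteAllOf(ctx, &metal3api.BareMetalHost{}, client.InNamespace(namespace)); err != nil {
		return err
	}
	// Wait until no BMHs remain, i.e. BMO has processed their finalizers.
	// The interval and timeout are illustrative.
	err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			list := &metal3api.BareMetalHostList{}
			if err := c.List(ctx, list, client.InNamespace(namespace)); err != nil {
				return false, err
			}
			return len(list.Items) == 0, nil
		})
	if err != nil {
		return err
	}
	// Only now delete the namespace itself.
	ns := &corev1.Namespace{}
	ns.Name = namespace
	return c.Delete(ctx, ns)
}
```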
Force-pushed from 192fc86 to 2f49b57
Alright, I think this is ready for review. It became a bit larger than planned... Some of it can definitely be split into separate PRs if you wish.
One nit.

Also, @kashifest brought up https://github.com/medyagh/setup-minikube as a way to set up minikube in a GH action. I did not check myself whether it would actually be flexible enough for our needs. Did you check that?
```diff
@@ -2,6 +2,8 @@ images:
   # Use locally built e2e images
   - name: quay.io/metal3-io/baremetal-operator:e2e
     loadBehavior: tryLoad
+  # - name: quay.io/metal3-io/ironic:local
```
Why have this added but commented out?
Oh I forgot about it!
@Rozzii you added the option to use a locally built ironic image recently. This is basically all that is required for it as far as I understand. If you uncomment this, the image is loaded. The rest is handled by changing the manifests directly or by pointing to another kustomization to deploy below.
Is this acceptable?
Previous process:
- Set `LOAD_LOCAL_IRONIC=true` and build the image
- Change e2e config to deploy ironic from e2e-local-ironic

Process after this PR:
- Build the image and uncomment these lines in the e2e config
- Change e2e config to deploy ironic from e2e-local-ironic
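For illustration, the `images` section of the e2e config with the local ironic entry uncommented would look roughly like this. Only the `name` line is visible in the diff above, so the `loadBehavior` line for the ironic image is an assumption mirroring the existing BMO entry:

```yaml
images:
  # Use locally built e2e images
  - name: quay.io/metal3-io/baremetal-operator:e2e
    loadBehavior: tryLoad
  # Uncommented to load a locally built ironic image into the kind cluster
  - name: quay.io/metal3-io/ironic:local
    loadBehavior: tryLoad # assumed; mirrors the entry above
```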
If we move to kind, the image transfer is not needed anymore since kind runs on the host VM, so in that context I am fine with the changes.

I am fine with doing it via the config file, although I prefer env-variable-based config, because I am a bit worried that it is easy to accidentally commit changes to ironic.yaml.

/cc @Rozzii
/lgtm
/lgtm cancel
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Rozzii. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm
🚀 let's get rid of the minikube/docker flake!
Thank you @lentzi90 for taking the time to do this.
/cherry-pick release-0.9
@lentzi90: #2209 failed to apply on top of branch "release-0.9":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What this PR does / why we need it:
Minikube sometimes has trouble starting. It was nice to work with since the VM could easily be attached to the same network as the BMH VMs, but that can also be worked around with kind.
It also removes overlays that are no longer used, as well as the upgrade tests for ironic-24.0, since these are not supported/needed anymore. With that we also get rid of the last inspector code from the scripts 🎉
Fixture tests were only uploading artifacts on success. Now they also upload on failure, just like e2e.
I have also tried to stabilize the fixture tests. There seems to be some condition where namespace deletion takes a long time. From the BMO logs I saw that BMO was trying to create some resources in the terminating namespace. In an attempt to avoid this, I added BMH deletion to the cleanup before deleting the namespace, so that BMO should at least be aware of what is coming. I am not sure that is the root cause of the slow namespace deletion, though.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):

Fixes #1783