Add cleanup jobs for Kind workflows #12549
Conversation
Skipping CI for Draft Pull Request.
@sduvvuri1603: No presubmit jobs available for kubeflow/pipelines@master

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has not yet been approved. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing an approval comment.
Signed-off-by: sduvvuri1603 <[email protected]>
/retest
Reviewed diff in `.github/actions/check-artifact-exists`:

```python
params['page'] = page
response = requests.get(url, headers=headers, params=params)

if response.status_code == requests.codes.forbidden:
```
Nice catch! Thanks for fixing it.
I am assuming 403 is when we hit some type of API rate limit?
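For context, a minimal sketch of treating a 403 as "artifact not found" while paginating, rather than letting the job crash. The function and parameter names (`find_artifact`, `max_pages`) are illustrative, not the action's actual code:

```python
import requests

def find_artifact(url, headers, name, max_pages=10):
    """Page through an artifact listing; treat HTTP 403 as "not found"."""
    for page in range(1, max_pages + 1):
        params = {"page": page}
        response = requests.get(url, headers=headers, params=params)
        if response.status_code == requests.codes.forbidden:
            # GitHub answers 403 when the API rate limit or pagination
            # cap is hit; degrade gracefully instead of failing the job.
            return None
        response.raise_for_status()
        artifacts = response.json().get("artifacts", [])
        if not artifacts:
            # Ran out of pages without finding the artifact.
            return None
        for artifact in artifacts:
            if artifact.get("name") == name:
                return artifact
    return None
```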
Reviewed diff in the workflow's log-collection step:

```yaml
if: ${{ steps.deploy.outcome != 'success' || steps.forward-frontend-port.outcome != 'success' || steps.tests.outcome != 'success' }}
run: |
  ./.github/resources/scripts/collect-logs.sh --ns kubeflow --output /tmp/tmp_pod_log.txt
  if kubectl get namespace kubeflow >/dev/null 2>&1; then
```
Wondering if we should move this check into the collect-logs.sh script itself: create a function that checks whether the namespace exists and, if it does not, skips log collection, just like here. Because this step is used in multiple places, it would be good to avoid code duplication.
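The suggested refactor could look roughly like this inside collect-logs.sh; the function names and the log-collection command are illustrative, not the script's real contents:

```shell
#!/usr/bin/env bash
# Hypothetical helpers for collect-logs.sh: skip collection when the
# target namespace never came up instead of failing the step.

namespace_exists() {
  kubectl get namespace "$1" >/dev/null 2>&1
}

collect_logs() {
  local ns="$1" output="$2"
  if ! namespace_exists "$ns"; then
    # Cluster (or namespace) never came up: nothing to collect.
    echo "Namespace '$ns' not found; skipping log collection." >&2
    return 0
  fi
  kubectl -n "$ns" get pods -o name | while read -r pod; do
    kubectl -n "$ns" logs "$pod" --all-containers=true
  done > "$output"
}
```

With this in place, each workflow step can call `collect_logs kubeflow /tmp/tmp_pod_log.txt` without duplicating the namespace check.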
Reviewed diff in the workflow steps:

```yaml
- name: Checkout code
  uses: actions/checkout@v5

- name: Free up disk space
```
I think all workflows that create/deploy a cluster need this step, so maybe the better solution is to move it into the create_cluster action and remove it from all the workflows that define it explicitly. (FYI: an AI assistant can perform this step for you very easily.)
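The suggestion above could be sketched as a composite action; the action name, pruned directories, and cluster name below are assumptions, not the repository's actual layout:

```yaml
# Hypothetical composite action: every workflow that creates a Kind
# cluster would inherit the cleanup step automatically.
name: Create KFP Kind cluster
runs:
  using: composite
  steps:
    - name: Free up disk space
      shell: bash
      run: |
        # Large preinstalled toolchains on GitHub-hosted runners.
        sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
        sudo docker image prune --all --force
    - name: Create Kind cluster
      shell: bash
      run: kind create cluster --name kfp
```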
Description of your changes:
- Add a shared cleanup step to integration-tests-v1, kfp-webhooks, and e2e-test-frontend so we free disk space and skip log collection when the kubeflow namespace never comes up; this unblocks the disk-exhaustion regressions seen in feat: Add pipeline run parallelism config #12442.
- Harden .github/actions/check-artifact-exists so GitHub's 403 pagination limit is treated as "artifact not found" rather than crashing the job.
- Fix our Docker builds (backend/Dockerfile, test/release/Dockerfile.release) to fetch the Argo CLI and git-cliff with `Accept: application/octet-stream` and then extract, avoiding the HTML responses that were breaking the unpack step.
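The Accept-header fix could look like this for the Argo CLI (the version pin and install path are illustrative, not the Dockerfile's actual values; git-cliff would follow the same pattern):

```dockerfile
# Ask GitHub for the raw release asset rather than an HTML page.
ARG ARGO_VERSION=v3.4.17
RUN curl -fsSL -H "Accept: application/octet-stream" \
      -o /tmp/argo.gz \
      "https://github.com/argoproj/argo-workflows/releases/download/${ARGO_VERSION}/argo-linux-amd64.gz" \
 && gunzip /tmp/argo.gz \
 && install -m 0755 /tmp/argo /usr/local/bin/argo
```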