6 changes: 6 additions & 0 deletions static/compatibilities/argo-workflows.yaml
@@ -0,0 +1,6 @@
icon: https://avatars.githubusercontent.com/u/30269780?s=200&v=4
git_url: https://github.com/argoproj/argo-workflows
release_url: https://github.com/argoproj/argo-workflows/releases/tag/v{vsn}
helm_repository_url: https://argoproj.github.io/argo-helm
chart_name: argo-workflows
versions: []
Comment on lines +1 to +6
Contributor

P2 Missing eolApiSlug field

Several other addon YAML files (e.g. cert-manager.yaml, cilium.yaml) include an eolApiSlug field, which main.py uses to enrich version entries with EOL dates from endoflife.date. Argo Workflows has an entry there under the slug argo-workflows.

Omitting this field means the EOL enrichment step will be skipped entirely for this addon. Consider adding:

Suggested change
icon: https://avatars.githubusercontent.com/u/30269780?s=200&v=4
git_url: https://github.com/argoproj/argo-workflows
release_url: https://github.com/argoproj/argo-workflows/releases/tag/v{vsn}
helm_repository_url: https://argoproj.github.io/argo-helm
chart_name: argo-workflows
eolApiSlug: argo-workflows
versions: []
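As a rough sketch of what such an enrichment step could look like (the comment says main.py reads eolApiSlug and queries endoflife.date; the helper names below are hypothetical, and the endpoint shape follows the public endoflife.date JSON API, where "eol" is either a date string or a boolean):

```python
import requests


def parse_eol_cycles(cycles):
    """Map each release cycle name (e.g. "3.5") to its EOL value.

    endoflife.date returns "eol" either as an ISO date string or a boolean.
    """
    return {entry["cycle"]: entry.get("eol") for entry in cycles}


def fetch_eol_dates(eol_api_slug):
    """Fetch EOL info for a product from the endoflife.date JSON API."""
    url = f"https://endoflife.date/api/{eol_api_slug}.json"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return parse_eol_cycles(response.json())
```

With eolApiSlug: argo-workflows in the YAML, a call like fetch_eol_dates("argo-workflows") would give the enrichment step a cycle-to-EOL mapping to merge into version entries.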

Member

This should have a list of discovered versions. Did the scraper not run, or is it just not yet functional?
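For context, once the scraper runs, each entry in versions should follow the row shape the scrape() function below builds (version, kube, chart_version, images, requirements, incompatibilities); the values here are illustrative only, not real scraped data:

```yaml
versions:
  - version: "3.6.2"
    kube: ["1.29", "1.30", "1.31"]
    chart_version: "0.45.0"
    images: []
    requirements: []
    incompatibilities: []
```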

1 change: 1 addition & 0 deletions static/compatibilities/manifest.yaml
@@ -31,6 +31,7 @@ names:
- gatekeeper
- tigera-operator
- argo-cd
- argo-workflows
- vector
- victoria-metrics-operator
- gpu-operator
107 changes: 107 additions & 0 deletions utils/compatibility/scrapers/argo-workflows.py
@@ -0,0 +1,107 @@
import re
import requests
from collections import OrderedDict

from utils import (
expand_kube_versions,
get_chart_versions,
get_kube_release_info,
get_github_releases_timestamps,
find_last_n_releases,
clean_kube_version,
print_error,
print_success,
print_warning,
update_compatibility_info,
)

app_name = "argo-workflows"
github_repo_owner = "argoproj"
github_repo_name = "argo-workflows"


def fetch_k8s_versions_from_tag(tag):
"""
Fetch the hack/k8s-versions.sh file for a given tag and parse the min/max K8s versions.
Returns a tuple (min_version, max_version) or None if not found/parseable.
"""
url = f"https://raw.githubusercontent.com/{github_repo_owner}/{github_repo_name}/{tag}/hack/k8s-versions.sh"
response = requests.get(url)
if response.status_code != 200:
return None

content = response.text
min_match = re.search(r'\[min\]=v?(\d+\.\d+)', content)
max_match = re.search(r'\[max\]=v?(\d+\.\d+)', content)

if min_match and max_match:
return (min_match.group(1), max_match.group(1))
return None


def fetch_github_tags():
"""Fetch release tags from GitHub API."""
tags = []
for page in range(1, 5):
url = f"https://api.github.com/repos/{github_repo_owner}/{github_repo_name}/tags"
response = requests.get(url, params={"page": page, "per_page": 100})
if response.status_code != 200:
print_error(f"Failed to fetch GitHub tags. Status code: {response.status_code}")
break
page_tags = [tag["name"] for tag in response.json()]
if not page_tags:
break
tags.extend(page_tags)
return tags
Comment on lines +42 to +55
Contributor

P2 Per-tag HTTP requests with no throttling

fetch_k8s_versions_from_tag is called for every tag that passes the "-" in tag filter and has a matching chart version. With up to 400 tags fetched across 4 pages, this can generate hundreds of sequential unauthenticated HTTP requests to raw.githubusercontent.com with no rate limiting or delay between calls.

The argo-rollouts.py scraper follows the same pattern, so this is consistent with the codebase — but given the larger number of argo-workflows releases, it may be worth adding a short delay or pre-checking whether the version already exists in the YAML before fetching.
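A minimal sketch of the suggested mitigation, assuming a shared requests.Session and a fixed sleep between calls (the function name, delay value, and session parameter are illustrative and not part of the PR):

```python
import re
import time


def fetch_k8s_versions_from_tag_throttled(tag, session, delay=0.5):
    """Variant of fetch_k8s_versions_from_tag that reuses an HTTP session
    and sleeps after each request as a crude client-side rate limit."""
    url = ("https://raw.githubusercontent.com/argoproj/argo-workflows/"
           f"{tag}/hack/k8s-versions.sh")
    response = session.get(url, timeout=10)
    time.sleep(delay)  # simple throttle between sequential requests
    if response.status_code != 200:
        return None
    # Same min/max parsing as the scraper in this PR.
    min_match = re.search(r'\[min\]=v?(\d+\.\d+)', response.text)
    max_match = re.search(r'\[max\]=v?(\d+\.\d+)', response.text)
    if min_match and max_match:
        return (min_match.group(1), max_match.group(1))
    return None
```

Pre-checking the existing YAML (skipping tags whose version already has a row) would cut the request count further, since most tags are unchanged between runs.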



def scrape():
release_tags = fetch_github_tags()
if not release_tags:
print_error("No release tags found.")
return

chart_versions = get_chart_versions(app_name)
kube_releases = get_kube_release_info()
argo_releases = list(reversed(list(get_github_releases_timestamps(github_repo_owner, github_repo_name))))
pruned_argo_releases = {r.lstrip("v"): ts for r, ts in argo_releases if "-" not in r}
Comment on lines +63 to +67
Contributor

P2 Page-count asymmetry between tag fetching and release fetching

fetch_github_tags fetches up to 4 pages (≤ 400 tags), but get_github_releases_timestamps in utils.py only fetches 2 pages (≤ 200 releases). For any tag that falls outside the 200-release window and does not have a hack/k8s-versions.sh file, the fallback at lines 85–91 will find no matching entry in pruned_argo_releases and silently skip that version:

print_warning(f"No K8s version info found for {tag}, skipping.")
continue

In practice the most recent releases are always within the first 200 entries, so current data should be complete. For completeness of historical data, consider aligning the two page counts or documenting the intentional cap.
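One way to remove the asymmetry is to paginate until GitHub returns an empty page instead of hard-coding differing page counts in each caller. A sketch of that idea (fetch_all_pages is a hypothetical helper, not an existing function in utils.py):

```python
def fetch_all_pages(fetch_page, max_pages=20):
    """Collect items from a paginated source until an empty page.

    fetch_page(page) should return a list of items for that page
    (possibly empty); max_pages is a safety cap, not a tuning knob.
    """
    items = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)
        if not batch:
            break  # an empty page means the listing is exhausted
        items.extend(batch)
    return items
```

Both fetch_github_tags and get_github_releases_timestamps could then call this with their own per-page request, e.g. fetch_all_pages(lambda p: requests.get(url, params={"page": p, "per_page": 100}).json()), so the tag and release windows can no longer drift apart.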


rows = []
for tag in release_tags:
if "-" in tag:
continue

tag_version = tag.lstrip("v")
chart_version = chart_versions.get(tag_version)
if not chart_version:
continue

k8s_versions_from_file = fetch_k8s_versions_from_tag(tag)

if k8s_versions_from_file:
min_k8s, max_k8s = k8s_versions_from_file
kube_versions = expand_kube_versions(min_k8s, max_k8s)
else:
release_ts = pruned_argo_releases.get(tag_version)
if release_ts:
compatible_kube_releases = find_last_n_releases(kube_releases, release_ts, n=3)
kube_versions = [clean_kube_version(kr[0]) for kr in compatible_kube_releases]
else:
print_warning(f"No K8s version info found for {tag}, skipping.")
continue

if not kube_versions:
print_warning(f"Could not determine K8s versions for {tag}, skipping.")
continue

rows.append(OrderedDict([
("version", tag_version),
("kube", kube_versions),
("chart_version", chart_version),
("images", []),
("requirements", []),
("incompatibilities", []),
]))
print_success(f"Fetched compatibility info for tag: {tag}")

update_compatibility_info(f"../../static/compatibilities/{app_name}.yaml", rows)