
Add service template chart#595

Draft
chuckbutkus wants to merge 29 commits into main from add-service-template-chart

Conversation

@chuckbutkus
Contributor

Description

Helm Chart Checklist

  • I have updated the version field in Chart.yaml for each modified chart
  • I have tested the chart upgrade path from the previous version
  • I have verified backwards compatibility with existing values.yaml configurations
  • I have updated the chart's README.md if there are any breaking changes or new required values

Additional Notes

openhands-agent and others added 29 commits April 30, 2026 01:31
Based on the automation chart pattern, this adds a Helm chart for deploying
the service-template FastAPI microservice with:
- PostgreSQL support (subchart or external)
- Database migrations via Alembic init container
- Optional Datadog tracing integration
- Health probes (startup, liveness, readiness)
- Security context (non-root user)
- Configurable environment variables

Co-authored-by: openhands <openhands@all-hands.dev>
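The features listed above might map to a values.yaml along these lines. This is a sketch, not the chart's actual schema: only features named in the commit message are shown, and the exact key names are assumptions.

```yaml
# Sketch of a values.yaml for the service-template chart (key names assumed).
database:
  # Use the bundled PostgreSQL subchart, or point at an external instance
  internal: true
  host: ""
  port: 5432

migrations:
  # Run Alembic migrations in an init container before the app starts
  enabled: true

datadog:
  # Optional tracing integration
  enabled: false

securityContext:
  runAsNonRoot: true
  runAsUser: 1000

probes:
  startup:
    path: /health
  liveness:
    path: /health
  readiness:
    path: /ready

env:
  # Extra environment variables passed through to the container
  LOG_LEVEL: info
```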
Updates preview-helm-charts.yml, publish-helm-charts.yml, and
validate-chart-versions.yml to include the service-template chart:
- Added to detect-changes job outputs
- Added to publish-charts matrix
- Added to lint-and-test matrix
- Added publishable output for version validation

Co-authored-by: openhands <openhands@all-hands.dev>
The deployment was always expecting the 'app-slug' key in the github-app
Secret, but the Secret template only includes it when configured. This
caused pod failures with:

  Error: couldn't find key app-slug in Secret .../github-app

This fix adds a new 'github.appSlugEnabled' value (default: false) that
controls whether the GITHUB_APP_SLUG env var is included. Users who have
configured github_app_slug in their Secret should set this to true.
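In template form, the fix might look like the following sketch; the surrounding env list and the Secret name `github-app` (taken from the error message above) frame it, but the exact template layout is an assumption.

```yaml
# templates/deployment.yaml (excerpt, sketch)
# GITHUB_APP_SLUG is only rendered when github.appSlugEnabled is true,
# so the pod no longer requires the 'app-slug' key to exist in the Secret.
{{- if .Values.github.appSlugEnabled }}
- name: GITHUB_APP_SLUG
  valueFrom:
    secretKeyRef:
      name: github-app
      key: app-slug
{{- end }}
```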
- Add db-secret.yaml template that auto-generates a random password
  if no existing secret is found
- Add database.existingSecret option to control secret generation
- Bump chart version to 0.1.1

This fixes the 'secret service-template-db-secret not found' error
in feature environments by having the chart create the secret.

Co-authored-by: openhands <openhands@all-hands.dev>
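A common pattern for this kind of auto-generated secret uses Helm's `lookup` and `randAlphaNum` functions; the sketch below assumes that pattern, since the actual template is not shown in this PR description.

```yaml
# templates/db-secret.yaml (sketch)
# Generate a random password only when the user has not supplied an
# existing secret and none is already present in the namespace.
{{- if not .Values.database.existingSecret }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "service-template-db-secret" }}
{{- if not $existing }}
apiVersion: v1
kind: Secret
metadata:
  name: service-template-db-secret
type: Opaque
data:
  password: {{ randAlphaNum 32 | b64enc | quote }}
{{- end }}
{{- end }}
```

One caveat with this approach: `lookup` returns an empty result during `helm template` and dry runs, so the secret is only stable across upgrades when rendering happens server-side against a live cluster.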
- Add ingress-service-template.yaml for routing /servicetemplate and
  /api/service-template paths to the service-template service
- Add service-template configuration section to openhands values.yaml

This makes service-template accessible at <base_url>/servicetemplate,
similar to how automation is accessed at <base_url>/automations.

Co-authored-by: openhands <openhands@all-hands.dev>
Add Traefik StripPrefix middlewares so that:
- /servicetemplate/foo/bar -> /foo/bar (to backend)
- /api/service-template/foo/bar -> /foo/bar (to backend)

This allows the service-template backend to receive clean paths
without the routing prefix.

Co-authored-by: openhands <openhands@all-hands.dev>
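A Traefik StripPrefix middleware for the two prefixes above could look like this sketch (the middleware name is an assumption; the prefixes come from the commit message).

```yaml
# Traefik middleware that strips the routing prefixes before the
# request reaches the service-template backend.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: service-template-stripprefix
spec:
  stripPrefix:
    prefixes:
      - /servicetemplate
      - /api/service-template
```

The Ingress route then references the middleware (for Ingress resources, via the `traefik.ingress.kubernetes.io/router.middlewares` annotation) so that `/servicetemplate/foo/bar` arrives at the backend as `/foo/bar`.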
- Add global.ingress.host, global.ingress.prefixWithBranch, and
  global.branchSanitized to values.yaml for subchart access
- Update service-template _env.yaml to derive SERVICE_AUTH_API_BASE_URL
  from global ingress config when authApiBaseUrl is not explicitly set
- Handles branch-prefixed URLs for feature environments

For feature environments, set these global values to match the parent
ingress config so service-template can reach the OpenHands auth API.

Co-authored-by: openhands <openhands@all-hands.dev>
Use nested if statements instead of 'and' to properly check each
level of the global values hierarchy before accessing properties.

Co-authored-by: openhands <openhands@all-hands.dev>
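The reason nested ifs are needed: in the Go template versions Helm has historically used, the arguments to `and` are all evaluated before the call, so `.Values.global.ingress.host` can trigger a nil-pointer error when `.Values.global` or `.Values.global.ingress` is unset. A sketch of the guarded lookup (the env var name comes from the earlier commit; the surrounding layout is an assumption):

```yaml
# _env.yaml (excerpt, sketch)
# Each level of the global values hierarchy is checked before the
# next is accessed, avoiding nil-pointer errors on sparse values files.
{{- if not .Values.authApiBaseUrl }}
{{- if .Values.global }}
{{- if .Values.global.ingress }}
{{- if .Values.global.ingress.host }}
- name: SERVICE_AUTH_API_BASE_URL
  value: "https://{{ .Values.global.ingress.host }}"
{{- end }}
{{- end }}
{{- end }}
{{- end }}
```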
Add required Helm labels and annotations to db-secret:
- app.kubernetes.io/managed-by: Helm
- meta.helm.sh/release-name
- meta.helm.sh/release-namespace

Note: If deploying to a namespace with an existing secret that lacks
these labels, do one of the following:
1. Delete the existing secret: kubectl delete secret service-template-db-secret -n <namespace>
2. Patch it with the labels: kubectl label secret service-template-db-secret app.kubernetes.io/managed-by=Helm -n <namespace>
3. Set database.existingSecret=true in values

Co-authored-by: openhands <openhands@all-hands.dev>
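Rendered into the Secret's metadata, the labels and annotations Helm expects for ownership look roughly like this (a sketch; the label and annotation keys are the ones listed in the commit, the use of `.Release` built-ins is an assumption):

```yaml
# templates/db-secret.yaml metadata (excerpt, sketch)
metadata:
  name: service-template-db-secret
  labels:
    # Helm refuses to adopt resources without this label
    app.kubernetes.io/managed-by: Helm
  annotations:
    # These annotations tie the resource to a specific release
    meta.helm.sh/release-name: {{ .Release.Name }}
    meta.helm.sh/release-namespace: {{ .Release.Namespace }}
```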
Database secrets are now created by the deploy workflow (_k8s_deploy.yaml)
instead of the Helm chart, avoiding Helm ownership/adoption issues.

Co-authored-by: openhands <openhands@all-hands.dev>
Match the automation pattern - route /servicetemplate and
/api/service-template directly to the service without middleware.
The app now serves frontend at /servicetemplate path.

Co-authored-by: openhands <openhands@all-hands.dev>
Consistent naming with API path /api/service-template.

Co-authored-by: openhands <openhands@all-hands.dev>
When createDatabaseUser is enabled with gcp.dbInstance configured, the init
container now downloads and runs the Cloud SQL Auth Proxy to connect to
Cloud SQL and create the database/user.

This enables automatic database creation for staging and production
environments that use Cloud SQL instead of an in-cluster PostgreSQL.

The init container:
1. Downloads Cloud SQL Auth Proxy v2.15.2
2. Starts the proxy in the background
3. Waits for connectivity via the proxy
4. Creates the database and user if they don't exist
5. Stops the proxy and exits

Co-authored-by: openhands <openhands@all-hands.dev>
The postgres:14 image doesn't include curl by default. Added apt-get
install step to install curl and ca-certificates before downloading
the Cloud SQL Auth Proxy.

Co-authored-by: openhands <openhands@all-hands.dev>
The init container needs root privileges to install curl via apt-get.
Added securityContext with runAsUser: 0.

Co-authored-by: openhands <openhands@all-hands.dev>
Instead of trying to download and run Cloud SQL Auth Proxy within the
postgres container (which requires root for apt-get), use Kubernetes
1.28+ native sidecar containers.

The cloud-sql-proxy init container with restartPolicy: Always runs
continuously while the create-db-user init container executes psql
commands through it.

This approach:
- Uses the official gcr.io/cloud-sql-connectors/cloud-sql-proxy image
- Doesn't require root privileges
- Doesn't need to install any packages
- Is cleaner and more maintainable

Co-authored-by: openhands <openhands@all-hands.dev>
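The sidecar arrangement described above can be sketched as a pod spec excerpt. The image and proxy version come from the commits; the container names, args, and psql command are illustrative assumptions.

```yaml
# Pod spec excerpt (sketch): a Kubernetes 1.28+ native sidecar is an
# init container with restartPolicy: Always. It keeps running while
# the subsequent init containers and main containers execute.
initContainers:
  - name: cloud-sql-proxy
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.15.2
    restartPolicy: Always   # marks this init container as a sidecar
    args:
      - "--port=5432"
      - "{{ .Values.gcp.dbInstance }}"
  - name: create-db-user
    image: postgres:14
    command: ["sh", "-c"]
    args:
      - |
        # Reaches Cloud SQL at 127.0.0.1 through the proxy sidecar;
        # create the database and user here if they don't exist.
        psql -h 127.0.0.1 -U "$PGUSER" -c "SELECT 1"
```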
Now that the Cloud SQL Auth Proxy is a separate sidecar container, the
database creation logic is identical for GCP and non-GCP environments.

The only difference is the DB_HOST:
- GCP: 127.0.0.1 (via Cloud SQL Auth Proxy sidecar)
- Non-GCP: database.host value

Co-authored-by: openhands <openhands@all-hands.dev>
Temporarily remove output suppression to see why psql connections
are failing.

Co-authored-by: openhands <openhands@all-hands.dev>
For GCP Cloud SQL deployments, the database and user are now created
via Terraform in the infra repo instead of via Helm init containers.

Changes:
- Remove Cloud SQL Auth Proxy sidecar for DB creation
- Skip create-db-user init container when gcp.dbInstance is set
- Update values.yaml comments to clarify GCP vs non-GCP behavior

The createDatabaseUser option still works for non-GCP deployments
(e.g., self-hosted PostgreSQL or in-cluster PostgreSQL).

Co-authored-by: openhands <openhands@all-hands.dev>