diff --git a/CLOUDNATIVEPG.md b/CLOUDNATIVEPG.md new file mode 100644 index 00000000..b600a12c --- /dev/null +++ b/CLOUDNATIVEPG.md @@ -0,0 +1,341 @@ +# CloudNativePG Integration Guide + +This guide covers the production-grade PostgreSQL deployment using CloudNativePG for Supabase on Kubernetes. + +## Overview + +CloudNativePG provides enterprise-grade PostgreSQL management with: + +- **High Availability**: Multi-replica clusters with automatic failover +- **Automated Backups**: Point-in-time recovery capabilities +- **Connection Pooling**: Built-in PgBouncer integration +- **Monitoring**: Prometheus metrics and observability +- **Rolling Updates**: Zero-downtime PostgreSQL updates + +## Architecture + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Kubernetes Cluster │ +├─────────────────────────────────────────────────────────────────┤ +│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ +│ │ Supabase │ │ CloudNativePG │ │ MinIO │ │ +│ │ Services │ │ PostgreSQL │ │ Storage │ │ +│ │ │ │ Cluster │ │ │ │ +│ │ • Auth │ │ │ │ • S3 Backend │ │ +│ │ • Storage │ │ • Primary │ │ • File Storage │ │ +│ │ • Realtime │ │ • Replicas │ │ │ │ +│ │ • Functions │ │ • Pooler │ │ │ │ +│ │ • Kong │ │ │ │ │ │ +│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │ +└─────────────────────────────────────────────────────────────────┘ +``` + +## Prerequisites + +- Kubernetes cluster (1.21+) +- Helm 3.x +- CloudNativePG operator + +## Installation Steps + +### 1. Install CloudNativePG Operator + +```bash +# Add CloudNativePG Helm repository +helm repo add cnpg https://cloudnative-pg.github.io/charts +helm repo update + +# Install CloudNativePG operator +helm install cnpg-operator cnpg/cloudnative-pg -n cnpg-system --create-namespace +``` + +### 2. 
Deploy PostgreSQL Cluster + +```bash +# Deploy PostgreSQL cluster using our configuration +helm install postgres-cluster cnpg/cluster \ + -f values/cloudnativepg/cnpg-cluster/cluster-values.yaml \ + -n supabase-dev --create-namespace +``` + +### 3. Deploy Supabase Services + +```bash +# Install Supabase with CloudNativePG configuration +helm install supabase charts/supabase \ + -f values/supabase/values-cloudnativepg.yaml \ + -n supabase-dev +``` + +### 4. Automated Deployment + +Use the provided script for one-command deployment: + +```bash +chmod +x scripts/deploy-supabase.sh +./scripts/deploy-supabase.sh +``` + +## Configuration Files + +### CloudNativePG Cluster Configuration + +**File**: `values/cloudnativepg/cnpg-cluster/cluster-values.yaml` + +Key configurations: +- **Image**: `supabase/postgres:17.5.1.024-orioledb` +- **Storage**: 100Gi with gp3 storage class +- **Resources**: 4Gi memory, 2 CPU cores +- **Extensions**: All Supabase-required extensions pre-loaded +- **Backup**: S3-compatible backup configuration + +### Supabase Integration Configuration + +**File**: `values/supabase/values-cloudnativepg.yaml` + +Key features: +- **Database**: Disabled internal PostgreSQL, uses CloudNativePG +- **High Availability**: 2+ replicas for critical services +- **Secrets**: Automatic integration with CloudNativePG secrets +- **MinIO**: S3-compatible storage backend +- **Migration**: Comprehensive database setup job + +## High Availability Features + +### Replica Configuration + +```yaml +# Enable HA in values/supabase/values-cloudnativepg.yaml +global: + highAvailability: + enabled: true + minReplicas: 2 + maxReplicas: 10 +``` + +### Anti-Affinity Rules + +Services are automatically spread across nodes: + +```yaml +auth: + replicaCount: 2 + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: auth + topologyKey: kubernetes.io/hostname +``` + 
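
The example above uses soft (preferred) anti-affinity, so the scheduler co-locates replicas if no other node fits. The `_ha-helpers.tpl` template in this change also renders a hard variant via `requiredDuringSchedulingIgnoredDuringExecution`. A minimal sketch of enabling it per service, assuming the per-service `highAvailability.antiAffinity` values that template reads:

```yaml
# Hard anti-affinity: pods stay Pending rather than co-locating replicas
# on one node. Field names assume the values schema consumed by the
# supabase.ha.antiAffinity helper in _ha-helpers.tpl.
auth:
  highAvailability:
    enabled: true
    antiAffinity:
      enabled: true
      type: hard   # "hard" renders requiredDuringSchedulingIgnoredDuringExecution
```

Hard anti-affinity guarantees replica spread but requires at least as many schedulable nodes as replicas; prefer the soft variant on small clusters.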
+### Pod Disruption Budgets + +Automatic PDB creation ensures service availability during maintenance: + +```yaml +global: + highAvailability: + podDisruptionBudget: + enabled: true + maxUnavailable: 1 +``` + +## Migration Job + +The migration job handles complete Supabase database setup: + +### Features + +- **Schema Setup**: All Supabase schemas and extensions +- **Role Management**: Proper user roles and permissions +- **JWT Configuration**: Automatic JWT secret configuration +- **Extension Installation**: All required PostgreSQL extensions +- **Error Handling**: Robust error handling for known issues + +### Monitoring Migration + +```bash +# Check migration job status +kubectl get jobs -n supabase-dev +kubectl logs job/supabase-migrations -n supabase-dev +``` + +## Monitoring and Observability + +### PostgreSQL Metrics + +CloudNativePG provides built-in Prometheus metrics: + +```bash +# Check cluster status +kubectl get cluster -n supabase-dev + +# View cluster details +kubectl describe cluster supabase-postgres-cluster -n supabase-dev +``` + +### Service Health + +```bash +# Check all pods +kubectl get pods -n supabase-dev + +# Check service endpoints +kubectl get svc -n supabase-dev +``` + +## Backup and Recovery + +### Automated Backups + +Configured in `cluster-values.yaml`: + +```yaml +backups: + enabled: true + destinationPath: "s3://supabase-backups/postgres" + scheduledBackups: + - name: daily-backup + schedule: "0 2 * * *" # Daily at 2 AM + retentionPolicy: "30d" +``` + +### Point-in-Time Recovery + +CloudNativePG supports PITR for disaster recovery scenarios. 
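
In CloudNativePG, a point-in-time recovery is performed by bootstrapping a *new* cluster from the backup object store rather than restoring in place. A minimal sketch, assuming the `s3://supabase-backups/postgres` destination configured above (the restored cluster name, secret name, and target time are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: supabase-postgres-restored
  namespace: supabase-dev
spec:
  instances: 3
  bootstrap:
    recovery:
      source: source-backup
      recoveryTarget:
        # Recover to just before the incident (timestamp is illustrative)
        targetTime: "2024-01-15 01:30:00+00"
  externalClusters:
    - name: source-backup
      barmanObjectStore:
        destinationPath: "s3://supabase-backups/postgres"
        # Backups are stored under the original cluster's name
        serverName: supabase-postgres-cluster
        s3Credentials:
          accessKeyId:
            name: backup-creds       # illustrative secret name
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-creds
            key: ACCESS_SECRET_KEY
```

Once the restored cluster is healthy, point the Supabase services at its read-write service (or swap the secrets referenced in `values-cloudnativepg.yaml`) and decommission the old cluster.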
+ +## Scaling + +### Horizontal Scaling + +```bash +# Scale auth service +kubectl scale deployment supabase-supabase-auth --replicas=3 -n supabase-dev + +# Scale PostgreSQL replicas +kubectl patch cluster supabase-postgres-cluster \ + --type='merge' -p='{"spec":{"instances":3}}' -n supabase-dev +``` + +### Vertical Scaling + +Update resource limits in configuration files and upgrade: + +```bash +helm upgrade supabase charts/supabase \ + -f values/supabase/values-cloudnativepg.yaml \ + -n supabase-dev +``` + +## Troubleshooting + +### Common Issues + +#### Migration Job Fails + +```bash +# Check migration logs +kubectl logs job/supabase-migrations -n supabase-dev + +# Common fixes: +# 1. Ensure PostgreSQL cluster is ready +kubectl get cluster -n supabase-dev +# 2. Verify database connectivity +kubectl exec -it supabase-postgres-cluster-1 -n supabase-dev -- pg_isready +``` + +#### Database Connection Issues + +```bash +# Check PostgreSQL cluster status +kubectl get cluster supabase-postgres-cluster -n supabase-dev + +# Test connectivity +kubectl exec -it supabase-postgres-cluster-1 -n supabase-dev -- \ + psql -U postgres -c "SELECT version();" +``` + +#### Service Startup Issues + +```bash +# Check service logs +kubectl logs deployment/supabase-supabase-auth -n supabase-dev + +# Verify secrets +kubectl get secret supabase-postgres-cluster-superuser -n supabase-dev -o yaml +``` + +### Performance Tuning + +#### PostgreSQL Configuration + +Adjust in `cluster-values.yaml`: + +```yaml +postgresql: + parameters: + max_connections: "200" + shared_buffers: "256MB" + effective_cache_size: "1GB" + work_mem: "4MB" +``` + +#### Connection Pooling + +PgBouncer is configured automatically: + +```yaml +poolers: + - enabled: true + name: pooler-rw + instances: 4 + pgbouncer: + poolMode: transaction + parameters: + max_client_conn: "2000" + default_pool_size: "50" +``` + +## Security Considerations + +### Network Policies + +Implement network policies to restrict traffic between 
services. + +### Secret Management + +- Use external secret management systems +- Rotate JWT secrets regularly +- Enable encryption at rest + +### Database Security + +- Enable SSL/TLS for database connections +- Use strong passwords +- Implement proper role-based access control + +## Production Checklist + +- [ ] **SSL/TLS**: Configure SSL for all endpoints +- [ ] **Monitoring**: Set up Prometheus and Grafana +- [ ] **Backups**: Configure S3 backup storage +- [ ] **Secrets**: Use external secret management +- [ ] **Network**: Implement network policies +- [ ] **Resources**: Set appropriate resource limits +- [ ] **Storage**: Use high-performance storage classes +- [ ] **Scaling**: Configure HPA and VPA +- [ ] **Disaster Recovery**: Test backup and restore procedures + +## Support + +For CloudNativePG-specific issues: +- [CloudNativePG Documentation](https://cloudnative-pg.io/documentation/) +- [CloudNativePG GitHub](https://github.com/cloudnative-pg/cloudnative-pg) + +For Supabase integration issues: +- Open an issue in this repository +- Check the troubleshooting section above diff --git a/README.md b/README.md index e0149fc7..c7d1c92d 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # Supabase Kubernetes -This repository contains the charts to deploy a [Supabase](https://github.com/supabase/supabase) instance inside a Kubernetes cluster using Helm 3. +This repository contains the charts to deploy a [Supabase](https://github.com/supabase/supabase) instance inside a Kubernetes cluster using Helm 3, with **production-ready PostgreSQL support** using CloudNativePG. For any information regarding Supabase itself you can refer to the [official documentation](https://supabase.io/docs). @@ -8,13 +8,62 @@ For any information regarding Supabase itself you can refer to the [official doc Supabase is an open source Firebase alternative. We're building the features of Firebase using enterprise-grade open source tools. 
+## Features + +✅ **Production-Ready PostgreSQL**: Optional CloudNativePG integration for enterprise-grade PostgreSQL management +✅ **High Availability**: Multi-replica deployments with anti-affinity rules and pod disruption budgets +✅ **Automated Database Setup**: Comprehensive migration job that mimics official Supabase `migrate.sh` +✅ **Horizontal Pod Autoscaling**: Automatic scaling based on CPU/memory utilization +✅ **Multi-Zone Support**: Topology spread constraints for zone-aware deployments +✅ **Backward Compatibility**: Existing embedded PostgreSQL option still available + +## Deployment Options + +### Standard Deployment (Default) +Uses the embedded PostgreSQL container - suitable for development and testing: +```bash +helm install supabase charts/supabase +``` + +### Production Deployment with CloudNativePG +For production workloads with enterprise-grade PostgreSQL: + +1. **Install CloudNativePG operator:** + ```bash + helm repo add cnpg https://cloudnative-pg.github.io/charts + helm install cnpg-operator cnpg/cloudnative-pg -n cnpg-system --create-namespace + ``` + +2. **Deploy PostgreSQL cluster:** + ```bash + helm install postgres-cluster cnpg/cluster \ + -f values/cloudnativepg/cnpg-cluster/cluster-values.yaml \ + -n supabase-dev --create-namespace + ``` + +3. **Deploy Supabase with CloudNativePG:** + ```bash + helm install supabase charts/supabase \ + -f values/supabase/values-cloudnativepg.yaml \ + -n supabase-dev + ``` + +### Quick Deployment Script +Use the automated deployment script: +```bash +chmod +x scripts/deploy-supabase.sh +./scripts/deploy-supabase.sh +``` + ## How to use ? 
You can find the documentation inside the [chart directory](./charts/supabase/README.md) # Roadmap -- [ ] Multi-node Support +- [x] Multi-node Support ✅ +- [x] High Availability ✅ +- [x] Production-grade PostgreSQL ✅ ## Support diff --git a/charts/supabase/Chart.yaml b/charts/supabase/Chart.yaml index 92a3b4de..8477341a 100644 --- a/charts/supabase/Chart.yaml +++ b/charts/supabase/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v2 name: supabase -description: The open source Firebase alternative. +description: The open source Firebase alternative with high availability and multi-node support. # A chart can be either an 'application' or a 'library' chart. # @@ -15,7 +15,7 @@ type: application # This is the chart version. This version number should be incremented each time you make changes # to the chart and its templates, including the app version. # Versions are expected to follow Semantic Versioning (https://semver.org/) -version: 0.1.3 +version: 0.2.0 # This is the version number of the application being deployed. This version number should be # incremented each time you make changes to the application. 
Versions are not expected to diff --git a/charts/supabase/templates/_ha-helpers.tpl b/charts/supabase/templates/_ha-helpers.tpl new file mode 100644 index 00000000..3a36fece --- /dev/null +++ b/charts/supabase/templates/_ha-helpers.tpl @@ -0,0 +1,98 @@ +{{/* +High Availability helper templates +*/}} + +{{/* +Get replica count for a service based on HA settings +*/}} +{{- define "supabase.ha.replicaCount" -}} +{{- $service := .service -}} +{{- $values := .values -}} +{{- $global := .global -}} +{{- if and $global.highAvailability.enabled $service.highAvailability.enabled -}} +{{- $service.highAvailability.minReplicas | default $global.highAvailability.minReplicas -}} +{{- else -}} +{{- $service.replicaCount | default 1 -}} +{{- end -}} +{{- end -}} + +{{/* +Generate anti-affinity rules for HA services +*/}} +{{- define "supabase.ha.antiAffinity" -}} +{{- $service := .service -}} +{{- $serviceName := .serviceName -}} +{{- $global := .global -}} +{{- if and $global.highAvailability.enabled $service.highAvailability.enabled $service.highAvailability.antiAffinity.enabled -}} +podAntiAffinity: + {{- if eq $service.highAvailability.antiAffinity.type "hard" }} + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: [{{ $serviceName }}] + topologyKey: kubernetes.io/hostname + {{- else }} + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: [{{ $serviceName }}] + topologyKey: kubernetes.io/hostname + {{- end }} + {{- if $global.multiZone.enabled }} + - weight: 50 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: [{{ $serviceName }}] + topologyKey: topology.kubernetes.io/zone + {{- end }} +{{- end -}} +{{- end -}} + +{{/* +Generate HPA configuration for HA services +*/}} +{{- define "supabase.ha.hpa" 
-}} +{{- $service := .service -}} +{{- $global := .global -}} +{{- if and $global.highAvailability.enabled $service.highAvailability.enabled -}} +minReplicas: {{ $service.highAvailability.minReplicas | default $global.highAvailability.minReplicas }} +maxReplicas: {{ $global.highAvailability.maxReplicas }} +targetCPUUtilizationPercentage: {{ $global.highAvailability.targetCPUUtilization }} +{{- if $global.highAvailability.targetMemoryUtilization }} +targetMemoryUtilizationPercentage: {{ $global.highAvailability.targetMemoryUtilization }} +{{- end }} +{{- else }} +minReplicas: {{ $service.autoscaling.minReplicas }} +maxReplicas: {{ $service.autoscaling.maxReplicas }} +targetCPUUtilizationPercentage: {{ $service.autoscaling.targetCPUUtilizationPercentage }} +{{- if $service.autoscaling.targetMemoryUtilizationPercentage }} +targetMemoryUtilizationPercentage: {{ $service.autoscaling.targetMemoryUtilizationPercentage }} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Generate zone spread topology constraints +*/}} +{{- define "supabase.ha.topologySpreadConstraints" -}} +{{- $serviceName := .serviceName -}} +{{- $global := .global -}} +{{- if and $global.highAvailability.enabled $global.multiZone.enabled -}} +topologySpreadConstraints: +- maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: {{ if $global.multiZone.preferZoneSpread }}ScheduleAnyway{{ else }}DoNotSchedule{{ end }} + labelSelector: + matchLabels: + app.kubernetes.io/name: {{ $serviceName }} +{{- end -}} +{{- end -}} diff --git a/charts/supabase/templates/common/hpa.yaml b/charts/supabase/templates/common/hpa.yaml new file mode 100644 index 00000000..f14ea9ef --- /dev/null +++ b/charts/supabase/templates/common/hpa.yaml @@ -0,0 +1,227 @@ +{{- if .Values.global.highAvailability.enabled }} + +{{- if and .Values.kong.enabled .Values.kong.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.kong.fullname" . 
}} + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: kong +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.kong.fullname" . }} + {{- include "supabase.ha.hpa" (dict "service" .Values.kong "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.kong.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- if and .Values.auth.enabled .Values.auth.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.auth.fullname" . }} + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: auth +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.auth.fullname" . }} + {{- include "supabase.ha.hpa" (dict "service" .Values.auth "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.auth.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- if and .Values.rest.enabled .Values.rest.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.rest.fullname" . 
}} + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: rest +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.rest.fullname" . }} + {{- include "supabase.ha.hpa" (dict "service" .Values.rest "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.rest.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- if and .Values.realtime.enabled .Values.realtime.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.realtime.fullname" . }} + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: realtime +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.realtime.fullname" . 
}} + {{- include "supabase.ha.hpa" (dict "service" .Values.realtime "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.realtime.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- if and .Values.meta.enabled .Values.meta.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.meta.fullname" . }} + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: meta +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.meta.fullname" . }} + {{- include "supabase.ha.hpa" (dict "service" .Values.meta "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.meta.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- if and .Values.storage.enabled .Values.storage.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.storage.fullname" . }} + labels: + {{- include "supabase.labels" . 
| nindent 4 }} + app.kubernetes.io/component: storage +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.storage.fullname" . }} + {{- include "supabase.ha.hpa" (dict "service" .Values.storage "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.storage.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- if and .Values.analytics.enabled .Values.analytics.autoscaling.enabled }} +--- +apiVersion: autoscaling/v2 +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "supabase.analytics.fullname" . }} + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: analytics +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "supabase.analytics.fullname" . 
}} + {{- include "supabase.ha.hpa" (dict "service" .Values.analytics "global" .Values.global) | nindent 2 }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetCPUUtilization | default .Values.analytics.autoscaling.targetCPUUtilizationPercentage }} + {{- if .Values.global.highAvailability.targetMemoryUtilization }} + - type: Resource + resource: + name: memory + target: + type: Utilization + averageUtilization: {{ .Values.global.highAvailability.targetMemoryUtilization }} + {{- end }} +{{- end }} + +{{- end }} diff --git a/charts/supabase/templates/common/poddisruptionbudget.yaml b/charts/supabase/templates/common/poddisruptionbudget.yaml new file mode 100644 index 00000000..3cc26572 --- /dev/null +++ b/charts/supabase/templates/common/poddisruptionbudget.yaml @@ -0,0 +1,125 @@ +{{- if .Values.global.highAvailability.enabled }} +{{- if .Values.global.highAvailability.podDisruptionBudget.enabled }} + +{{- if .Values.kong.enabled }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: {{ include "supabase.kong.fullname" . }}-pdb + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: kong +spec: + {{- if .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + {{- else }} + maxUnavailable: {{ .Values.global.highAvailability.podDisruptionBudget.maxUnavailable | default 1 }} + {{- end }} + selector: + matchLabels: + {{- include "supabase.kong.selectorLabels" . | nindent 6 }} +{{- end }} + +{{- if .Values.auth.enabled }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: {{ include "supabase.auth.fullname" . }}-pdb + labels: + {{- include "supabase.labels" . 
| nindent 4 }} + app.kubernetes.io/component: auth +spec: + {{- if .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + {{- else }} + maxUnavailable: {{ .Values.global.highAvailability.podDisruptionBudget.maxUnavailable | default 1 }} + {{- end }} + selector: + matchLabels: + {{- include "supabase.auth.selectorLabels" . | nindent 6 }} +{{- end }} + +{{- if .Values.rest.enabled }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: {{ include "supabase.rest.fullname" . }}-pdb + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: rest +spec: + {{- if .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + {{- else }} + maxUnavailable: {{ .Values.global.highAvailability.podDisruptionBudget.maxUnavailable | default 1 }} + {{- end }} + selector: + matchLabels: + {{- include "supabase.rest.selectorLabels" . | nindent 6 }} +{{- end }} + +{{- if .Values.realtime.enabled }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: {{ include "supabase.realtime.fullname" . }}-pdb + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: realtime +spec: + {{- if .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + {{- else }} + maxUnavailable: {{ .Values.global.highAvailability.podDisruptionBudget.maxUnavailable | default 1 }} + {{- end }} + selector: + matchLabels: + {{- include "supabase.realtime.selectorLabels" . | nindent 6 }} +{{- end }} + +{{- if .Values.meta.enabled }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: {{ include "supabase.meta.fullname" . }}-pdb + labels: + {{- include "supabase.labels" . 
| nindent 4 }} + app.kubernetes.io/component: meta +spec: + {{- if .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + {{- else }} + maxUnavailable: {{ .Values.global.highAvailability.podDisruptionBudget.maxUnavailable | default 1 }} + {{- end }} + selector: + matchLabels: + {{- include "supabase.meta.selectorLabels" . | nindent 6 }} +{{- end }} + +{{- if .Values.storage.enabled }} +--- +apiVersion: policy/v1 +kind: PodDisruptionBudget +metadata: + name: {{ include "supabase.storage.fullname" . }}-pdb + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: storage +spec: + {{- if .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.global.highAvailability.podDisruptionBudget.minAvailable }} + {{- else }} + maxUnavailable: {{ .Values.global.highAvailability.podDisruptionBudget.maxUnavailable | default 1 }} + {{- end }} + selector: + matchLabels: + {{- include "supabase.storage.selectorLabels" . | nindent 6 }} +{{- end }} + +{{- end }} +{{- end }} diff --git a/charts/supabase/templates/migrations/job.yaml b/charts/supabase/templates/migrations/job.yaml new file mode 100644 index 00000000..8d140670 --- /dev/null +++ b/charts/supabase/templates/migrations/job.yaml @@ -0,0 +1,352 @@ +{{- if .Values.migrations.enabled }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ include "supabase.fullname" . }}-migrations + labels: + {{- include "supabase.labels" . | nindent 4 }} + app.kubernetes.io/component: migrations + annotations: + "helm.sh/hook": post-install,post-upgrade + "helm.sh/hook-weight": "-5" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded +spec: + template: + metadata: + labels: + {{- include "supabase.selectorLabels" . 
| nindent 8 }} + app.kubernetes.io/component: migrations + spec: + restartPolicy: Never + containers: + - name: supabase-migrations + image: {{ .Values.migrations.image.repository }}:{{ .Values.migrations.image.tag | default "latest" }} + imagePullPolicy: {{ .Values.migrations.image.pullPolicy }} + env: + - name: POSTGRES_HOST + value: {{ .Values.migrations.database.host | quote }} + - name: POSTGRES_PORT + value: {{ .Values.migrations.database.port | quote }} + - name: POSTGRES_DB + value: {{ .Values.migrations.database.name | quote }} + - name: POSTGRES_USER + value: {{ .Values.migrations.database.user | quote }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ .Values.migrations.database.secretRef | quote }} + key: {{ .Values.migrations.database.secretKey | quote }} + - name: SUPABASE_PASSWORD + valueFrom: + secretKeyRef: + name: {{ .Values.migrations.database.secretRef | quote }} + key: {{ .Values.migrations.database.secretKey | quote }} + command: + - /bin/bash + - -c + - | + # Mimic migrate.sh script but with better error handling + set -e # Keep exit on error, but handle specific known issues + + echo "Starting Supabase migrations (mimicking migrate.sh)..." + + # Set environment variables exactly like migrate.sh + export PGDATABASE="${POSTGRES_DB:-postgres}" + export PGHOST="${POSTGRES_HOST:-localhost}" + export PGPORT="${POSTGRES_PORT:-5432}" + export PGPASSWORD="${POSTGRES_PASSWORD:-}" + + # Wait for database to be ready + until pg_isready -h "$PGHOST" -p "$PGPORT" -U postgres; do + echo "Waiting for database to be ready..." + sleep 2 + done + + echo "Database is ready. Starting migration process..." + + # Get the directory path like migrate.sh does + db="/docker-entrypoint-initdb.d" + + # First, set the password for supabase_admin (roles exist but no password) + echo "Setting password for supabase_admin..." 
+ psql -v ON_ERROR_STOP=1 --no-password --no-psqlrc -U postgres -c "ALTER USER supabase_admin WITH PASSWORD '$PGPASSWORD';" + + # First part: Create postgres role if it doesn't exist (like migrate.sh) + echo "Ensuring postgres role exists..." + psql -v ON_ERROR_STOP=1 --no-password --no-psqlrc -U supabase_admin </dev/null || echo "_supabase database already exists (expected)" + + # Create analytics schema and table in _supabase database + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "_supabase" -c " + CREATE SCHEMA IF NOT EXISTS _analytics; + GRANT ALL ON SCHEMA _analytics TO supabase_admin; + + CREATE TABLE IF NOT EXISTS _analytics.system_metrics ( + id serial primary key, + all_logs_logged boolean, + node text, + inserted_at timestamp, + updated_at timestamp + ); + GRANT ALL ON TABLE _analytics.system_metrics TO supabase_admin; + GRANT ALL ON SEQUENCE _analytics.system_metrics_id_seq TO supabase_admin; + GRANT ALL ON DATABASE _supabase TO supabase_admin; + " + + echo "Running additional Supabase initialization scripts..." + + # 99-jwt.sql - JWT settings + echo "Setting up JWT configuration..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + ALTER DATABASE postgres SET \"app.settings.jwt_secret\" TO '{{ .Values.secret.jwt.secret }}'; + ALTER DATABASE postgres SET \"app.settings.jwt_exp\" TO '3600'; + " || echo "Warning: JWT settings may have failed (continuing)" + + # 99-logs.sql - Analytics schema (already done above, but ensure ownership) + echo "Ensuring analytics schema ownership..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + CREATE SCHEMA IF NOT EXISTS _analytics; + ALTER SCHEMA _analytics OWNER TO postgres; + " || echo "Warning: Analytics schema setup may have failed (continuing)" + + # 99-realtime.sql - Realtime schema + echo "Setting up realtime schema..." 
+ psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + CREATE SCHEMA IF NOT EXISTS _realtime; + ALTER SCHEMA _realtime OWNER TO postgres; + " || echo "Warning: Realtime schema setup may have failed (continuing)" + + # 99-roles.sql - Set passwords for roles + echo "Setting passwords for Supabase roles..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + ALTER USER authenticator WITH PASSWORD '$PGPASSWORD'; + ALTER USER pgbouncer WITH PASSWORD '$PGPASSWORD'; + ALTER USER supabase_auth_admin WITH PASSWORD '$PGPASSWORD'; + ALTER USER supabase_storage_admin WITH PASSWORD '$PGPASSWORD'; + " || echo "Warning: Role password setup may have failed (continuing)" + + # 98-webhooks.sql - Functions and webhooks setup (broken into individual commands) + echo "Setting up functions and webhooks..." + + # Create pg_net extension + echo "Creating pg_net extension..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c "CREATE EXTENSION IF NOT EXISTS pg_net SCHEMA extensions;" || echo "Warning: pg_net extension creation failed (continuing)" + + # Create supabase_functions schema + echo "Creating supabase_functions schema..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + CREATE SCHEMA IF NOT EXISTS supabase_functions; + ALTER SCHEMA supabase_functions OWNER TO supabase_admin; + GRANT USAGE ON SCHEMA supabase_functions TO postgres, anon, authenticated, service_role; + ALTER DEFAULT PRIVILEGES IN SCHEMA supabase_functions GRANT ALL ON TABLES TO postgres, anon, authenticated, service_role; + ALTER DEFAULT PRIVILEGES IN SCHEMA supabase_functions GRANT ALL ON FUNCTIONS TO postgres, anon, authenticated, service_role; + ALTER DEFAULT PRIVILEGES IN SCHEMA supabase_functions GRANT ALL ON SEQUENCES TO postgres, anon, authenticated, service_role; + " || echo "Warning: supabase_functions schema creation failed (continuing)" + + # Create supabase_functions tables + echo "Creating supabase_functions tables..." 
+ psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + CREATE TABLE IF NOT EXISTS supabase_functions.migrations ( + version text PRIMARY KEY, + inserted_at timestamptz NOT NULL DEFAULT NOW() + ); + INSERT INTO supabase_functions.migrations (version) VALUES ('initial') ON CONFLICT DO NOTHING; + + CREATE TABLE IF NOT EXISTS supabase_functions.hooks ( + id bigserial PRIMARY KEY, + hook_table_id integer NOT NULL, + hook_name text NOT NULL, + created_at timestamptz NOT NULL DEFAULT NOW(), + request_id bigint + ); + + CREATE INDEX IF NOT EXISTS supabase_functions_hooks_request_id_idx ON supabase_functions.hooks USING btree (request_id); + CREATE INDEX IF NOT EXISTS supabase_functions_hooks_h_table_id_h_name_idx ON supabase_functions.hooks USING btree (hook_table_id, hook_name); + " || echo "Warning: supabase_functions tables creation failed (continuing)" + + # Create supabase_functions_admin role + echo "Creating supabase_functions_admin role..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + DO \$\$ + BEGIN + IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'supabase_functions_admin') THEN + CREATE USER supabase_functions_admin NOINHERIT CREATEROLE LOGIN NOREPLICATION; + END IF; + END \$\$; + " || echo "Warning: supabase_functions_admin role creation failed (continuing)" + + # Set up supabase_functions permissions + echo "Setting up supabase_functions permissions..." 
+ psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + GRANT ALL PRIVILEGES ON SCHEMA supabase_functions TO supabase_functions_admin; + GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA supabase_functions TO supabase_functions_admin; + GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA supabase_functions TO supabase_functions_admin; + ALTER USER supabase_functions_admin SET search_path = 'supabase_functions'; + ALTER TABLE supabase_functions.migrations OWNER TO supabase_functions_admin; + ALTER TABLE supabase_functions.hooks OWNER TO supabase_functions_admin; + GRANT supabase_functions_admin TO postgres; + " || echo "Warning: supabase_functions permissions setup failed (continuing)" + + # Set up pg_net permissions + echo "Setting up pg_net permissions..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + DO \$\$ + BEGIN + IF EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_net') THEN + GRANT USAGE ON SCHEMA net TO supabase_functions_admin, postgres, anon, authenticated, service_role; + END IF; + END \$\$; + " || echo "Warning: pg_net permissions setup failed (continuing)" + + # Create additional extensions + echo "Creating additional extensions..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + DO \$\$ + BEGIN + CREATE EXTENSION IF NOT EXISTS pgjwt SCHEMA extensions; + EXCEPTION WHEN OTHERS THEN + RAISE NOTICE 'Could not create pgjwt extension: %', SQLERRM; + END \$\$; + " || echo "Warning: pgjwt extension creation failed (continuing)" + + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + CREATE SCHEMA IF NOT EXISTS pgsodium; + CREATE EXTENSION IF NOT EXISTS pgsodium SCHEMA pgsodium; + " || echo "Warning: pgsodium schema and extension creation failed (continuing)" + + echo "Fixing schema permissions to match reference setup..." + + # Fix auth schema permissions (postgres should have UC instead of U) + echo "Fixing auth schema permissions..." 
+ psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + GRANT CREATE ON SCHEMA auth TO postgres; + " || echo "Warning: auth schema permissions fix failed (continuing)" + + # Fix storage schema permissions (postgres should have UC instead of U*) + echo "Fixing storage schema permissions..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + GRANT CREATE ON SCHEMA storage TO postgres; + " || echo "Warning: storage schema permissions fix failed (continuing)" + + # Fix pgsodium schema permissions (should be owned by supabase_admin, not postgres) + echo "Fixing pgsodium schema permissions..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + ALTER SCHEMA pgsodium OWNER TO supabase_admin; + GRANT USAGE ON SCHEMA pgsodium TO public; + " || echo "Warning: pgsodium schema permissions fix failed (continuing)" + + # Fix vault schema permissions (should have pgsodium_keyiduser instead of postgres/service_role) + echo "Fixing vault schema permissions..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + -- Create pgsodium_keyiduser role if it doesn't exist + DO \$\$ + BEGIN + IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'pgsodium_keyiduser') THEN + CREATE ROLE pgsodium_keyiduser NOINHERIT; + END IF; + END \$\$; + + -- Fix vault schema permissions + REVOKE ALL ON SCHEMA vault FROM postgres, service_role; + GRANT CREATE, USAGE ON SCHEMA vault TO pgsodium_keyiduser; + " || echo "Warning: vault schema permissions fix failed (continuing)" + + # Fix net schema permissions (remove extra =U/postgres) + echo "Fixing net schema permissions..." 
+ psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + DO \$\$ + BEGIN + IF EXISTS (SELECT 1 FROM pg_namespace WHERE nspname = 'net') THEN + REVOKE USAGE ON SCHEMA net FROM public; + END IF; + END \$\$; + " || echo "Warning: net schema permissions fix failed (continuing)" + + # Create supabase_migrations schema if missing + echo "Creating supabase_migrations schema..." + psql -h "$PGHOST" -p "$PGPORT" -U postgres -d "$PGDATABASE" -c " + CREATE SCHEMA IF NOT EXISTS supabase_migrations; + ALTER SCHEMA supabase_migrations OWNER TO supabase_admin; + " || echo "Warning: supabase_migrations schema creation failed (continuing)" + + echo "Schema permissions fixes completed!" + + echo "Additional Supabase setup completed!" + + echo "Supabase migrations completed successfully (migrate.sh style)!" + resources: + {{- toYaml .Values.migrations.resources | nindent 10 }} + {{- with .Values.migrations.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.migrations.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.migrations.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/supabase/values.yaml b/charts/supabase/values.yaml index 038c17a4..d0cd5952 100644 --- a/charts/supabase/values.yaml +++ b/charts/supabase/values.yaml @@ -1,4 +1,5 @@ # File structure of values.yaml: +# |-- 0. Global & High Availability # |-- 1. Database # |-- 2. Studio # |-- 3. Auth @@ -12,6 +13,32 @@ # |-- 11. Vector # |-- 12. Functions # |-- 13. Minio +# |-- 14. 
Migrations (CloudNativePG) + +# ============================================================================= +# GLOBAL CONFIGURATION & HIGH AVAILABILITY +# ============================================================================= + +global: + # High Availability configuration + # Enable this for production deployments with multiple replicas and advanced features + highAvailability: + enabled: false + minReplicas: 2 + maxReplicas: 10 + targetCPUUtilization: 70 + targetMemoryUtilization: 80 + + # Pod Disruption Budget settings + podDisruptionBudget: + enabled: true + maxUnavailable: 1 + # minAvailable: 1 # Alternative to maxUnavailable + + # Multi-zone deployment support + multiZone: + enabled: false + preferZoneSpread: true secret: # jwt will be used to reference secret in multiple services: @@ -177,7 +204,7 @@ studio: image: repository: supabase/studio pullPolicy: IfNotPresent - tag: "latest" + tag: "2025.09.08-sha-67c1421" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -252,7 +279,7 @@ auth: image: repository: supabase/gotrue pullPolicy: IfNotPresent - tag: "latest" + tag: "v2.179.0" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -353,7 +380,7 @@ rest: image: repository: postgrest/postgrest pullPolicy: IfNotPresent - tag: "latest" + tag: "v13.0.6" imagePullSecrets: [] nameOverride: "" fullnameOverride: "" @@ -430,7 +457,7 @@ realtime: image: repository: supabase/realtime pullPolicy: IfNotPresent - tag: "latest" + tag: "v2.47.2" imagePullSecrets: [] nameOverride: "" fullnameOverride: "" @@ -511,7 +538,7 @@ meta: image: repository: supabase/postgres-meta pullPolicy: IfNotPresent - tag: "latest" + tag: "v0.91.6" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -586,7 +613,7 @@ storage: image: repository: supabase/storage-api pullPolicy: IfNotPresent - tag: "latest" + tag: "v1.26.7" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -681,7 +708,7 @@ imgproxy: image: repository: darthsim/imgproxy pullPolicy: IfNotPresent - tag: 
"latest" + tag: "v3.29.1" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -761,7 +788,7 @@ kong: image: repository: kong pullPolicy: IfNotPresent - tag: "latest" + tag: "3.9.1" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -856,7 +883,7 @@ analytics: image: repository: supabase/logflare pullPolicy: IfNotPresent - tag: "latest" + tag: "1.21.1" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -940,7 +967,7 @@ vector: image: repository: timberio/vector pullPolicy: IfNotPresent - tag: "latest" + tag: "0.42.0-alpine" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -1007,7 +1034,7 @@ functions: image: repository: supabase/edge-runtime pullPolicy: IfNotPresent - tag: "latest" + tag: "v1.68.3" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -1081,7 +1108,7 @@ minio: image: repository: minio/minio pullPolicy: IfNotPresent - tag: "latest" + tag: "RELEASE.2025-09-07T16-13-09Z" imagePullSecrets: [] replicaCount: 1 nameOverride: "" @@ -1150,3 +1177,40 @@ minio: nodeSelector: {} tolerations: [] affinity: {} + + +# ============================================================================= +# MIGRATIONS (CloudNativePG Integration) +# ============================================================================= + +# CloudNativePG Migration Job +# This job handles database initialization and migration for CloudNativePG deployments +# Only enable this when using CloudNativePG as your PostgreSQL backend +migrations: + enabled: false # Set to true when using CloudNativePG + + image: + repository: supabase/postgres + tag: "15.14.1.003" + pullPolicy: IfNotPresent + + # Database connection configuration + database: + host: "supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local" + port: "5432" + name: "postgres" + user: "postgres" + secretRef: "supabase-postgres-cluster-superuser" + secretKey: "password" + + resources: + requests: + cpu: 100m + memory: 256Mi + limits: + cpu: 500m + memory: 512Mi + + nodeSelector: {} + tolerations: [] + 
affinity: {} diff --git a/cloudnativepg-values/cnpg-cluster/cluster-values.yaml b/cloudnativepg-values/cnpg-cluster/cluster-values.yaml new file mode 100644 index 00000000..6a06340c --- /dev/null +++ b/cloudnativepg-values/cnpg-cluster/cluster-values.yaml @@ -0,0 +1,156 @@ +cluster: + instances: 1 + imageName: "supabase/postgres:15.14.1.003" + primaryUpdateStrategy: unsupervised + postgresUID: 101 + postgresGID: 102 + + storage: + size: 100Gi + storageClass: gp3 + + walStorage: + enabled: true + size: 50Gi + storageClass: gp3 + + resources: + requests: + memory: "4Gi" + cpu: "2000m" + limits: + memory: "8Gi" + cpu: "4000m" + + # PostgreSQL configuration for Supabase extensions (based on working CNPG config) + postgresql: + parameters: + max_connections: "200" + shared_buffers: "256MB" + effective_cache_size: "1GB" + work_mem: "4MB" + maintenance_work_mem: "64MB" + wal_level: "logical" + max_wal_senders: "10" + max_replication_slots: "10" + track_activities: "on" + track_counts: "on" + track_io_timing: "on" + track_functions: "all" + cron.database_name: "postgres" + # Use the exact shared_preload_libraries from working CNPG config + shared_preload_libraries: + - "pg_stat_statements" + - "pg_stat_monitor" + - "pgaudit" + - "plpgsql" + - "plpgsql_check" + - "pg_cron" + - "pg_net" + - "auto_explain" + - "pg_tle" + - "supautils" + # Add pg_hba configuration from working example + pg_hba: + - "local all supabase_admin scram-sha-256" + - "local all all peer map=supabase_map" + - "host all all 127.0.0.1/32 trust" + - "host all all ::1/128 trust" + - "host all all 10.0.0.0/8 scram-sha-256" + - "host all all 172.16.0.0/12 scram-sha-256" + - "host all all 192.168.0.0/16 scram-sha-256" + - "host all all 0.0.0.0/0 scram-sha-256" + - "host all all ::0/0 scram-sha-256" + # Add pg_ident configuration + pg_ident: + - "supabase_map postgres postgres" + - "supabase_map gotrue supabase_auth_admin" + - "supabase_map postgrest authenticator" + - "supabase_map adminapi postgres" + + # 
Declarative role management for Supabase + roles: + - name: authenticator + ensure: present + login: true + inherit: false + inRoles: + - anon + - authenticated + - service_role + - name: supabase_admin + ensure: present + login: true + inherit: true + createrole: true + bypassrls: true + superuser: true + # Database initialization with Supabase setup + initdb: + database: postgres + owner: postgres + encoding: UTF8 + secret: + name: supabase-postgres-cluster-superuser + postInitApplicationSQLRefs: {} + #configMapRefs: + #- name: post-init-sql-configmap + # key: configmap.sql + + # Enable monitoring + monitoring: + enabled: true + podMonitor: + enabled: true + prometheusRule: + enabled: true + customQueries: [] +poolers: + - enabled: true + name: pooler-rw + instances: 4 + type: rw + pgbouncer: + poolMode: transaction + parameters: + max_client_conn: "2000" + default_pool_size: "50" + max_db_connections: "200" + max_user_connections: "200" + server_reset_query: "DISCARD ALL" + resources: + requests: + cpu: "200m" + memory: "256Mi" + limits: + cpu: "500m" + memory: "512Mi" + +# ============================================================================= +# BACKUP CONFIGURATION +# ============================================================================= + +backups: + enabled: true + destinationPath: "s3://supabase-backups/postgres" + s3: + region: "us-west-2" + bucket: "supabase-backups" + path: "/postgres" + inheritFromIAMRole: true + + # Backup schedule and retention + scheduledBackups: + - name: daily-backup + schedule: "0 2 * * *" # Daily at 2 AM + backupOwnerReference: self + + retentionPolicy: "30d" + + # WAL and data backup settings + wal: + compression: gzip + encryption: AES256 + data: + compression: gzip + encryption: AES256 diff --git a/cloudnativepg-values/cnpg-operator/operator-values.yaml b/cloudnativepg-values/cnpg-operator/operator-values.yaml new file mode 100644 index 00000000..7ba0834b --- /dev/null +++ 
b/cloudnativepg-values/cnpg-operator/operator-values.yaml @@ -0,0 +1,191 @@ +# CloudNativePG Operator Helm Values +# This file configures the CloudNativePG operator deployment using official chart structure + +# ============================================================================= +# OPERATOR CONFIGURATION +# ============================================================================= + +# High availability for production +replicaCount: 2 + +# Operator image configuration +image: + repository: ghcr.io/cloudnative-pg/cloudnative-pg + pullPolicy: IfNotPresent + tag: "1.27.0" + +# ============================================================================= +# RESOURCE CONFIGURATION +# ============================================================================= + +# Resource requests and limits for the operator +resources: + requests: + cpu: 200m + memory: 400Mi + limits: + cpu: 1000m + memory: 1Gi + +# ============================================================================= +# SECURITY CONFIGURATION +# ============================================================================= + +# Container Security Context +containerSecurityContext: + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + runAsUser: 10001 + runAsGroup: 10001 + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - "ALL" + +# Security Context for the whole pod +podSecurityContext: + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + +# ============================================================================= +# SERVICE ACCOUNT CONFIGURATION +# ============================================================================= + +serviceAccount: + create: true + name: "" + +# ============================================================================= +# RBAC CONFIGURATION +# ============================================================================= + +rbac: + create: true + aggregateClusterRoles: false + +# 
============================================================================= +# WEBHOOK CONFIGURATION +# ============================================================================= + +webhook: + port: 9443 + mutating: + create: true + failurePolicy: Fail + validating: + create: true + failurePolicy: Fail + livenessProbe: + initialDelaySeconds: 3 + readinessProbe: + initialDelaySeconds: 3 + startupProbe: + failureThreshold: 6 + periodSeconds: 5 + +# ============================================================================= +# OPERATOR CONFIGURATION +# ============================================================================= + +config: + create: true + name: cnpg-controller-manager-config + secret: false + clusterWide: true + # Production concurrent reconciles + maxConcurrentReconciles: 10 + data: + INHERITED_ANNOTATIONS: "service.beta.kubernetes.io/*" + INHERITED_LABELS: "" + WATCH_NAMESPACE: "" + +# ============================================================================= +# MONITORING CONFIGURATION +# ============================================================================= + +monitoring: + podMonitorEnabled: false + #podMonitorMetricRelabelings: [] + #podMonitorRelabelings: [] + #podMonitorAdditionalLabels: + # app.kubernetes.io/part-of: supabase + # environment: production + # team: platform + + grafanaDashboard: + create: false + # namespace: monitoring + # labels: + # grafana_dashboard: "1" + annotations: + # grafana-folder: "CloudNativePG" + +# ============================================================================= +# SERVICE CONFIGURATION +# ============================================================================= + +service: + type: ClusterIP + name: cnpg-webhook-service + port: 443 + +# ============================================================================= +# AVAILABILITY CONFIGURATION +# ============================================================================= + +# Node selector +nodeSelector: {} + +# Tolerations 
+tolerations: [] + +# Affinity (automatically enabled for production with replicaCount > 1) +affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/name: cloudnative-pg + topologyKey: kubernetes.io/hostname + +# Topology spread constraints +topologySpreadConstraints: [] + +# ============================================================================= +# ADDITIONAL CONFIGURATION +# ============================================================================= + +# Annotations to be added to all other resources +commonAnnotations: + app.kubernetes.io/managed-by: "helm" + app.kubernetes.io/part-of: "supabase" + +# Annotations to be added to the pod +podAnnotations: {} + +# Labels to be added to the pod +podLabels: + environment: production + team: platform + +# Priority class name +priorityClassName: "" + +# Update strategy +updateStrategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + +# Additional arguments +additionalArgs: [] + +# Additional environment variables +additionalEnv: +- name: LOG_LEVEL + value: "info" \ No newline at end of file diff --git a/cloudnativepg-values/supabase/cloudnativepg-values.yaml b/cloudnativepg-values/supabase/cloudnativepg-values.yaml new file mode 100644 index 00000000..cace356a --- /dev/null +++ b/cloudnativepg-values/supabase/cloudnativepg-values.yaml @@ -0,0 +1,525 @@ +# Supabase Helm Chart Values for CloudNativePG Integration +# This configuration assumes PostgreSQL is managed by CloudNativePG operator + +# ============================================================================= +# DATABASE CONFIGURATION +# ============================================================================= + +# Disable internal PostgreSQL deployment +db: + enabled: false + +# Database connection secrets (using CloudNativePG secrets) +secret: + # JWT tokens for Supabase services + jwt: + anonKey: "" + 
serviceKey: "" + secret: "" + # Database credentials - use CloudNativePG secret with database override + db: + username: postgres + password: "" # Will be retrieved from secret + database: postgres + # Use CloudNativePG secret and override database name via environment variables + secretRef: supabase-postgres-cluster-superuser + secretRefKey: + username: username + password: password + # Analytics API key + analytics: + apiKey: apiKey + # SMTP configuration + smtp: + username: "noreply@example.com" + password: "smtp-password-placeholder" + # Dashboard credentials + dashboard: + username: "admin" + password: "admin-password" + +# ============================================================================= +# HIGH AVAILABILITY CONFIGURATION +# ============================================================================= + +# Auth service +auth: + enabled: true + replicaCount: 2 + environment: + # Individual variables for init containers + DB_HOST: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + DB_USER: supabase_auth_admin + DB_PORT: "5432" + DB_DRIVER: postgres + DB_SSL: disable + # Connection string for main container - will be constructed from secret + # GOTRUE_DB_DATABASE_URL will be set via secretKeyRef in deployment template + GOTRUE_DB_DRIVER: postgres + # Auth configuration + API_EXTERNAL_URL: https://supabase-dev.preview.ingenimax.ai + GOTRUE_API_HOST: "0.0.0.0" + GOTRUE_API_PORT: "9999" + GOTRUE_SITE_URL: https://supabase-dev.preview.ingenimax.ai + GOTRUE_URI_ALLOW_LIST: "*" + GOTRUE_DISABLE_SIGNUP: "false" + GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated + GOTRUE_JWT_ADMIN_ROLES: service_role + GOTRUE_JWT_AUD: authenticated + GOTRUE_JWT_EXP: "3600" + GOTRUE_EXTERNAL_EMAIL_ENABLED: "true" + GOTRUE_MAILER_AUTOCONFIRM: "true" + # SMTP Configuration + GOTRUE_SMTP_ADMIN_EMAIL: "admin@example.com" + GOTRUE_SMTP_HOST: "localhost" + GOTRUE_SMTP_PORT: "587" + GOTRUE_SMTP_SENDER_NAME: "Supabase" + GOTRUE_EXTERNAL_PHONE_ENABLED: "false" + 
GOTRUE_SMS_AUTOCONFIRM: "false" + GOTRUE_MAILER_URLPATHS_INVITE: "/auth/v1/verify" + GOTRUE_MAILER_URLPATHS_CONFIRMATION: "/auth/v1/verify" + GOTRUE_MAILER_URLPATHS_RECOVERY: "/auth/v1/verify" + GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE: "/auth/v1/verify" + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: auth + topologyKey: kubernetes.io/hostname + +# Rest service (PostgREST) +rest: + enabled: true + replicaCount: 2 + environment: + # Database connection - will be constructed from secret + # PGRST_DB_URI will be set via secretKeyRef in deployment template + # PostgREST configuration + PGRST_DB_SCHEMAS: public,storage,graphql_public + PGRST_DB_ANON_ROLE: anon + PGRST_DB_USE_LEGACY_GUCS: "false" + PGRST_APP_SETTINGS_JWT_EXP: "3600" + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: api + topologyKey: kubernetes.io/hostname + +# Realtime service +realtime: + enabled: true + replicaCount: 2 + # Override environment with boolean DB_SSL for Elixir compatibility + environment: + # Database connection (DB_SSL as boolean for Elixir) + DB_HOST: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + DB_USER: supabase_admin + DB_PORT: "5432" + DB_SSL: false + DB_AFTER_CONNECT_QUERY: "SET search_path TO _realtime" + DB_ENC_KEY: supabaserealtime + # Realtime application configuration + PORT: "4000" + APP_NAME: realtime + FLY_ALLOC_ID: fly123 + FLY_APP_NAME: realtime + SECRET_KEY_BASE: UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq + ERL_AFLAGS: -proto_dist inet_tcp + ENABLE_TAILSCALE: "false" + DNS_NODES: "''" + resources: + 
requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: realtime + topologyKey: kubernetes.io/hostname + +# Storage service +storage: + enabled: true + replicaCount: 2 + environment: + # Individual variables for init containers + DB_HOST: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + DB_USER: supabase_storage_admin + DB_PORT: "5432" + DB_SSL: disable + # Connection string for main container - will be constructed from secret + # DATABASE_URL will be set via secretKeyRef in deployment template + # Storage configuration + PGOPTIONS: "-c search_path=storage,public" + FILE_SIZE_LIMIT: "52428800" + STORAGE_BACKEND: s3 + TENANT_ID: stub + REGION: us-east-1 + GLOBAL_S3_BUCKET: supabase-storage + GLOBAL_S3_ENDPOINT: http://supabase-minio:9000 + GLOBAL_S3_PROTOCOL: http + GLOBAL_S3_FORCE_PATH_STYLE: "true" + AWS_DEFAULT_REGION: us-east-1 + AWS_ACCESS_KEY_ID: supabase + AWS_SECRET_ACCESS_KEY: supabase123 + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: storage + topologyKey: kubernetes.io/hostname + +# Meta service +meta: + enabled: true + replicaCount: 2 + environment: + # Database connection - matches Docker Compose pattern + DB_HOST: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + DB_PORT: "5432" + DB_USER: supabase_admin + DB_SSL: disable + PG_META_DB_HOST: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + PG_META_DB_PORT: "5432" + PG_META_DB_NAME: postgres + PG_META_DB_USER: supabase_admin + PG_META_DB_SSL_MODE: disable + # PG_META_DB_PASSWORD will be set via secretKeyRef in deployment template + # Meta 
configuration + PG_META_PORT: "8080" + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: meta + topologyKey: kubernetes.io/hostname + +# Functions service (Edge Functions) +functions: + enabled: true + replicaCount: 2 + environment: + DB_DATABASE: postgres + # Database connection - will be constructed from secret + # SUPABASE_DB_URL will be set via secretKeyRef in deployment template + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/component: functions + topologyKey: kubernetes.io/hostname + +# Kong API Gateway +kong: + enabled: true + replicaCount: 2 + environment: + KONG_DATABASE: "off" + KONG_DECLARATIVE_CONFIG: /usr/local/kong/kong.yml + KONG_DNS_ORDER: LAST,A,CNAME + KONG_PLUGINS: request-transformer,cors,key-auth,acl,basic-auth + KONG_NGINX_PROXY_PROXY_BUFFER_SIZE: 160k + KONG_NGINX_PROXY_PROXY_BUFFERS: 64 160k + KONG_LOG_LEVEL: warn + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + ingress: + enabled: true + className: "nginx" + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + nginx.ingress.kubernetes.io/rewrite-target: / + tls: + - secretName: supabase-ingress-tls + hosts: + - supabase-dev.preview.ingenimax.ai + hosts: + - host: supabase-dev.preview.ingenimax.ai + paths: + - path: / + pathType: Prefix +# Studio (Dashboard) +studio: + enabled: true + replicaCount: 1 # Studio doesn't need HA typically + environment: + STUDIO_DEFAULT_ORGANIZATION: Default Organization + STUDIO_DEFAULT_PROJECT: Default Project + STUDIO_PORT: "3000" + SUPABASE_PUBLIC_URL: 
https://supabase-dev.preview.ingenimax.ai + NEXT_PUBLIC_ENABLE_LOGS: "true" + NEXT_ANALYTICS_BACKEND_PROVIDER: postgres + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 300m + memory: 256Mi + +# Analytics service +analytics: + enabled: true + replicaCount: 1 + environment: + # Individual variables for init containers + DB_HOST: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + DB_USER: supabase_admin + DB_PORT: "5432" + # Analytics-specific variables - matches Docker Compose pattern + LOGFLARE_NODE_HOST: 127.0.0.1 + DB_HOSTNAME: supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local + DB_USERNAME: supabase_admin + DB_DATABASE: _supabase + DB_SCHEMA: _analytics + # Analytics configuration + LOGFLARE_SINGLE_TENANT: "true" + LOGFLARE_SUPABASE_MODE: "true" + LOGFLARE_MIN_CLUSTER_SIZE: "1" + LOGFLARE_PUBLIC_ACCESS_TOKEN: "your-logflare-public-token" + LOGFLARE_PRIVATE_ACCESS_TOKEN: "your-logflare-private-token" + LOGFLARE_FEATURE_FLAG_OVERRIDE: multibackend=true + # Backend URL for analytics - will be constructed from secret + # POSTGRES_BACKEND_URL will be set via secretKeyRef in deployment template + POSTGRES_BACKEND_SCHEMA: _analytics + resources: + requests: + cpu: 200m + memory: 512Mi + limits: + cpu: 500m + memory: 1Gi + +# ============================================================================= +# MONITORING AND OBSERVABILITY +# ============================================================================= + +# Enable monitoring +monitoring: + enabled: true + serviceMonitor: + enabled: true + namespace: monitoring + prometheusRule: + enabled: true + +# Logging configuration +logging: + level: info + format: json + +# ============================================================================= +# SECURITY CONFIGURATION +# ============================================================================= + +# Network policies +networkPolicy: + enabled: true + ingress: + enabled: true + egress: + enabled: true + +# Pod security 
context +podSecurityContext: + runAsNonRoot: true + runAsUser: 1000 + runAsGroup: 1000 + fsGroup: 1000 + +# Security context +securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + readOnlyRootFilesystem: true + +# ============================================================================= +# INGRESS CONFIGURATION +# ============================================================================= + +ingress: + enabled: true + className: nginx + annotations: + cert-manager.io/cluster-issuer: letsencrypt-prod + nginx.ingress.kubernetes.io/ssl-redirect: "true" + nginx.ingress.kubernetes.io/force-ssl-redirect: "true" + hosts: + - host: supabase-dev.preview.ingenimax.ai + paths: + - path: / + pathType: Prefix + tls: + - secretName: supabase-dev-tls + hosts: + - supabase-dev.preview.ingenimax.ai + +# ============================================================================= +# STORAGE CONFIGURATION +# ============================================================================= + +# Persistent storage for file uploads +persistence: + enabled: true + storageClass: gp3 + size: 100Gi + accessMode: ReadWriteOnce + +# Object storage (S3-compatible) +objectStorage: + enabled: true + provider: s3 + bucket: supabase-dev-storage + region: us-west-2 + # Configure via secrets + existingSecret: supabase-storage-credentials + +# ============================================================================= +# ENVIRONMENT CONFIGURATION +# ============================================================================= + +# JWT configuration +jwt: + secret: "your-jwt-secret-here" # Change in production + expiry: 3600 + + +# ============================================================================= +# BACKUP AND RECOVERY +# ============================================================================= + +# Database backups are handled by CloudNativePG +# Application-level backups can be configured here +backup: + enabled: false # CloudNativePG handles 
database backups
+
+# =============================================================================
+# AUTOSCALING
+# =============================================================================
+
+# Horizontal Pod Autoscaler
+autoscaling:
+  enabled: true
+  minReplicas: 2
+  maxReplicas: 10
+  targetCPUUtilizationPercentage: 70
+  targetMemoryUtilizationPercentage: 80
+
+# =============================================================================
+# ADDITIONAL CONFIGURATION
+# =============================================================================
+
+# Service account
+serviceAccount:
+  create: true
+  annotations: {}
+  name: ""
+
+# Pod disruption budget
+podDisruptionBudget:
+  enabled: true
+  minAvailable: 1
+
+# Node selector
+nodeSelector: {}
+
+# Tolerations
+tolerations: []
+
+# Additional labels
+additionalLabels:
+  environment: development
+  team: platform
+
+# Supabase Database Migrations Job
+migrations:
+  enabled: true
+  image:
+    repository: supabase/postgres
+    tag: "15.14.1.003"
+    pullPolicy: IfNotPresent
+  # Database connection configuration
+  database:
+    host: "supabase-postgres-cluster-rw.supabase-dev.svc.cluster.local"
+    port: "5432"
+    name: "postgres"
+    user: "postgres"
+    # Use CloudNativePG secret
+    secretRef: "supabase-postgres-cluster-superuser"
+    secretKey: "password"
+  resources:
+    requests:
+      memory: "256Mi"
+      cpu: "100m"
+    limits:
+      memory: "512Mi"
+      cpu: "500m"
+  nodeSelector: {}
+  tolerations: []
+  affinity: {}
+
diff --git a/scripts/deploy-supabase.sh b/scripts/deploy-supabase.sh
new file mode 100755
index 00000000..42d4bbee
--- /dev/null
+++ b/scripts/deploy-supabase.sh
@@ -0,0 +1,88 @@
+#!/bin/bash
+
+# Supabase on Kubernetes Deployment Script
+# This script deploys Supabase with CloudNativePG in the correct order
+
+set -e
+
+# Configuration (defaults match the guide and values files; override via env)
+NAMESPACE=${NAMESPACE:-supabase-dev}
+CLUSTER_NAME=${CLUSTER_NAME:-supabase-postgres-cluster}
+SUPABASE_RELEASE=${SUPABASE_RELEASE:-supabase}
+CNPG_OPERATOR_NAMESPACE=${CNPG_OPERATOR_NAMESPACE:-cnpg-system}
+
+echo "🚀 Deploying Supabase with CloudNativePG"
+echo "Namespace: $NAMESPACE"
+echo "Cluster: $CLUSTER_NAME"
+echo "Release: $SUPABASE_RELEASE"
+echo ""
+
+# Step 1: Add CloudNativePG Helm repository
+echo "📦 Adding CloudNativePG Helm repository..."
+helm repo add cnpg https://cloudnative-pg.github.io/charts
+helm repo update
+
+# Step 2: Install CloudNativePG operator
+echo "🔧 Installing CloudNativePG operator..."
+if ! helm list -n "$CNPG_OPERATOR_NAMESPACE" | grep -q cnpg-operator; then
+    helm upgrade --install cnpg-operator \
+        cnpg/cloudnative-pg \
+        -n "$CNPG_OPERATOR_NAMESPACE" \
+        --create-namespace \
+        -f values/cloudnativepg/cnpg-operator/operator-values.yaml
+    echo "✅ CloudNativePG operator installed"
+else
+    echo "✅ CloudNativePG operator already installed"
+fi
+
+# Step 3: Wait for operator to be ready
+echo "⏳ Waiting for CloudNativePG operator to be ready..."
+kubectl wait --for=condition=Available --timeout=300s deployment -l app.kubernetes.io/name=cloudnative-pg -n "$CNPG_OPERATOR_NAMESPACE" # select by label; the Deployment name depends on the release name
+
+# Step 4: Deploy PostgreSQL cluster
+echo "🐘 Deploying PostgreSQL cluster..."
+if ! helm list -n "$NAMESPACE" | grep -q "$CLUSTER_NAME"; then
+    helm upgrade --install "$CLUSTER_NAME" cnpg/cluster \
+        -n "$NAMESPACE" \
+        --create-namespace \
+        -f values/cloudnativepg/cnpg-cluster/cluster-values.yaml
+    echo "✅ PostgreSQL cluster deployment initiated"
+else
+    echo "✅ PostgreSQL cluster already deployed"
+fi
+
+# Step 5: Wait for PostgreSQL cluster to be ready
+echo "⏳ Waiting for PostgreSQL cluster to be ready..."
+kubectl wait --for=condition=Ready --timeout=600s "cluster/$CLUSTER_NAME" -n "$NAMESPACE"
+
+# Step 6: Deploy Supabase services
+echo "🔥 Deploying Supabase services..."
+if ! helm list -n "$NAMESPACE" | grep -q "$SUPABASE_RELEASE"; then
+    helm upgrade --install "$SUPABASE_RELEASE" charts/supabase \
+        -n "$NAMESPACE" \
+        -f values/supabase/values-cloudnativepg.yaml \
+        --create-namespace
+    echo "✅ Supabase services deployment initiated"
+else
+    echo "✅ Supabase services already deployed"
+fi
+
+# Step 7: Wait for migration job to complete
+echo "⏳ Waiting for migration job to complete..."
+kubectl wait --for=condition=Complete --timeout=300s job/supabase-migrations -n "$NAMESPACE" || true # tolerate a job that is still running
+
+# Step 8: Show deployment status
+echo ""
+echo "🎉 Deployment completed!"
+echo ""
+echo "📊 Checking pod status..."
+kubectl get pods -n "$NAMESPACE"
+
+echo ""
+echo "🔍 Useful commands:"
+echo "  Check cluster status:  kubectl get cluster -n $NAMESPACE"
+echo "  Check migration logs:  kubectl logs job/supabase-migrations -n $NAMESPACE"
+echo "  Port-forward Studio:   kubectl port-forward svc/supabase-supabase-studio 3000:3000 -n $NAMESPACE"
+echo "  Port-forward Database: kubectl port-forward svc/$CLUSTER_NAME-rw 5432:5432 -n $NAMESPACE"
+echo ""
+echo "✨ Happy building with Supabase!"