Description
Bug report
- I confirm this is a bug with Supabase, not with my own application.
- I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
Describe the bug
When self-hosting Supabase on Kubernetes (via Helm), the `vector` service fails to send logs to the `logflare` (analytics) service, consistently receiving a `401 Unauthorized` error.

This issue appears to be specific to the Kubernetes deployment. The same log forwarding from `vector` to `logflare` works correctly in the standard Docker Compose self-hosted environment.

Troubleshooting has confirmed that the API key (`LOGFLARE_PUBLIC_ACCESS_TOKEN`) is correctly mounted in both the `vector` and `logflare` pods, and that basic network connectivity exists. The failure points to a potential configuration difference in how `vector` is set up to communicate with the multi-tenant `logflare` instance in the Kubernetes environment, as opposed to the Docker environment.
To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
- Deploy Supabase on a Kubernetes cluster using the official/community Helm chart with default settings (which enables multi-tenant `logflare`).
- Ensure all pods, including `vector` and `logflare`, are running and healthy.
- Monitor the logs for any `vector` pod using `kubectl logs -f <vector-pod-name>`.
- See the recurring `Http status: 401 Unauthorized` error.
Expected behavior
The `vector` service in the Kubernetes deployment should successfully authenticate with the `logflare` service and forward logs without errors, identical to its behavior in the Docker Compose environment.
Screenshots
This section includes logs and terminal output demonstrating the issue.
Vector Pod Logs (Kubernetes)
The `vector` container repeatedly logs the following error:
2025-08-18T03:27:34.317820Z ERROR sink{component_kind="sink" component_id=logflare_unmatched component_type=http}:request{request_id=58}: vector::sinks::util::retries: Not retriable; dropping the request. reason="Http status: 401 Unauthorized" internal_log_rate_limit=true
2025-08-18T03:27:44.416921Z ERROR sink{component_kind="sink" component_id=logflare_unmatched component_type=http}:request{request_id=60}: vector::sinks::util::retries: Not retriable; dropping the request. reason="Http status: 401 Unauthorized" internal_log_rate_limit=true
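To capture more detail about the rejected requests, `vector`'s log level can be raised via the `VECTOR_LOG` environment variable. A rough sketch, assuming `vector` runs as a DaemonSet named after the release (the workload name and type are guesses based on the pod name above, so adjust as needed):

```sh
# Raise vector's log level so the HTTP sink emits more detail about the
# requests that logflare rejects with 401 (how much is shown depends on
# the Vector version).
kubectl -n supabase215710662251257856 set env \
  daemonset/supabase215710662251257856-vector VECTOR_LOG=debug

# Tail the logs again once the pods have restarted.
kubectl -n supabase215710662251257856 logs -f <vector-pod-name>
```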
Kubernetes Pod Status
All Supabase pods are running correctly:
NAME READY STATUS RESTARTS AGE
supabase215710662251257856-analytics-58dc7889f6-8g5dc 1/1 Running 0 37m
supabase215710662251257856-auth-75dc86dbb8-d2xzn 1/1 Running 0 37m
# ...and all other pods are running
System information
- OS: Linux (Kubernetes Nodes)
- Deployment: Kubernetes (via Helm)
- Image Versions:
  - `timberio/vector:0.49.0-alpine`
  - `supabase/logflare:1.19.0`
Additional context
Key Distinction: Docker vs. Kubernetes Behavior
The most critical finding is that this issue is specific to Kubernetes. When using the `docker-compose` setup for self-hosting, `vector` successfully writes logs to `logflare`. This strongly suggests the `logflare` application itself is functional and that the bug lies within the Kubernetes deployment's configuration, likely in the `vector` sink setup provided by the Helm chart.
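A useful check is to compare the `vector` configuration actually rendered into the cluster with the `vector.yml` shipped in the Docker Compose setup, focusing on how the `logflare` sinks pass the token. A minimal sketch, assuming the chart renders the config into a ConfigMap whose name contains `vector` (the ConfigMap name here is an assumption):

```sh
# Locate the ConfigMap holding vector's configuration (name is chart-specific).
kubectl -n supabase215710662251257856 get configmaps | grep -i vector

# Inspect how the logflare sinks authenticate: api_key query parameter vs.
# x-api-key header, and whether the LOGFLARE_PUBLIC_ACCESS_TOKEN placeholder
# is actually substituted in the rendered config.
kubectl -n supabase215710662251257856 get configmap <vector-configmap-name> -o yaml \
  | grep -iE 'api_key|x-api-key|access_token|uri'
```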
API Key Verification
The `LOGFLARE_PUBLIC_ACCESS_TOKEN` has been verified to be identical and correctly mounted as an environment variable in both the `logflare` and `vector` pods, ruling out a missing or mismatched secret.
Verification in the `logflare` pod (Kubernetes):
$ kubectl exec -it supabase215710662251257856-analytics-58dc7889f6-8g5dc -n supabase215710662251257856 -- sh
# echo $LOGFLARE_PUBLIC_ACCESS_TOKEN
a62353e9a704ec12a0d26fde198f8b53ff4b55e19dfe78dd8ee0c49edd46131b
Verification in the `vector` pod (Kubernetes):
$ kubectl exec -it supabase215710662251257856-vector-j4kwn -n supabase215710662251257856 -- sh
/ # echo $LOGFLARE_PUBLIC_ACCESS_TOKEN
a62353e9a704ec12a0d26fde198f8b53ff4b55e19dfe78dd8ee0c49edd46131b
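To separate "the token itself is rejected" from "`vector`'s sink is built incorrectly", the same token can be exercised by hand from inside the `vector` pod against `logflare`'s ingest endpoint. This is only a sketch: the analytics service hostname, the source name, and the way `logflare` expects the token (`x-api-key` header vs. `api_key` query parameter) are assumptions that depend on the chart and logflare version, and the alpine image may ship `wget` rather than `curl`:

```sh
# Post a test event to logflare from the vector pod using the mounted token.
# A 2xx response would suggest the token is valid and the problem is in how
# vector's sink sends it; a 401 would point at the token or logflare's
# multi-tenant configuration.
kubectl exec -it supabase215710662251257856-vector-j4kwn \
  -n supabase215710662251257856 -- sh -c '
  curl -sS -o /dev/null -w "%{http_code}\n" \
    -X POST "http://<analytics-service>:4000/api/logs?source_name=<source-name>" \
    -H "Content-Type: application/json" \
    -H "x-api-key: $LOGFLARE_PUBLIC_ACCESS_TOKEN" \
    -d "{\"message\":\"auth test from vector pod\"}"'
```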
Network Connectivity
Basic network connectivity to the `logflare` pod is confirmed. A `curl` request to its `/health` endpoint receives a successful `200 OK` response. This confirms the `401` is an application-level rejection, not a network failure.
$ curl http://10.225.12.195:4000/health
{"status":"ok", ...}
Manual `curl` Requests
Manual `curl` requests from the host to the generic `/api/logs` endpoint fail with `401 Unauthorized` in both the Docker and Kubernetes environments. This is likely expected behavior for `logflare`'s default multi-tenant mode, reinforcing the idea that `vector` must use a more specific, correctly configured endpoint, which seems to be missing in the Kubernetes setup.
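For reference, here is the kind of manual request that can be used to compare the two authentication styles against the same pod IP that answered the `/health` check above; the source name and the exact header/parameter names are assumptions to verify against this `logflare` version:

```sh
# Token passed as a header:
curl -i -X POST "http://10.225.12.195:4000/api/logs?source_name=<source-name>" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <LOGFLARE_PUBLIC_ACCESS_TOKEN>" \
  -d '{"message":"manual auth test"}'

# Token passed as a query parameter:
curl -i -X POST "http://10.225.12.195:4000/api/logs?source_name=<source-name>&api_key=<LOGFLARE_PUBLIC_ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"message":"manual auth test"}'

# Whichever form succeeds is the one the Kubernetes vector sink needs to send.
```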