
Conversation

shahvrushali22
Contributor

@shahvrushali22 shahvrushali22 commented Jun 26, 2025

Summary

This PR adds in-cluster service proxy support to Headlamp, enables Helm operations with authenticated access, and refactors token validation for consistency. It also ensures namespace scoping in Helm charts and enables secure HTTPS-based readiness and liveness probes.

Changes

  • Added serviceproxy backend module to enable authenticated in-cluster service access.
  • Registered new route /clusters/{cluster}/serviceproxy/{namespace}/{name}.
  • Refactored checkHeadlampBackendToken into a method for consistency.
  • Enabled Helm support via the --enable-helm flag and validated tokens using SelfSubjectReview (see the sketch after this list).
  • Forwarded Bearer tokens for secure Helm operations.
  • Updated Helm chart templates:
    • Added namespace scoping to all resources.
    • Enabled HTTPS for liveness/readiness probes and services.
    • Made the Helm feature toggleable via values.yaml.
  • Added config test for enableHelm flag and updated expected chart templates.
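
For reference, a minimal sketch of a SelfSubjectReview token check with client-go (the v1 API exists since Kubernetes 1.28; the PR's actual verifyUser in backend/pkg/helm/release.go may differ):

package helmauth // hypothetical package name, for illustration only

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyToken asks the API server who the client's bearer token belongs to.
// An error (typically a 401) means the token is invalid or expired.
func verifyToken(ctx context.Context, cs kubernetes.Interface) (string, error) {
	review, err := cs.AuthenticationV1().SelfSubjectReviews().Create(
		ctx, &authv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if err != nil {
		return "", fmt.Errorf("token validation failed: %w", err)
	}

	return review.Status.UserInfo.Username, nil
}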

Related and required app-catalog PR:

Steps to Test

  1. Install Headlamp with Helm support enabled
  2. Test in-cluster service proxy route
  3. Test Helm integration in the UI
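
For step 1, a minimal install sketch; note the enableHelm key below is an assumption, check charts/headlamp/values.yaml for the exact name:

helm install headlamp ./charts/headlamp \
  --namespace kube-system --create-namespace \
  --set enableHelm=true  # assumed value key; verify against values.yaml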

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 26, 2025
@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jun 26, 2025
@illume illume requested a review from Copilot July 2, 2025 10:20
Contributor

@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds authenticated in-cluster service proxy support and enables Helm operations in Headlamp, alongside refactoring token validation and enhancing Helm charts for namespace scoping and HTTPS readiness/liveness probes.

  • Introduce a new serviceproxy backend module and route for in-cluster service access
  • Enable Helm via --enable-helm, forward bearer tokens, and perform a SelfSubjectReview check
  • Update Helm chart templates for namespace scoping, HTTPS probes, and a toggleable Helm feature

Reviewed Changes

Copilot reviewed 29 out of 29 changed files in this pull request and generated 3 comments.

Summary per file:

frontend/src/components/account/Auth.tsx: Reload page on successful authentication
frontend/package.json: Add npx update-browserslist-db@latest to prebuild
frontend/make-env.js: Disable source maps by setting GENERATE_SOURCEMAP: false
charts/headlamp/values.yaml: Add enableHelm flag and change default service port to 443
charts/headlamp/templates/serviceaccount.yaml: Scope ServiceAccount to release namespace
charts/headlamp/templates/service.yaml: Scope Service and switch targetPort to HTTPS
charts/headlamp/templates/deployment.yaml: Scope Deployment, add -enable-helm, and enable HTTPS probes
charts/headlamp/templates/secret.yaml: Scope Secret to release namespace
charts/headlamp/templates/pvc.yaml: Scope PVC to release namespace
charts/headlamp/templates/ingress.yaml: Scope Ingress to release namespace
backend/pkg/serviceproxy/service.go: Implement service lookup and URL prefix logic
backend/pkg/serviceproxy/http.go: Provide HTTPGet helper for proxied requests
backend/pkg/serviceproxy/handler.go: Handle proxy requests and enforce no-cache headers
backend/pkg/serviceproxy/connection.go: Wrap HTTP calls in a ServiceConnection interface
backend/pkg/helm/release.go: Add SelfSubjectReview before Helm install
backend/pkg/config/config.go & config_test.go: Add enable-helm flag parsing and tests
backend/cmd/headlamp.go: Register serviceproxy route and refactor token check
Comments suppressed due to low confidence (2)

backend/pkg/serviceproxy/service.go:15

  • [nitpick] The constant name HTTPScheme is misleading since it holds the "http" scheme. Rename it to something like HTTPSchemeName or HTTPName and use HTTPSScheme for "https" to avoid confusion.
	HTTPScheme  = "http"
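
For illustration, the suggested rename might look like this (Copilot's proposed names, not the merged code):

const (
	HTTPName    = "http"  // plain HTTP scheme name
	HTTPSScheme = "https" // TLS counterpart
)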

backend/pkg/serviceproxy/service.go:1

  • There are no unit tests for the serviceproxy package. Consider adding tests for getService, getPort, and getServiceURLPrefix to ensure correct behavior and catch regressions.
package serviceproxy

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Aug 6, 2025

linux-foundation-easycla bot commented Aug 18, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Aug 18, 2025
@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Aug 20, 2025
@illume
Contributor

illume commented Aug 28, 2025

Thanks for those changes addressing the Copilot comments.

It looks like there are some conflicts with the main branch at the moment. Would you be able to have a look?

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 16, 2025
@muraliinformal
Contributor

@joaquimrocha @illume Could you please take a look?

@joaquimrocha
Contributor

I will let @illume continue the review.

@muraliinformal muraliinformal force-pushed the add-serviceproxy branch 2 times, most recently from 40d09c2 to 16e1d15 on September 22, 2025 19:50
@muraliinformal
Contributor

@illume Could you please review the changes when you get a chance? Thanks

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 25, 2025
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Sep 26, 2025
@illume illume requested a review from Copilot October 9, 2025 20:57
Contributor

@Copilot Copilot AI left a comment


Pull Request Overview

Copilot reviewed 14 out of 14 changed files in this pull request and generated 3 comments.



@illume
Contributor

illume commented Oct 9, 2025

  • Can you please check? Some things appear to be back which you said were removed before. I reopened a couple of conversations.
  • It would be best if this was broken up into atomic commits, so different features are in separate commits. I could also help with that if you like.
  • Need to add @shahvrushali22 back as a co-author. Co-authored-by: Name [email protected]

@muraliinformal
Contributor

  • Can you please check? Some things appear to be back which you said were removed before. I reopened a couple of conversations.
  • It would be best if this was broken up into atomic commits, so different features are in separate commits. I could also help with that if you like.
  • Need to add @shahvrushali22 back as a co-author. Co-authored-by: Name [email protected]

You're probably looking at an old commit; this file is not part of the PR now. Could you please check the latest?

I already squashed the commits; I'll see what can be done to break the enableHelm and serviceproxy changes into atomic commits.

@illume
Contributor

illume commented Oct 10, 2025

Can you please check? Some things appear to be back which you said were removed before. I reopened a couple of conversations.
It would be best if this was broken up into atomic commits, so different features are in separate commits. I could also help with that if you like.

Need to add @shahvrushali22 back as a co-author. Co-authored-by: Name [email protected]

You're probably looking at an old commit; this file is not part of the PR now. Could you please check the latest?

It's gone this morning. I guess a GitHub webpage hiccup.

I already squashed the commits; I'll see what can be done to break the enableHelm and serviceproxy changes into atomic commits.

Thanks.

@yolossn
Contributor

yolossn commented Oct 10, 2025

After taking a look at the serviceproxy package in detail, I am convinced that the proxy provided by the Kubernetes API is capable of handling the use case.

@muraliinformal can you take a look at the proxy endpoint in the spec and how the Prometheus plugin leverages it to fetch data from the Prometheus service running inside the cluster?

@illume
Contributor

illume commented Oct 10, 2025

@yolossn The main thing serviceproxy can do that k8s proxy can't do is:

Using ExternalName services, a cluster admin can define which external endpoints Headlamp will proxy to on a per-user basis. In particular, it allows for out-of-cluster Helm repos.

Other benefits:

It allows for proxying arbitrary in-cluster services to plugins, again configurable on a per-user basis. For the catalog plugin, that means Harbor, ChartMuseum, static Helm repos served via an HTTP server, etc. One can also imagine other services in a cluster that one may want to expose via a plugin, like OpenSearch to view logs.
Basically, the use case is to allow cluster admins to use Kubernetes RBAC to control which endpoints are available to users in a multi-user cluster, without either (a) a complex Headlamp configuration or (b) requiring the Headlamp admin to redeploy the app each time the configuration changes.

@yolossn Please correct me if I'm wrong, but k8s proxy does not provide a mechanism for exposing arbitrary in-cluster HTTP endpoints (like Harbor, ChartMuseum, OpenSearch) to plugins in a controlled way?

What the k8s API Proxy Does

  • It proxies requests through the API server to Kubernetes resources.
  • RBAC applies to API objects (Pods, Services, etc.), so you can restrict who can access those objects.
  • It does not provide a mechanism for exposing arbitrary in-cluster HTTP endpoints (like Harbor, ChartMuseum, OpenSearch) to plugins in a controlled way.

Why serviceproxy Is Different

  • It allows direct HTTP access to arbitrary services inside the cluster, not just Kubernetes API objects.
  • It integrates with Headlamp’s routing (/clusters/{clusterName}/serviceproxy/...) so plugins can call services without custom configuration.
  • RBAC still applies because the user’s token is used to fetch the Service object, but the proxy then handles the actual HTTP request.
  • Admins don’t need to redeploy Headlamp or maintain complex configs—RBAC rules dynamically control access.

Contributor

@illume illume left a comment


I noticed some files/functions that don't have full test coverage:

github.com/kubernetes-sigs/headlamp/backend/cmd/headlamp.go:1152:			handleClusterServiceProxy		50.0%
github.com/kubernetes-sigs/headlamp/backend/pkg/helm/release.go:585:			verifyUser				50.0%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/connection.go:19:		NewConnection				100.0%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/connection.go:27:		Get					72.7%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/handler.go:17:		RequestHandler				0.0%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/http.go:14:		HTTPGet					86.7%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/service.go:29:		GetService				85.7%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/service.go:64:		GetPort					100.0%
github.com/kubernetes-sigs/headlamp/backend/pkg/serviceproxy/service.go:83:		getServiceURLPrefix			100.0%
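
For reference, a listing like the one above can be produced with Go's standard coverage tooling:

go test -coverprofile=coverage.out ./...
go tool cover -func=coverage.out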

@yolossn
Contributor

yolossn commented Oct 10, 2025

@yolossn The main thing serviceproxy can do that k8s proxy can't do is:
Using ExternalName services, a cluster admin can define which external endpoints Headlamp will proxy to on a per-user basis. In particular, it allows for out-of-cluster Helm repos.

Aah, the ExternalName service. Yes, that is not supported by the k8s API proxy out of the box.

@yolossn Please correct me if I'm wrong, but k8s proxy does not provide a mechanism for exposing arbitrary in-cluster HTTP endpoints (like Harbor, ChartMuseum, OpenSearch) to plugins in a controlled way?

RBAC can be used to provide access to the user in a controlled way. For a user to have proxy access to a service/pod, the following rule has to be granted:

rules:
  - apiGroups: [""]
    resources: ["pods/proxy", "services/proxy"]
    verbs: ["get", "list", "watch"]
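
For context, a rule like that lets a user reach a service through the API server's built-in proxy path, /api/v1/namespaces/{namespace}/services/{name}:{port}/proxy/{path}. A quick check against the nginx test service used later in this thread might look like:

# expose the API server locally
kubectl proxy --port=8001 &

# request the nginx service through the built-in services/proxy endpoint
curl http://127.0.0.1:8001/api/v1/namespaces/test-services/services/nginx:80/proxy/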

Refer to this discussion.

The ExternalName use case isn't handled by Kubernetes' built-in proxy endpoints, so it does make sense to include this new package. Sorry, the ExternalName case didn't strike me earlier.

@illume
Contributor

illume commented Oct 10, 2025

@muraliinformal I want to have some steps to test this locally, preferably with minikube. If you can share any config for this, it would help.

I've started to sketch out some config:

(WIP, Work in Progress)

# Headlamp Service headlamp-serviceproxy-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: headlamp
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 4466
  selector:
    k8s-app: headlamp
---
# Headlamp Deployment with serviceproxy enabled
apiVersion: apps/v1
kind: Deployment
metadata:
  name: headlamp
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: headlamp
  template:
    metadata:
      labels:
        k8s-app: headlamp
    spec:
      containers:
        - name: headlamp
          image: ghcr.io/headlamp-k8s/headlamp:latest
          args:
            - "-in-cluster"
            - "-plugins-dir=/headlamp/plugins"
            - "-enable-helm" # Enable Helm operations
          ports:
            - containerPort: 4466
              name: http
            - containerPort: 9090
              name: metrics
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 4466
            initialDelaySeconds: 30
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 4466
            initialDelaySeconds: 30
            timeoutSeconds: 30
      nodeSelector:
        'kubernetes.io/os': linux
---
# ServiceAccount token for Headlamp admin
apiVersion: v1
kind: Secret
metadata:
  name: headlamp-admin
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "headlamp-admin"
type: kubernetes.io/service-account-token
---
# RBAC Role for serviceproxy access (nginx only)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: serviceproxy-access
  namespace: test-services
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
---
# RoleBinding for a test user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: serviceproxy-access-binding
  namespace: test-services
subjects:
  - kind: User
    name: [email protected]
roleRef:
  kind: Role
  name: serviceproxy-access
  apiGroup: rbac.authorization.k8s.io
---
# Namespace for test services
apiVersion: v1
kind: Namespace
metadata:
  name: test-services
---
# Nginx Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-services
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
              name: http
---
# Nginx Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test-services
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
  type: ClusterIP
---
# ExternalName Service pointing to httpbin.org
apiVersion: v1
kind: Service
metadata:
  name: external-httpbin
  namespace: test-services
spec:
  type: ExternalName
  externalName: httpbin.org
  ports:
    - name: https
      port: 443

# apply manifest
kubectl apply -f headlamp-serviceproxy-test.yaml

# port forward headlamp
kubectl port-forward svc/headlamp -n kube-system 4466:80

# With token, should be able to access the service proxy
curl -H "Authorization: Bearer <token>" \
  "http://localhost:4466/clusters/minikube/serviceproxy/test-services/nginx?request=/"

Breakdown of the service proxy URL

"http://localhost:4466/clusters/minikube/serviceproxy/test-services/nginx?request=/"

/serviceproxy

This is the special route implemented in Headlamp for proxying requests to in-cluster services.
It activates the logic in serviceproxy.RequestHandler.

/test-services/nginx

  • test-services → Namespace where the target service lives.
  • nginx → Name of the Kubernetes Service you want to proxy to.

?request=/

Query parameter specifying the path on the target service.
Here, / means the root path of the nginx service (you’ll get the default HTML page).

What Happens Internally

  • Headlamp extracts:
    • clusterName = minikube
    • namespace = test-services
    • serviceName = nginx
    • requestURI = /
  • It uses the user’s Kubernetes token to:
    • Check RBAC (get services in test-services).
    • Fetch the Service object.
  • Then it builds the URL http://nginx.test-services:<port> (using the Service's port) and makes the HTTP GET request.
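
A rough sketch of that prefix construction (a hypothetical helper; the PR's actual getServiceURLPrefix in service.go may differ):

package serviceproxy

import "fmt"

// serviceURLPrefix builds the in-cluster base URL for a Service.
// Illustrative only; not the merged implementation.
func serviceURLPrefix(scheme, name, namespace string, port int32) string {
	return fmt.Sprintf("%s://%s.%s:%d", scheme, name, namespace, port)
}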

External

These instructions are WIP too.

Operators configure external services like this:

# ExternalName Service pointing to httpbin.org
apiVersion: v1
kind: Service
metadata:
  name: external-httpbin
  namespace: test-services
spec:
  type: ExternalName
  externalName: httpbin.org
  ports:
    - name: https
      port: 443

test command

curl -H "Authorization: Bearer <token>" \ "http://localhost:4466/clusters/minikube/serviceproxy/test-services/external-httpbin?request=/get"

Headlamp sees the "ExternalName" type, so it checks the "externalName" field of the spec and uses https.
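
A minimal sketch of that branching, using client-go's core/v1 types (illustrative only, not the merged code):

package serviceproxy

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostForService picks the target host: ExternalName services point outside
// the cluster, while everything else resolves via in-cluster DNS.
// Illustrative only; not the merged implementation.
func hostForService(svc *corev1.Service) string {
	if svc.Spec.Type == corev1.ServiceTypeExternalName {
		return svc.Spec.ExternalName // e.g. httpbin.org, reached over https
	}

	return fmt.Sprintf("%s.%s", svc.Name, svc.Namespace)
}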

Contributor

@yolossn yolossn left a comment


The whole implementation looks good to me. Apart from the test comments from @illume, just a small change to include the HTTP method in the serviceproxy handler.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 10, 2025
@yolossn
Contributor

yolossn commented Oct 11, 2025

@muraliinformal I’ve refactored the handler.go file to make it more testable. Since a full handler test would require an end-to-end setup, I split the main function into smaller functions and wrote unit tests for each. These unit tests cover the overall flow of the handler.

handler.go

package serviceproxy

import (
	"fmt"
	"net/http"
	"strings"
	"time"

	"github.com/gorilla/mux"
	"github.com/kubernetes-sigs/headlamp/backend/pkg/auth"
	"github.com/kubernetes-sigs/headlamp/backend/pkg/kubeconfig"
	"github.com/kubernetes-sigs/headlamp/backend/pkg/logger"
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/kubernetes"
)

// RequestHandler is an HTTP handler that proxies requests to a Kubernetes service.
func RequestHandler(kubeConfigStore kubeconfig.ContextStore, w http.ResponseWriter, r *http.Request) { //nolint:funlen
	clusterName, namespace, name, requestURI := parseInfoFromRequest(r)

	defer disableResponseCaching(w)

	// Get the context
	ctx, err := kubeConfigStore.GetContext(clusterName)
	if err != nil {
		logger.Log(logger.LevelError, nil, err, "failed to get context")
		w.WriteHeader(http.StatusNotFound)

		return
	}

	bearerToken, err := getAuthToken(r, clusterName)
	if err != nil {
		logger.Log(logger.LevelError, nil, err, "failed to get auth token")
		w.WriteHeader(http.StatusUnauthorized)
		return
	}

	// Get a ClientSet with the auth token
	cs, err := ctx.ClientSetWithToken(bearerToken)
	if err != nil {
		logger.Log(logger.LevelError, nil, err, "failed to get ClientSet")
		w.WriteHeader(http.StatusNotFound)

		return
	}

	// Get the service
	ps, err, status := getServiceFromCluster(cs, namespace, name)
	if err != nil {
		w.WriteHeader(status)
		return
	}

	// Get a service connection object and make the request
	conn := NewConnection(ps)

	handleServiceProxy(conn, requestURI, w)
}

func parseInfoFromRequest(r *http.Request) (string, string, string, string) {
	clusterName := mux.Vars(r)["clusterName"]
	namespace := mux.Vars(r)["namespace"]
	name := mux.Vars(r)["name"]
	requestURI := r.URL.Query().Get("request")
	return clusterName, namespace, name, requestURI
}

func getAuthToken(r *http.Request, clusterName string) (string, error) {
	// Try to get token from cookie first
	tokenFromCookie, err := auth.GetTokenFromCookie(r, clusterName)
	if err == nil && tokenFromCookie != "" {
		return tokenFromCookie, nil
	}

	// Fall back to Authorization header
	authToken := r.Header.Get("Authorization")
	if len(authToken) == 0 {
		return "", fmt.Errorf("unauthorized")
	}

	bearerToken := strings.TrimPrefix(authToken, "Bearer ")
	if bearerToken == "" {
		return "", fmt.Errorf("unauthorized")
	}

	return bearerToken, nil
}

func getServiceFromCluster(cs kubernetes.Interface, namespace string, name string) (*proxyService, error, int) {
	ps, err := GetService(cs, namespace, name)
	if err != nil {
		if errors.IsUnauthorized(err) {
			return nil, err, http.StatusUnauthorized
		}
		return nil, err, http.StatusNotFound
	}

	return ps, nil, http.StatusOK
}

func disableResponseCaching(w http.ResponseWriter) {
	w.Header().Set("Cache-Control", "no-cache, private, max-age=0")
	w.Header().Set("Expires", time.Unix(0, 0).Format(http.TimeFormat))
	w.Header().Set("Pragma", "no-cache")
	w.Header().Set("X-Accel-Expires", "0")
}

func handleServiceProxy(conn ServiceConnection, requestURI string, w http.ResponseWriter) {
	resp, err := conn.Get(requestURI)
	if err != nil {
		logger.Log(logger.LevelError, nil, err, "service get request failed")
		http.Error(w, err.Error(), http.StatusInternalServerError)

		return
	}

	_, err = w.Write(resp)
	if err != nil {
		logger.Log(logger.LevelError, nil, err, "writing response")
		http.Error(w, err.Error(), http.StatusInternalServerError)

		return
	}
}

handler_test.go

package serviceproxy

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gorilla/mux"
	"github.com/stretchr/testify/assert"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes/fake"
	k8stesting "k8s.io/client-go/testing"
)

func TestHandleServiceProxy(t *testing.T) {
	tests := []struct {
		name           string
		proxyService   *proxyService
		requestURI     string
		mockResponse   string
		mockStatusCode int
		expectedCode   int
		expectedBody   string
		useMockServer  bool
	}{
		// Success cases
		{
			name:           "successful request",
			proxyService:   &proxyService{URIPrefix: "http://example.com"},
			requestURI:     "/test",
			mockResponse:   "Hello, World!",
			mockStatusCode: http.StatusOK,
			expectedCode:   http.StatusOK,
			expectedBody:   "Hello, World!",
			useMockServer:  true,
		},
		{
			name:           "successful request with different response",
			proxyService:   &proxyService{URIPrefix: "http://api.example.com"},
			requestURI:     "/api/v1/data",
			mockResponse:   `{"status": "success", "data": "test"}`,
			mockStatusCode: http.StatusOK,
			expectedCode:   http.StatusOK,
			expectedBody:   `{"status": "success", "data": "test"}`,
			useMockServer:  true,
		},
		{
			name:           "request with query parameters",
			proxyService:   &proxyService{URIPrefix: "https://service.example.com"},
			requestURI:     "/api?param=value&test=123",
			mockResponse:   "Query processed",
			mockStatusCode: http.StatusOK,
			expectedCode:   http.StatusOK,
			expectedBody:   "Query processed",
			useMockServer:  true,
		},
		{
			name:           "empty response",
			proxyService:   &proxyService{URIPrefix: "http://empty.example.com"},
			requestURI:     "/empty",
			mockResponse:   "",
			mockStatusCode: http.StatusOK,
			expectedCode:   http.StatusOK,
			expectedBody:   "",
			useMockServer:  true,
		},
		// Error cases
		{
			name:           "server returns 404",
			proxyService:   &proxyService{URIPrefix: "http://example.com"},
			requestURI:     "/notfound",
			mockResponse:   "error response",
			mockStatusCode: http.StatusNotFound,
			expectedCode:   http.StatusInternalServerError,
			expectedBody:   "failed HTTP GET, status code 404\n",
			useMockServer:  true,
		},
		{
			name:           "server returns 500",
			proxyService:   &proxyService{URIPrefix: "http://example.com"},
			requestURI:     "/error",
			mockResponse:   "error response",
			mockStatusCode: http.StatusInternalServerError,
			expectedCode:   http.StatusInternalServerError,
			expectedBody:   "failed HTTP GET, status code 500\n",
			useMockServer:  true,
		},
		{
			name:           "invalid URL in proxy service",
			proxyService:   &proxyService{URIPrefix: "://invalid-url"},
			requestURI:     "/test",
			mockResponse:   "",
			mockStatusCode: http.StatusOK,
			expectedCode:   http.StatusInternalServerError,
			expectedBody:   "invalid host uri: parse \"://invalid-url\": missing protocol scheme\n",
			useMockServer:  false,
		},
		{
			name:           "invalid request URI",
			proxyService:   &proxyService{URIPrefix: "http://example.com"},
			requestURI:     "://invalid-request-uri",
			mockResponse:   "",
			mockStatusCode: http.StatusOK,
			expectedCode:   http.StatusInternalServerError,
			expectedBody:   "invalid request uri: parse \"://invalid-request-uri\": missing protocol scheme\n",
			useMockServer:  false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Create a mock HTTP server for cases that need it
			if tt.useMockServer {
				server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
					w.WriteHeader(tt.mockStatusCode)
					w.Write([]byte(tt.mockResponse))
				}))
				defer server.Close()

				// Update the proxy service to use the mock server
				tt.proxyService.URIPrefix = server.URL
			}

			// Create connection and test
			conn := NewConnection(tt.proxyService)
			w := httptest.NewRecorder()
			handleServiceProxy(conn, tt.requestURI, w)

			assert.Equal(t, tt.expectedCode, w.Code)
			assert.Equal(t, tt.expectedBody, w.Body.String())
		})
	}
}

func TestDisableResponseCaching(t *testing.T) {
	w := httptest.NewRecorder()
	disableResponseCaching(w)

	assert.Equal(t, "no-cache, private, max-age=0", w.Header().Get("Cache-Control"))
	assert.Equal(t, "no-cache", w.Header().Get("Pragma"))
	assert.Equal(t, "0", w.Header().Get("X-Accel-Expires"))
}

// createMockService creates a mock Kubernetes service for testing
func createMockService(namespace, name string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
		},
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{
				{
					Name: "https",
					Port: 443,
				},
			},
		},
	}
}

func TestGetAuthToken(t *testing.T) {
	tests := []struct {
		name          string
		clusterName   string
		setupRequest  func() *http.Request
		expectedToken string
		expectError   bool
		errorMsg      string
	}{
		{
			name:        "token from cookie",
			clusterName: "my-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.AddCookie(&http.Cookie{
					Name:  "headlamp-auth-my-cluster.0",
					Value: "cookie-token-xyz",
				})
				return req
			},
			expectedToken: "cookie-token-xyz",
			expectError:   false,
		},
		{
			name:        "token from Authorization header when no cookie exists",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.Header.Set("Authorization", "Bearer header-token-123")
				return req
			},
			expectedToken: "header-token-123",
			expectError:   false,
		},
		{
			name:        "cookie takes precedence over Authorization header",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.AddCookie(&http.Cookie{
					Name:  "headlamp-auth-test-cluster.0",
					Value: "cookie-token-wins",
				})
				req.Header.Set("Authorization", "Bearer header-token-loses")
				return req
			},
			expectedToken: "cookie-token-wins",
			expectError:   false,
		},
		{
			name:        "no Authorization header and no cookie returns error",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				return req
			},
			expectError: true,
			errorMsg:    "unauthorized",
		},
		{
			name:        "Authorization header with only Bearer keyword",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.Header.Set("Authorization", "Bearer")
				return req
			},
			expectedToken: "Bearer",
			expectError:   false,
		},
		{
			name:        "Authorization header with Bearer and space only - error",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.Header.Set("Authorization", "Bearer ")
				return req
			},
			expectError: true,
			errorMsg:    "unauthorized",
		},
		{
			name:        "valid token with Bearer prefix",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.Header.Set("Authorization", "Bearer valid-token-value")
				return req
			},
			expectedToken: "valid-token-value",
			expectError:   false,
		},
		{
			name:        "Authorization header without Bearer prefix",
			clusterName: "test-cluster",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/test", nil)
				req.Header.Set("Authorization", "just-a-token")
				return req
			},
			expectedToken: "just-a-token",
			expectError:   false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			req := tt.setupRequest()
			token, err := getAuthToken(req, tt.clusterName)

			if tt.expectError {
				assert.Error(t, err)
				if tt.errorMsg != "" {
					assert.Contains(t, err.Error(), tt.errorMsg)
				}
			} else {
				assert.NoError(t, err)
				assert.Equal(t, tt.expectedToken, token)
			}
		})
	}
}

func TestGetServiceFromCluster(t *testing.T) {
	tests := []struct {
		name           string
		namespace      string
		serviceName    string
		setupService   bool
		mockError      error
		expectedStatus int
		expectError    bool
	}{
		{
			name:           "service not found",
			namespace:      "default",
			serviceName:    "nonexistent-service",
			setupService:   false,
			mockError:      nil,
			expectedStatus: http.StatusNotFound,
			expectError:    true,
		},
		{
			name:           "service found successfully",
			namespace:      "default",
			serviceName:    "test-service",
			setupService:   true,
			mockError:      nil,
			expectedStatus: http.StatusOK,
			expectError:    false,
		},
		{
			name:           "service in different namespace",
			namespace:      "kube-system",
			serviceName:    "metrics-server",
			setupService:   true,
			mockError:      nil,
			expectedStatus: http.StatusOK,
			expectError:    false,
		},
		{
			name:           "unauthorized access",
			namespace:      "default",
			serviceName:    "restricted-service",
			setupService:   false,
			mockError:      errors.NewUnauthorized("user does not have permission"),
			expectedStatus: http.StatusUnauthorized,
			expectError:    true,
		},
		{
			name:         "forbidden access",
			namespace:    "default",
			serviceName:  "forbidden-service",
			setupService: false,
			mockError: errors.NewForbidden(
				schema.GroupResource{Resource: "services"},
				"forbidden-service",
				nil,
			),
			expectedStatus: http.StatusNotFound,
			expectError:    true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			var cs *fake.Clientset

			if tt.mockError != nil {
				// Create a fake clientset with a reactor to simulate errors
				cs = fake.NewSimpleClientset()
				cs.PrependReactor("get", "services", func(action k8stesting.Action) (handled bool, ret runtime.Object, err error) {
					return true, nil, tt.mockError
				})
			} else if tt.setupService {
				// Setup a mock service
				service := createMockService(tt.namespace, tt.serviceName)
				cs = fake.NewSimpleClientset(service)
			} else {
				// Empty clientset (service not found)
				cs = fake.NewSimpleClientset()
			}

			ps, err, status := getServiceFromCluster(cs, tt.namespace, tt.serviceName)

			assert.Equal(t, tt.expectedStatus, status)

			if tt.expectError {
				assert.Error(t, err)
				assert.Nil(t, ps)
			} else {
				assert.NoError(t, err)
				assert.NotNil(t, ps)
				assert.Equal(t, tt.serviceName, ps.Name)
				assert.Equal(t, tt.namespace, ps.Namespace)
			}
		})
	}
}

func TestParseInfoFromRequest(t *testing.T) {
	tests := []struct {
		name                string
		setupRequest        func() *http.Request
		expectedClusterName string
		expectedNamespace   string
		expectedName        string
		expectedRequestURI  string
	}{
		{
			name: "standard case with all parameters",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/clusters/test-cluster/namespaces/test-namespace/services/test-service/proxy?request=/api/v1/data", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "test-cluster",
					"namespace":   "test-namespace",
					"name":        "test-service",
				})
				return req
			},
			expectedClusterName: "test-cluster",
			expectedNamespace:   "test-namespace",
			expectedName:        "test-service",
			expectedRequestURI:  "/api/v1/data",
		},
		{
			name: "cluster name with hyphens and numbers",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/clusters/prod-cluster-123/namespaces/kube-system/services/metrics-server/proxy?request=/metrics", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "prod-cluster-123",
					"namespace":   "kube-system",
					"name":        "metrics-server",
				})
				return req
			},
			expectedClusterName: "prod-cluster-123",
			expectedNamespace:   "kube-system",
			expectedName:        "metrics-server",
			expectedRequestURI:  "/metrics",
		},
		{
			name: "request URI with query parameters",
			setupRequest: func() *http.Request {
				// The & in the request parameter needs to be URL encoded as %26
				req := httptest.NewRequest("GET", "/proxy?request=/api/endpoint?param1=value1%26param2=value2", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "my-cluster",
					"namespace":   "default",
					"name":        "my-service",
				})
				return req
			},
			expectedClusterName: "my-cluster",
			expectedNamespace:   "default",
			expectedName:        "my-service",
			expectedRequestURI:  "/api/endpoint?param1=value1&param2=value2",
		},
		{
			name: "empty request URI parameter",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/proxy", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "cluster1",
					"namespace":   "ns1",
					"name":        "svc1",
				})
				return req
			},
			expectedClusterName: "cluster1",
			expectedNamespace:   "ns1",
			expectedName:        "svc1",
			expectedRequestURI:  "",
		},
		{
			name: "request URI with special characters encoded",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/proxy?request=/api/v1/users%2F123%2Fprofile", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "test",
					"namespace":   "app",
					"name":        "backend",
				})
				return req
			},
			expectedClusterName: "test",
			expectedNamespace:   "app",
			expectedName:        "backend",
			expectedRequestURI:  "/api/v1/users/123/profile",
		},
		{
			name: "missing mux variables returns empty strings",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/proxy?request=/test", nil)
				// Not setting any mux vars
				return req
			},
			expectedClusterName: "",
			expectedNamespace:   "",
			expectedName:        "",
			expectedRequestURI:  "/test",
		},
		{
			name: "service name with dots (for headless services)",
			setupRequest: func() *http.Request {
				req := httptest.NewRequest("GET", "/proxy?request=/health", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "cluster",
					"namespace":   "default",
					"name":        "my-service.default.svc.cluster.local",
				})
				return req
			},
			expectedClusterName: "cluster",
			expectedNamespace:   "default",
			expectedName:        "my-service.default.svc.cluster.local",
			expectedRequestURI:  "/health",
		},
		{
			name: "complex request URI with path and multiple query params",
			setupRequest: func() *http.Request {
				// The & in the request parameter needs to be URL encoded as %26
				req := httptest.NewRequest("GET", "/proxy?request=/api/v2/search?q=test%26limit=10%26offset=0", nil)
				req = mux.SetURLVars(req, map[string]string{
					"clusterName": "production",
					"namespace":   "api-namespace",
					"name":        "search-service",
				})
				return req
			},
			expectedClusterName: "production",
			expectedNamespace:   "api-namespace",
			expectedName:        "search-service",
			expectedRequestURI:  "/api/v2/search?q=test&limit=10&offset=0",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			req := tt.setupRequest()
			clusterName, namespace, name, requestURI := parseInfoFromRequest(req)
			assert.Equal(t, tt.expectedClusterName, clusterName)
			assert.Equal(t, tt.expectedNamespace, namespace)
			assert.Equal(t, tt.expectedName, name)
			assert.Equal(t, tt.expectedRequestURI, requestURI)
		})
	}
}
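
With both files in place, the package tests can be run from the backend directory in the usual way:

cd backend
go test ./pkg/serviceproxy/... -v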

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Oct 13, 2025
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 13, 2025
Contributor

@illume illume left a comment


🎉 thanks!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: illume, shahvrushali22

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 15, 2025
@illume illume dismissed yolossn’s stale review October 15, 2025 11:03

The requested changes were done.

@illume illume merged commit 9230eba into kubernetes-sigs:main Oct 15, 2025
9 of 10 checks passed