A test repository to put together instructions and guidance around how to deploy the router into a local k8s setup for learning and rapid testing

Router

Rust Graph Routing runtime for Apollo Federation

Version: 1.59.1 Type: application AppVersion: v1.59.1

Prerequisites

  • Kubernetes v1.19+

Get Repo Info

helm pull oci://ghcr.io/apollographql/helm-charts/router --version 1.59.1
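
This downloads the chart as a .tgz archive into the current directory (it should land as router-1.59.1.tgz). If you plan to follow the local-chart install below, you can extract it with:

tar -xzf router-1.59.1.tgz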

Install Chart

Important: only Helm 3 is supported

helm upgrade --install router oci://ghcr.io/apollographql/helm-charts/router --values values.yaml

See configuration below.

How to install from a local chart for use with minikube

  1. First start minikube after ensuring that the docker daemon is running:
minikube start
  2. Ensure you have a namespace created to host the router. If you need to create one, you can do so with:
kubectl create namespace <namespace-name-here>

To change your namespace use:

kubectl config set-context --current --namespace=my-namespace
  3. After pulling down the helm chart using the pull command above, extract the archive and make your updates to the helm charts! TODO: provide more info on how to understand and update the helm charts

  4. Run this command to install the helm charts and deploy the service to kubernetes (ENSURE THAT YOU'RE IN THE SAME DIRECTORY AS YOUR MAIN Chart.yaml!!):

helm upgrade --install apollo-router --namespace router-test ./ --version 1.59.1 --values ./values.yaml
  5. You'll get something similar to the following output; ensure you run those commands in the terminal:
export POD_NAME=$(kubectl get pods --namespace router-test -l "app.kubernetes.io/name=router,app.kubernetes.io/instance=apollo-router" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace router-test $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:4000 to use your application"
  kubectl --namespace router-test port-forward $POD_NAME 4000:$CONTAINER_PORT

The last line of this will enable port-forwarding so you can access the router locally. If you don't want to use local port 4000 you can change that to whatever you want. Ex. to use 8080 instead:

kubectl --namespace router-test port-forward $POD_NAME 8080:$CONTAINER_PORT

Your k8s pod cannot reach services on your local machine by default. If you want to call your local subgraphs from a router running in a pod in a k8s cluster, you must map that localhost address to something the cluster can resolve. If you're using minikube you can replace localhost/127.0.0.1 with host.minikube.internal, which automatically resolves to the host machine minikube is running on, allowing pods to reach those services. If you're not using minikube, you'll need to find a way to map localhost to a local IP and then create an Endpoints resource associated with that IP so the pod knows where to reach outside the cluster.
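
For example, the router's YAML config lets you rewrite a subgraph's routing URL via override_subgraph_url. A minimal sketch, assuming a subgraph named products served locally on port 4001 (both are placeholders for your own setup):

override_subgraph_url:
  products: http://host.minikube.internal:4001/graphql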

Set up tunnel quickly with minikube

Ensure you're on the correct namespace (since these commands aren't namespaced, they'll use whatever namespace is current).

run:

minikube tunnel

As a note: there's nothing wrong with using a tunnel here. When you deploy to GCP, AWS, or Azure, those providers have mechanisms that set up external IPs to allow traffic into a cluster via a Load Balancer. Since there's nothing like that locally, you'd need to set it up yourself. As an exercise, however, feel free to leverage MetalLB instead if you want to set up something like-for-like.

Create local docker registry

Use this command to create a registry:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

You will then need to use docker push to push images from the local docker daemon to the new registry.
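
For example, to retag a locally built image and push it to the registry (my-router-image is a placeholder for your own image name):

docker tag my-router-image:latest localhost:5000/my-router-image:latest
docker push localhost:5000/my-router-image:latest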

If you want to check what is deployed to that registry you can run:

curl http://localhost:5000/v2/_catalog
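
The registry answers with a small JSON document listing its repositories; with the placeholder image pushed above it would look something like:

{"repositories":["my-router-image"]}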

Configmaps

In order to make configuration files available to your pods you can create a configmap, which takes a file from the location you indicate and copies it into the cluster so that any pod can reference it. Pretty dope, eh?

To create one run this command:

kubectl create configmap router-config --from-file=router.yaml

NOTE: if you ever need to update the configmap and don't want to mess with Vim via kubectl edit, you can run something like this:

kubectl create configmap router-config --from-file=router.yaml --dry-run=client -o yaml | kubectl apply -f -

I also created a ConfigMap k8s file so you can update the config there and simply kubectl apply it.
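
A minimal sketch of what that manifest looks like, assuming the same router-config name used above and a bare-bones router config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: router-config
data:
  router.yaml: |
    supergraph:
      listen: 0.0.0.0:4000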

Then you'll need to update your Deployment k8s script with this detail:

spec:
  containers:
    - ...
      args: ["--config", "/config/router.yaml"]
      volumeMounts:
        - name: config-volume
          mountPath: /config
  volumes:
    - name: config-volume
      configMap:
        name: router-config

This creates a volume backed by the ConfigMap, mounts it into the container's filesystem, and points the router at the config file inside it.
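
To verify the mount worked, you can read the file back out of the running container (reusing $POD_NAME from the port-forward steps above):

kubectl --namespace router-test exec $POD_NAME -- cat /config/router.yaml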

Set up local LB within a cluster

Deploy the pod from a k8s config file:

kubectl apply -f one-off-k8s-configs/quick-deployment.yaml

Create the k8s LB service and expose the pod:

kubectl expose deployment apollo-router --type=LoadBalancer --port=4000

Check to make sure there's an external IP available to hit (on minikube the external IP will stay pending unless minikube tunnel is running):

kubectl get svc

Configuration

CORS

If you need to configure CORS with the router (e.g. if you have it deployed in k8s), refer to this: https://www.apollographql.com/docs/graphos/routing/security/cors
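
As a rough sketch, CORS lives under the cors key in router.yaml; the origin below is a placeholder for wherever your client is actually served from:

cors:
  allow_credentials: true
  origins:
    - http://localhost:3000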

See Customizing the Chart Before Installing. To see all configurable options with detailed comments, visit the chart's values.yaml, or run this configuration command:

helm show values oci://ghcr.io/apollographql/helm-charts/router

Values

Key | Type | Default | Description
affinity | object | {} |
autoscaling.enabled | bool | false |
autoscaling.maxReplicas | int | 100 |
autoscaling.minReplicas | int | 1 |
autoscaling.targetCPUUtilizationPercentage | int | 80 |
containerPorts.health | int | 8088 | For exposing the health check endpoint
containerPorts.http | int | 4000 | If you override the port in router.configuration.server.listen then make sure to match the listen port here
containerPorts.metrics | int | 9090 | For exposing the metrics port when running a serviceMonitor for example
extraContainers | list | [] | An array of extra containers to include in the router pod. Example: extraContainers: [{name: coprocessor, image: acme/coprocessor:1.0, ports: [{containerPort: 4001}]}]
extraEnvVars | list | [] |
extraEnvVarsCM | string | "" |
extraEnvVarsSecret | string | "" |
extraLabels | object | {} | A map of extra labels to apply to the resources created by this chart. Example: extraLabels: {label_one_name: "label_one_value", label_two_name: "label_two_value"}
extraVolumeMounts | list | [] |
extraVolumes | list | [] |
fullnameOverride | string | "" |
image.pullPolicy | string | "IfNotPresent" |
image.repository | string | "ghcr.io/apollographql/router" |
image.tag | string | "" |
imagePullSecrets | list | [] |
ingress.annotations | object | {} |
ingress.className | string | "" |
ingress.enabled | bool | false |
ingress.hosts[0].host | string | "chart-example.local" |
ingress.hosts[0].paths[0].path | string | "/" |
ingress.hosts[0].paths[0].pathType | string | "ImplementationSpecific" |
ingress.tls | list | [] |
initContainers | list | [] | An array of init containers to include in the router pod. Example: initContainers: [{name: init-myservice, image: busybox:1.28, command: ["sh"]}]
lifecycle | object | {} |
managedFederation.apiKey | string | nil | If using managed federation, the graph API key to identify the router to Studio
managedFederation.existingSecret | string | nil | If using managed federation, use an existing Secret which stores the graph API key instead of creating a new one. If set along with managedFederation.apiKey, a secret with the graph API key will be created using this parameter as name
managedFederation.existingSecretKeyRefKey | string | nil | If using managed federation, the name of the key within the existing Secret which stores the graph API key. If set along with managedFederation.apiKey, a secret with the graph API key will be created using this parameter as key; defaults to a key of managedFederationApiKey
managedFederation.graphRef | string | "" | If using managed federation, the variant of which graph to use
nameOverride | string | "" |
nodeSelector | object | {} |
podAnnotations | object | {} |
podDisruptionBudget | object | {} | Sets the pod disruption budget for Deployment pods
podSecurityContext | object | {} |
priorityClassName | string | "" | Set to an existing PriorityClass name to control pod preemption by the scheduler
probes.liveness | object | {"initialDelaySeconds":0} | Configure liveness probe
probes.readiness | object | {"initialDelaySeconds":0} | Configure readiness probe
replicaCount | int | 1 |
resources | object | {} |
restartPolicy | string | "Always" | Sets the restart policy of pods
rollingUpdate | object | {} | Sets the rolling update strategy parameters. Can take absolute values or % values.
router | object | {"args":["--hot-reload"],"configuration":{"health_check":{"listen":"0.0.0.0:8088"},"supergraph":{"listen":"0.0.0.0:4000"}}} | See https://www.apollographql.com/docs/graphos/reference/router/configuration#yaml-config-file for yaml structure
securityContext | object | {} |
service.annotations | object | {} |
service.port | int | 80 |
service.type | string | "ClusterIP" |
serviceAccount.annotations | object | {} |
serviceAccount.create | bool | true |
serviceAccount.name | string | "" |
serviceMonitor.enabled | bool | false |
serviceentry.enabled | bool | false |
supergraphFile | string | nil |
terminationGracePeriodSeconds | int | 30 | Sets the termination grace period for Deployment pods
tolerations | list | [] |
topologySpreadConstraints | list | [] | Sets the topology spread constraints for Deployment pods
virtualservice.enabled | bool | false |

Autogenerated from chart metadata using helm-docs v1.14.2

Special notes and gotchas:

  • When the router config isn't mapped to the container correctly, the router falls back to its default config, which listens on 0.0.0.0:4000

Using kind

Installation

https://kind.sigs.k8s.io/docs/user/quick-start/#installation

Getting Started with kind

https://kind.sigs.k8s.io/docs/user/configuration/

Triaging issues where the host machine can't reach the kind cluster:

Inspect and find the bridge network:

docker network ls | grep kind

then inspect the network:

docker network inspect <NETWORK_NAME>

Map your custom host to the IP on the Gateway line in your /etc/hosts. The relevant part of the network config looks like this:

"IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
          {
              "Subnet": "172.17.0.0/16",
              "Gateway": "172.17.0.1"
          }
      ]
  },
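
With the gateway above and a made-up hostname, the /etc/hosts entry would look like:

172.17.0.1  router.kind.local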
