Enei

Your choice if Envoy proxy feels too heavy for the task.

Mostly used as an ambassador container in Kubernetes, but it runs everywhere. Based on Bun >1.3, zero dependencies.

Why?

This container can be dropped into a Kubernetes Deployment and act as middleware between the Kubernetes Service and the application container in the pod. By default it prints all requests and their responses to stdout. This way you can watch them with kubectl logs or do whatever you want with them, for example forward them to Grafana Loki.
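
For example, with the Deployment shown further down, you can follow Enei's log stream like this (a sketch; the Deployment and container names are taken from that example):

# Follow the log output of the Enei sidecar (names from the example Deployment below)
kubectl logs deployment/myapp -c myapp-enei-inbound --follow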

Basically it does what socat -v TCP4-LISTEN:42144,reuseaddr,fork TCP4:example.com:42118 does, but in a nicely formatted way.

If you feel comfortable with Python, you can also use mitmproxy for the task. But I did not. 🤷‍♂️

Features

  • 📝 Configure logging of requests and responses (that's not easy with nginx, Traefik or HAProxy). You can print metadata, headers, the body, even secrets (if you really want to). Everything is configurable via environment variables.
    • Supports colored output, can send HTTP response code >= 400 to stderr
    • Masks all secrets such as Bearer token, X-Api-Key, Basic Auth, X-Token (unless you configure …SHOW_SECRETS).
    • One line per request, one line per response, no logspam. And with unique ID per request (uses Bun.randomUUIDv7(), a sequential ID based on the current timestamp).
  • 🔐 Handles compression, TLS versions and custom CA (for example company wide root certificates) so you don't have to tweak your existing app.
  • 🐢 Can delay network requests on specific paths and request bodies (uses RegExp, test them for example on RegExr); see the timing sketch after this list. Make sure Enei has enough RAM to keep the data in memory while waiting.
  • 📨 Inject or overwrite custom headers to your requests.
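
A quick way to see the delay feature is to time a delayed request against an undelayed one (a sketch, assuming Enei listens locally on port 42144 with the default ENEI_DELAY_1_PATH_REGEX and ENEI_DELAY_1_MILLISECONDS from the Config section below):

# Sketch: the defaults delay paths matching ^\/delayed\/ by 5001 ms
curl -s -o /dev/null -w 'undelayed: %{time_total}s\n' "http://localhost:42144/get"
curl -s -o /dev/null -w 'delayed:   %{time_total}s\n' "http://localhost:42144/delayed/get"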

Warning

Still in development phase, please file issues for bugs you find!

Config

ENV-Variable | Description | Default / Notes
PORT | Listener port | 42144
ENEI_DESTINATION | Destination URL. You can also specify protocol and port here. | https://postman-echo.com
ENEI_DELAY_1_PATH_REGEX | Regex on URL.pathname + URL.search to delay request forwarding. Enei will simply wait before sending the request to the destination. Useful for debugging. | ^\/delayed\/
ENEI_DELAY_1_BODY_REGEX | Regex on the request body to delay request forwarding, like the path regex above. If either path or body matches, the delay is applied. Right now there is no possibility to delay on the response body, file an issue if you think that should be supported. ☕ | (empty)
ENEI_DELAY_1_MILLISECONDS | The duration to delay request forwarding. Note: The delays are tested sequentially, so if all three match you get a delay of 15 seconds in this example! | 5001
ENEI_DELAY_2_PATH_REGEX | Same as ENEI_DELAY_1_PATH_REGEX | (empty)
ENEI_DELAY_2_BODY_REGEX | Same as ENEI_DELAY_1_BODY_REGEX | (empty)
ENEI_DELAY_2_MILLISECONDS | Same as ENEI_DELAY_1_MILLISECONDS | 3002
ENEI_DELAY_3_PATH_REGEX | Same as ENEI_DELAY_1_PATH_REGEX | (empty)
ENEI_DELAY_3_BODY_REGEX | Same as ENEI_DELAY_1_BODY_REGEX | (empty)
ENEI_DELAY_3_MILLISECONDS | Same as ENEI_DELAY_1_MILLISECONDS | 7003
ENEI_FORWARD_CUSTOM_HEADERS | Custom headers added to the request. Should be a JSON object as a string (gets parsed by Enei), like {"x-api-key": "token-42"}. Will overwrite existing headers (that's a feature!) |
ENEI_BACKWARD_CUSTOM_HEADERS | Custom headers added to the response. Should be a JSON object as a string (gets parsed by Enei), like {"x-api-key": "token-42"}. Will overwrite existing headers (that's a feature!) |
ENEI_LOG_IGNORE | Regex on URL.pathname + URL.search to exclude from log output. Matching requests (e.g. /health) are still forwarded to the ENEI_DESTINATION server, just not logged. Use /enei/health to check Enei itself. | ^\/health(z?)$
ENEI_LOG_COLORIZE | Colorize log in terminal | true
ENEI_LOG_STATUSCODE_STDERR | Output to stderr if HTTP response code is >= 400 | false
ENEI_LOG_FORWARD | Print request | true
ENEI_LOG_FORWARD_HEADERS | Print request headers | false
ENEI_LOG_FORWARD_HEADERS_SHOW_SECRETS | Print sensitive request headers. By default printed as [redacted]. | false
ENEI_LOG_FORWARD_BODY | Print request body | false
ENEI_LOG_FORWARD_BODY_CAP | Cap request body after char count | 1024
ENEI_LOG_BACKWARD | Print response | true
ENEI_LOG_BACKWARD_HEADERS | Print response headers | false
ENEI_LOG_BACKWARD_HEADERS_SHOW_SECRETS | Print sensitive response headers. By default printed as [redacted]. | false
ENEI_LOG_BACKWARD_BODY | Print response body | false
ENEI_LOG_BACKWARD_BODY_CAP | Cap response body after char count | 1024
HTTP_PROXY, HTTPS_PROXY, NO_PROXY | Proxy configuration, supported natively by Bun |
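
As a quick illustration of how these variables fit together, here is a sketch of a docker run invocation (the destination, header and delay values are just examples):

# Sketch only, adjust the values to your needs
docker run --rm -p 42144:42144 \
  -e ENEI_DESTINATION="https://postman-echo.com" \
  -e ENEI_FORWARD_CUSTOM_HEADERS='{"x-api-key": "token-42"}' \
  -e ENEI_DELAY_1_PATH_REGEX='^\/delayed\/' \
  -e ENEI_DELAY_1_MILLISECONDS="5001" \
  -e ENEI_LOG_FORWARD_HEADERS="true" \
  ghcr.io/martinschilliger/enei:latest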

If you have custom SSL certificates, mount them into the file system at a path like /config/cafile.crt (you can concatenate multiple certificates into the same file) and instruct Bun to read it via NODE_EXTRA_CA_CERTS.
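
For example (a sketch, the certificate file names are placeholders): concatenate your certificates into one bundle, mount it, and point NODE_EXTRA_CA_CERTS at it.

# Sketch: COMPANY-ROOT-CA.crt and COMPANY-INTERMEDIATE-CA.crt are placeholder file names
cat COMPANY-ROOT-CA.crt COMPANY-INTERMEDIATE-CA.crt > cafile.crt
docker run --rm -p 42144:42144 \
  -v "$(pwd)/cafile.crt:/config/cafile.crt:ro" \
  -e NODE_EXTRA_CA_CERTS="/config/cafile.crt" \
  ghcr.io/martinschilliger/enei:latest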

Running it

Example Kubernetes configuration

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-enei-inbound
        image: ghcr.io/martinschilliger/enei:latest # always specify a release version, e.g. enei:v0.4.0
        imagePullPolicy: IfNotPresent # as always, recommended for production use
        ports:
          - containerPort: 42144
            name: http
        env:
          - name: PORT # Specify where Enei should listen and configure your app to point to http://localhost:42144
            value: "42144"
          - name: ENEI_DESTINATION # Specify your endpoint, can also be an external host like https://postman-echo.com
            value: "http://localhost:42118"
          - name: ENEI_DELAY_1_PATH_REGEX
            value: "^\\/api\\/incidents\\/(keyword|protocol)"
          - name: ENEI_DELAY_1_MILLISECONDS
            value: "5000"
          - name: ENEI_LOG_IGNORE
            value: "^\\/HealthCheck$"
          - name: ENEI_LOG_FORWARD
            value: "true"
          - name: ENEI_LOG_FORWARD_HEADERS
            value: "false"
          - name: ENEI_LOG_FORWARD_BODY
            value: "true"
          - name: ENEI_LOG_BACKWARD
            value: "true"
          - name: ENEI_LOG_BACKWARD_HEADERS
            value: "false"
          - name: ENEI_LOG_BACKWARD_BODY
            value: "true"
          - name: HTTP_PROXY
            value: "http://proxy.corporate.local:8080"
          - name: HTTPS_PROXY
            value: "http://proxy.corporate.local:8080"
          - name: NO_PROXY
            value: "localhost,.corporate.local"
          - name: NODE_USE_SYSTEM_CA # https://github.com/oven-sh/bun/issues/24581
            value: "true"
          - name: NODE_EXTRA_CA_CERTS
            value: "/config/cafile.crt"
        volumeMounts:
          - mountPath: /config/cafile.crt
            name: internal-root-ca
            subPath: COMPANY-ROOT-CA.crt
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
          runAsGroup: 1000
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /enei/health
            port: http
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /enei/health
            port: http
          initialDelaySeconds: 2
          periodSeconds: 5
          failureThreshold: 3
      - name: myapp
        image: yourregistry/yourapp:latest
        ports:
          - containerPort: 42118
        env:
          - name: MYAPP_ENDPOINT # You could also use Enei for outbound connections, just specify a second Enei container in the same pod and adjust values
            value: "http://localhost:42122"
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: rescuetrack
      volumes:
      - configMap:
          name: internal-root-ca
        name: internal-root-ca
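
The internal-root-ca ConfigMap referenced in the volumes section could be created like this (a sketch; the local certificate path and the deployment.yaml file name are assumptions):

# Sketch: create the ConfigMap referenced above, then apply the Deployment
kubectl create configmap internal-root-ca \
  --from-file=COMPANY-ROOT-CA.crt=./COMPANY-ROOT-CA.crt
kubectl apply -f deployment.yaml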

Testing it locally

Assuming you already have a container runtime working on your machine:

docker build -t ghcr.io/martinschilliger/enei:latest .
docker run -it -p 42144:42144 --rm ghcr.io/martinschilliger/enei:latest

Now open http://localhost:42144 in your favorite browser. You will see the browser's GET request in the log output. If you want to see body data, send a request with curl:

curl -X POST "http://localhost:42144/post?foo=bar" -d '{"blubb":"blabb"}' -H "Content-Type: application/json"

If you don't supply an ENEI_DESTINATION, your data is simply mirrored back via the default destination (https://postman-echo.com).
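
To see the secret masking in action, you can start the container with request header logging enabled and send a dummy token (a sketch; run the two commands in separate terminals):

# Sketch: enable request header logging
docker run -it --rm -p 42144:42144 \
  -e ENEI_LOG_FORWARD_HEADERS="true" \
  ghcr.io/martinschilliger/enei:latest

# In a second terminal: the Authorization header is logged as [redacted]
# unless ENEI_LOG_FORWARD_HEADERS_SHOW_SECRETS is set to "true"
curl "http://localhost:42144/get" -H "Authorization: Bearer my-secret-token"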

TODO

The following features could be nice to have, but have not yet been implemented:

  • SIGTERM: Wait for ongoing requests (especially important with delayed requests) to finish.
  • Supply random failures for testing

And there is also some code cleanup needed:

  • Move the process.env.XYZ === "true" checks into a solid Boolean test function or, even better, a global config object
  • Add proper testing

Development

Deploy on GitHub

  1. Commit the changes to git: git add ., git commit -m "My message"
  2. Create a new version: npm version major|minor|patch
  3. Push Commits and Tags: git push origin main --follow-tags

Install dependencies

Nothing to do. There are no dependencies, it's all plain Bun 🥳.

> bun install

Run locally

bun run dev
curl 'http://localhost:42144/put' -H 'Accept-Encoding: gzip' -T tests/request-body.json -v

# maybe easier during dev work. Make sure watch and jq are installed. 
# Add -v to curl if you want to see the headers
watch -c "curl 'http://localhost:42144/put' -H 'Accept-Encoding: gzip' -T tests/request-body.json --silent | jq -C"

Run tests

Not implemented yet!

bun test

Imprint

At Schutz & Rettung Zürich we run the dispatch center for medical and firefighting emergencies. Most of our newer applications run in Kubernetes and need to be highly available and fully transparent about what they are doing, because if something goes wrong during an emergency call, we need to make sure the same mistake cannot happen twice.

Logo of Schutz & Rettung Zürich

We often need to replay *exactly* what happened, so the request and response bodies are important. And because we run the service for our own staff only and on-premises, privacy is already handled.

We thought about using tools like Envoy or plain socat, but we also wanted the logs to be nicely formatted because we forward them to Grafana Loki, our centralized log storage. And mitmproxy just felt too complicated for daily use.
