

ST-K8s


View and chat with your Kubernetes cluster and container log files.

brew tap bhf/st-k8s
brew install st-k8s
st-k8s

Features a dashboard (with a K9s-inspired dark theme and keyboard navigation), a REST API, port forwarding management, resource monitoring, and an MCP server. In-browser AI chat is powered by the Copilot SDK, any OpenAI-compatible API provider, or local WebLLM models (requires WebGPU support).

Screenshots: chat, in-chat context, and chat history.

Uses GitHub Projects for planning and tracking.

Keyboard Navigation

The dashboard supports K9s-style keyboard navigation. Press : to open the command palette and navigate between resources using commands or aliases:

  • :pods or :po
  • :deployments or :deploy
  • :services or :svc
  • ...and many more standard K8s shortcuts.

Command Palette

Log Viewer

View, copy and download streaming logs.

Log view

Port Forwarding

Manage Kubernetes port forwarding sessions directly from the dashboard or through AI chat. Supports both Pods and Services.

  • Dynamic Config: Specify target ports and local interface bindings.
  • Service Mapping: Automatically resolves Service targets to active Pods.
  • Agentic Control: Start or stop forwards using natural language through the Copilot integration or MCP server.
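
For reference, each forward managed here corresponds to what you would otherwise run by hand with kubectl (the service and pod names below are hypothetical placeholders):

```shell
# Forward local port 8080 to port 80 of a Service, binding only to the
# loopback interface; Kubernetes resolves the Service to an active Pod.
kubectl port-forward --address 127.0.0.1 svc/my-service 8080:80

# Forward directly to a Pod instead of a Service.
kubectl port-forward pod/my-pod 5432:5432
```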

Port Forwarding

Resource Monitoring

Monitor CPU and memory usage for Nodes and Pods directly in the dashboard using interactive charts. Requires the Kubernetes Metrics Server to be installed in your cluster.

  • Real-time Data: Fetches live metrics from the Kubernetes Metrics Server.
  • Node Metrics: View cluster-wide resource utilization across all nodes.
  • Pod Metrics: Inspect resource consumption for individual pods in any namespace.
  • Visual Charts: Interactive Recharts-based visualizations for easier performance analysis.
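
If the charts stay empty, you can check from the command line whether the Metrics Server is installed and serving data (assuming the common kube-system deployment name, which may vary by distribution):

```shell
# Check that the Metrics Server deployment exists.
kubectl get deployment metrics-server -n kube-system

# If it is running, these should return live usage figures --
# the same data the dashboard charts are built from.
kubectl top nodes
kubectl top pods --all-namespaces
```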

Hardware Acceleration & WebGPU

ST-K8s supports local AI models running directly in your browser using WebLLM. This requires WebGPU and hardware acceleration to be enabled.

Google Chrome / Chromium

  1. Ensure you are on a recent version of Chrome.
  2. Enable WebGPU: Paste chrome://flags/#enable-unsafe-webgpu into your address bar and set it to Enabled.
  3. Enable Vulkan (Linux/Windows): Paste chrome://flags/#enable-vulkan and set it to Enabled.
  4. Relaunch Chrome.

Mozilla Firefox

  1. Type about:config in the address bar.
  2. Search for dom.webgpu.enabled and set it to true.
  3. Search for gfx.webgpu.force-enabled and set it to true if WebGPU doesn't work by default.
  4. macOS users may also need to ensure gfx.webrender.all is true.

Verification

You can verify WebGPU support by visiting webgpu.github.io/webgpu-samples. If the samples run, ST-K8s will be able to load local models.

How to Run

Using Homebrew (macOS/Linux)

The easiest way to install and run st-k8s is via Homebrew:

brew tap bhf/st-k8s
brew install st-k8s
st-k8s

From Source

To use the browser-based chat feature, make sure the Copilot CLI is installed.

git clone https://github.com/bhf/st-k8s
cd st-k8s
npm install
npm run build
npm run start

Using the st-k8s CLI

You can install the project as a global CLI to run the app using the st-k8s command.

# From the repo root — install globally (or publish and install from a registry)
npm install -g .

# During development, link the local package to make `st-k8s` available globally
npm link

# Then launch the app with the CLI (it will build if no build exists)
st-k8s

Notes:

  • npm install -g . requires appropriate permissions (use sudo on some systems).
  • npm link is useful when iterating locally — run it once from the repo root.
  • The st-k8s command will attempt to use a Next.js standalone server if present (from next build), otherwise it runs npm run start.
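
The server-resolution behaviour described in the last note can be pictured as the following sketch (the paths assume Next.js defaults with `output: 'standalone'`; the actual CLI logic may differ):

```shell
# Prefer the self-contained standalone server produced by `next build`
# when standalone output is configured; otherwise fall back to npm.
if [ -f .next/standalone/server.js ]; then
  node .next/standalone/server.js
else
  npm run start
fi
```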

Running Tests

This project uses Vitest for testing.

# Run all tests
npm test

# Run tests in watch mode
npm test -- --watch

# Run tests with coverage
npm run test:coverage

End-to-End Tests

This project uses Playwright for End-to-End testing.

# Run E2E tests
npm run test:e2e

Back to Top

API

The Swagger/OpenAPI spec is available at http://localhost:3000/openapi.json after starting the server, or from the public folder.
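
For example, you can fetch the spec and list the documented endpoints (this assumes the server is running locally on the default port; jq is optional):

```shell
# Download the OpenAPI spec exposed by the running server.
curl -s http://localhost:3000/openapi.json -o openapi.json

# List the documented endpoint paths (requires jq).
jq -r '.paths | keys[]' openapi.json
```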

Back to Top

Model Context Protocol (MCP) Server

This project includes an MCP server that exposes Kubernetes tools to LLMs over stdio. Here are some example uses:

  • List of pods
  • Rank containers by their memory requests and limits
  • Summary of the last events in the namespace
  • Get the last 100 lines of logs for a specific pod
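
Since the server speaks JSON-RPC over stdio, a minimal smoke test can pipe a handshake and a tools/list request into it. This is a sketch assuming standard newline-delimited MCP stdio framing; the client name and version fields are illustrative:

```shell
# Initialize, acknowledge, then ask the server which tools it exposes.
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"probe","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}' \
  | npm run mcp --silent
```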


Back to Top

Features

Exposes read-only Kubernetes operations as tools:

  • list_namespaces
  • list_pods
  • list_deployments
  • list_services
  • list_daemonsets
  • list_replicasets
  • list_statefulsets
  • list_ingresses
  • list_endpoints
  • list_events
  • list_pvcs
  • list_nodes
  • list_configmaps
  • list_jobs
  • list_cronjobs
  • list_serviceaccounts
  • list_roles
  • list_rolebindings
  • get_pod_logs
  • list_port_forwards
  • start_port_forward
  • stop_port_forward
  • get_node_metrics
  • get_pod_metrics

Back to Top

Running the MCP Server

Make sure to authenticate your kubectl context in your preferred way before running the MCP server.

You can run the MCP server directly using:

npm run mcp

You can also run it from VSCode or any MCP-compatible client by configuring it as shown below.

Back to Top

Configuring for VSCode

Add the following to your mcp.json:

{
  "servers": {
    "k8s-tools": {
      "command": "npm",
      "args": ["run", "mcp"],
      "cwd": "/absolute/path/to/st-k8s",
      "disabled": false,
      "autoApprove": [] 
    }
  }
}

Make sure to replace /absolute/path/to/st-k8s with the actual path to this repository on your machine.

Back to Top

LLM Integration Techniques

This project uses several LLM-based techniques to enhance the development lifecycle and user experience. These artifacts are located in the .github directory.

Back to Top

High Level Architecture

(High-level architecture diagram)

Back to Top

Accessibility

We are committed to making the dashboard accessible to all users. Please refer to our Accessibility Statement and Guidelines for details on current status, findings, and remediation plans.

Security

We take security seriously. Please refer to our Security Review for details on our security posture, findings, and recommendations.

Back to Top
