View and chat with your Kubernetes cluster and container log files.
```bash
brew tap bhf/st-k8s
brew install st-k8s
st-k8s
```

Features a dashboard (with a K9s-inspired dark theme and keyboard navigation), a REST API, port forwarding management, resource monitoring, and an MCP server. In-browser AI chat is powered by the Copilot SDK, any OpenAI API-compatible provider, or local WebLLM models (requires WebGPU support).
Uses GitHub Projects for planning and tracking.
The dashboard supports K9s-style keyboard navigation. Press `:` to open the command palette and navigate between resources using commands or aliases:

- `:pods` or `:po`
- `:deployments` or `:deploy`
- `:services` or `:svc`
- ...and many more standard K8s shortcuts.
View, copy and download streaming logs.
Manage Kubernetes port forwarding sessions directly from the dashboard or through AI chat. Supports both Pods and Services.
- Dynamic Config: Specify target ports and local interface bindings.
- Service Mapping: Automatically resolves Service targets to active Pods.
- Agentic Control: Start or stop forwards using natural language through the Copilot integration or MCP server.
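The Service Mapping step above can be pictured as a label-selector match. The following sketch uses hypothetical sample objects (not the project's actual code or data) to show how a Service's selector resolves to running Pods:

```javascript
// Resolve a Service's label selector to the Running Pods that match it,
// which is the kind of resolution a service-targeted forward performs.
// All names and objects below are illustrative samples.
const service = { name: 'web', selector: { app: 'web' } };
const pods = [
  { name: 'web-abc', labels: { app: 'web' }, phase: 'Running' },
  { name: 'db-xyz', labels: { app: 'db' }, phase: 'Running' },
];

function resolveServiceToPods(svc, allPods) {
  return allPods.filter(
    (p) =>
      p.phase === 'Running' &&
      Object.entries(svc.selector).every(([key, value]) => p.labels[key] === value)
  );
}

console.log(resolveServiceToPods(service, pods).map((p) => p.name)); // [ 'web-abc' ]
```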
Monitor CPU and memory usage for Nodes and Pods directly in the dashboard using interactive charts. Requires the Kubernetes Metrics Server to be installed in your cluster.
- Real-time Data: Fetches live metrics from the Kubernetes Metrics Server.
- Node Metrics: View cluster-wide resource utilization across all nodes.
- Pod Metrics: Inspect resource consumption for individual pods in any namespace.
- Visual Charts: Interactive Recharts-based visualizations for easier performance analysis.
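The Metrics Server reports CPU in millicores (e.g. `250m`) and memory with binary suffixes (e.g. `512Mi`). A minimal sketch of parsing such quantity strings into chart-ready numbers; the function names are illustrative, not the project's actual code:

```javascript
// Parse a Kubernetes CPU quantity: "250m" means 250 millicores,
// a bare number means whole cores.
function parseCpuMillicores(quantity) {
  return quantity.endsWith('m')
    ? Number(quantity.slice(0, -1))
    : Number(quantity) * 1000;
}

// Parse a Kubernetes memory quantity with a binary suffix (Ki/Mi/Gi) into bytes.
function parseMemoryBytes(quantity) {
  const units = { Ki: 2 ** 10, Mi: 2 ** 20, Gi: 2 ** 30 };
  const match = quantity.match(/^(\d+)(Ki|Mi|Gi)?$/);
  return Number(match[1]) * (units[match[2]] ?? 1);
}

console.log(parseCpuMillicores('250m')); // 250
console.log(parseMemoryBytes('512Mi')); // 536870912
```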
ST-K8s supports local AI models running directly in your browser using WebLLM. This requires WebGPU and hardware acceleration to be enabled.
For Chrome:

- Ensure you are on a recent version of Chrome.
- Enable WebGPU: paste `chrome://flags/#enable-unsafe-webgpu` into your address bar and set it to Enabled.
- Enable Vulkan (Linux/Windows): paste `chrome://flags/#enable-vulkan` and set it to Enabled.
- Relaunch Chrome.

For Firefox:

- Type `about:config` in the address bar.
- Search for `dom.webgpu.enabled` and set it to true.
- Search for `gfx.webgpu.force-enabled` and set it to true if WebGPU doesn't work by default.
- macOS users may also need to ensure `gfx.webrender.all` is true.
You can verify WebGPU support by visiting webgpu.github.io/webgpu-samples. If the samples run, ST-K8s will be able to load local models.
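A quicker sanity check is to probe the API directly from the DevTools console. This snippet is a generic browser check, not part of ST-K8s; in an environment without WebGPU it reports unavailability:

```javascript
// Detect whether the WebGPU API is exposed. Run this in the browser
// DevTools console; in plain Node.js (or a browser with WebGPU disabled)
// it reports that WebGPU is not available.
function webGPUSupported() {
  return typeof navigator !== 'undefined' && 'gpu' in navigator;
}

console.log(webGPUSupported() ? 'WebGPU API detected' : 'WebGPU not available');
```

Note that the API being present does not guarantee an adapter: a browser can expose `navigator.gpu` yet still fail to find usable GPU hardware, which is why the samples page above is the definitive test.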
The easiest way to install and run st-k8s is via Homebrew:
```bash
brew tap bhf/st-k8s
brew install st-k8s
st-k8s
```

To use the browser-based chat feature, make sure you install the Copilot CLI.
```bash
git clone https://github.com/bhf/st-k8s
cd st-k8s
npm run build
npm run start
```

You can install the project as a global CLI to run the app using the `st-k8s` command.
```bash
# From the repo root — install globally (or publish and install from a registry)
npm install -g .

# During development, link the local package to make `st-k8s` available globally
npm link

# Then launch the app with the CLI (it will build if no build exists)
st-k8s
```

Notes:

- `npm install -g .` requires appropriate permissions (use `sudo` on some systems).
- `npm link` is useful when iterating locally — run it once from the repo root.
- The `st-k8s` command will attempt to use a Next.js standalone server if present (from `next build`); otherwise it runs `npm run start`.
This project uses Vitest for testing.
```bash
# Run all tests
npm test

# Run tests in watch mode
npm test -- --watch

# Run tests with coverage
npm run test:coverage
```

This project uses Playwright for End-to-End testing.
```bash
# Run E2E tests
npm run test:e2e
```

The Swagger/OpenAPI spec is available at http://localhost:3000/openapi.json after starting the server, or from the public folder.
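As a sketch of consuming that endpoint, here is a small Node.js probe (it assumes the default port 3000 mentioned above and the standard OpenAPI document shape; it reports gracefully if the server is not running):

```javascript
// Fetch the OpenAPI spec from a locally running st-k8s server and
// summarize it. If no server is listening, report that instead of throwing.
async function fetchOpenApiSummary(base = 'http://localhost:3000') {
  try {
    const res = await fetch(`${base}/openapi.json`);
    const spec = await res.json();
    return `${spec.info?.title ?? 'spec'}: ${Object.keys(spec.paths ?? {}).length} paths`;
  } catch {
    return 'server not reachable';
  }
}

fetchOpenApiSummary().then(console.log);
```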
This project includes an MCP server that exposes Kubernetes tools to LLMs over stdio. Here are some example uses:
- List of pods
- Rank containers by their memory requests and limits
- Summary of the last events in the namespace
- Get the last 100 lines of logs for a specific pod
Exposes read-only Kubernetes operations as tools:

`list_namespaces`, `list_pods`, `list_deployments`, `list_services`, `list_daemonsets`, `list_replicasets`, `list_statefulsets`, `list_ingresses`, `list_endpoints`, `list_events`, `list_pvcs`, `list_nodes`, `list_configmaps`, `list_jobs`, `list_cronjobs`, `list_serviceaccounts`, `list_roles`, `list_rolebindings`, `get_pod_logs`, `list_port_forwards`, `start_port_forward`, `stop_port_forward`, `get_node_metrics`, `get_pod_metrics`
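Under the hood, an MCP client invokes these tools with JSON-RPC 2.0 `tools/call` requests over stdio. A sketch of such a request for `get_pod_logs`; the argument names are illustrative assumptions, not this server's documented schema:

```javascript
// Build the JSON-RPC 2.0 message an MCP client would write to the
// server's stdin to call the get_pod_logs tool. The argument names
// (namespace, pod, tailLines) are assumed for illustration.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_pod_logs',
    arguments: { namespace: 'default', pod: 'my-pod', tailLines: 100 },
  },
};

const wire = JSON.stringify(request);
console.log(wire);
```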
Make sure to authenticate your kubectl context in your preferred way before running the MCP server.
You can run the MCP server directly using:

```bash
npm run mcp
```

You can also run it from VS Code or any MCP-compatible client by configuring it as shown below.
Add the following to your `mcp.json`:
```json
{
  "servers": {
    "k8s-tools": {
      "command": "npm",
      "args": ["run", "mcp"],
      "cwd": "/absolute/path/to/st-k8s",
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
Make sure to replace `/absolute/path/to/st-k8s` with the actual path to this repository on your machine.
This project uses several LLM-based techniques to enhance the development lifecycle and user experience. These artifacts are located in the .github directory:
- Agents: Domain-specific personas which embody specialized knowledge for consistent code generation.
- Instructions: Contextual guidelines that enforce coding standards and architectural patterns.
- Skills: Reusable capabilities that allow the model to perform complex tasks.
- Prompts: Curated prompt templates ensuring high-quality, reproducible outputs for specific tasks.
We are committed to making the dashboard accessible to all users. Please refer to our Accessibility Statement and Guidelines for details on current status, findings, and remediation plans.
We take security seriously. Please refer to our Security Review for details on our security posture, findings, and recommendations.