╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ ██╗ ██╗██╗ ██╗██████╗ ███████╗███████╗████████╗███████╗██╗ ██╗ █████╗ ██████╗ │
│ ██║ ██╔╝██║ ██║██╔══██╗██╔════╝██╔════╝╚══██╔══╝██╔════╝██║ ██║ ██╔══██╗██╔══██╗ │
│ █████╔╝ ██║ ██║██████╔╝█████╗ ███████╗ ██║ █████╗ ██║ ██║ ███████║██████╔╝ │
│ ██╔═██╗ ██║ ██║██╔══██╗██╔══╝ ╚════██║ ██║ ██╔══╝ ██║ ██║ ██╔══██║██╔══██╗ │
│ ██║ ██╗╚██████╔╝██████╔╝███████╗███████║ ██║ ███████╗███████╗███████╗██║ ██║██║ ██║ │
│ ╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝╚══════╝ ╚═╝ ╚══════╝╚══════╝╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝ │
│ Multi-Cluster Kubernetes Management Agent │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
# Create a virtual environment
uv venv
# Install with uv
uv pip install -e ".[dev]"
# Run commands
uv run kubestellar --help
uv run kubestellar list-functions
uv run kubestellar execute <function_name>
uv run kubestellar agent  # Start interactive AI agent

You can install and use this project as a kubectl plugin. The primary plugin name is `kubestellar` (for Krew and release binaries); we also ship a Python-installed alias `a2a` for convenience. kubectl discovers plugins via executables named `kubectl-<name>` on your `PATH`.
Installation options:
# Using uv tool (recommended for per-user install)
uv tool install .
# Using pipx (isolated virtualenv)
pipx install .
# Or directly from GitHub
pipx install 'git+https://github.com/kubestellar/a2a'
# Using pip (installs into current Python environment)
python -m pip install .

Usage:
# kubectl will find the plugin as long as the executable is on PATH
# Python-installed alias (uv/pipx/pip):
kubectl a2a --help
kubectl a2a list-functions
kubectl a2a execute <function_name> -P key=value
kubectl a2a agent
# Krew or release binary name:
kubectl kubestellar --help
kubectl kubestellar list-functions
kubectl kubestellar execute <function_name> -P key=value
kubectl kubestellar agent

Notes:
- The plugin entrypoint is provided by the executable `kubectl-a2a`, which is installed via the Python package entry points. This makes `kubectl a2a` behave the same as running the `kubestellar` CLI directly.
- With `uv tool install`, executables are placed under `~/.local/bin` by default. Ensure that directory is on your `PATH`.
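If `kubectl` cannot find the plugin after installation, a quick sanity check looks like this (bash/zsh assumed; adjust for your shell):
# Make sure uv's default install location is on PATH for this shell
export PATH="$HOME/.local/bin:$PATH"
# Verify the executable exists and that kubectl can see it
command -v kubectl-a2a
kubectl plugin list | grep a2a || true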
Once a release is published, you can use the generated Krew manifest to install:
# Install from the generated manifest attached to a release (name: kubestellar.yaml)
kubectl krew install --manifest=kubestellar.yaml
# Use the plugin
kubectl kubestellar --help

To make the plugin installable from the central krew-index (i.e., via `kubectl krew install kubestellar`), submit a PR to https://github.com/kubernetes-sigs/krew-index with the `kubestellar.yaml` manifest from your release.
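For example, assuming a published release tagged `v0.1.0` (the tag and asset URL below are illustrative; use the assets from your actual release):
# Fetch the kubestellar.yaml manifest attached to the release, then install it via Krew
curl -fsSLO https://github.com/kubestellar/a2a/releases/download/v0.1.0/kubestellar.yaml
kubectl krew install --manifest=kubestellar.yaml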
You can install the plugin by placing a binary named kubectl-kubestellar (or kubectl-kubestellar.exe on Windows) on your PATH. Alternatively, the Python package installs kubectl-a2a which kubectl also discovers.
Option A — use a release binary (kubectl-kubestellar):
# Download the tarball for your OS/arch from the latest Release
tar -xzf kubectl-kubestellar-<os>-<arch>.tar.gz
chmod +x kubectl-kubestellar
mv kubectl-kubestellar ~/.local/bin/ # or any dir on your PATH
# verify
which kubectl-kubestellar
kubectl plugin list | grep kubestellar || true
kubectl kubestellar --help

Option B — build locally and copy to PATH (kubectl-kubestellar):
uv sync --dev
uv pip install pyinstaller
uv run pyinstaller --onefile --name kubectl-kubestellar --distpath dist --workpath build packaging/entry_kubectl_a2a.py
install -m 0755 dist/kubectl-kubestellar ~/.local/bin/kubectl-kubestellar

Option C — reuse the Python entrypoint by symlink (alias a2a):
# If you've installed the package via uv tool/pipx and have `kubestellar` on PATH,
# create a symlink named kubectl-kubestellar pointing to it
ln -sf "$(command -v kubestellar)" ~/.local/bin/kubectl-kubestellar
kubectl kubestellar --help

Windows:
- Use the `.exe` from the Windows release archive and place it in a directory on your `%PATH%`.
- Or create a `kubectl-a2a.bat` that forwards to `kubestellar.exe` if the symlink approach isn’t convenient.
- kubectl discovers plugins by searching your `PATH` for executables named `kubectl-<name>` (per the official docs). Examples: `kubectl-kubestellar`, `kubectl-a2a`.
- When you run `kubectl <name> ...`, kubectl executes the first `kubectl-<name>` found on `PATH` and passes the arguments through.
- List discovered plugins with `kubectl plugin list`.
- Krew is a plugin manager that installs such binaries under its own path; `kubectl krew install <name>` makes `<name>` available as `kubectl <name>`.
- This project provides `kubectl-kubestellar` (for Krew and binary releases) and `kubectl-a2a` (Python entrypoint). Once on your `PATH`, use `kubectl kubestellar ...` or `kubectl a2a ...`.
Reference: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
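To see this discovery mechanism in action, here is a throwaway plugin you can create and remove in a minute (the name `hello` is made up purely for illustration):
# Any executable named kubectl-<name> on PATH becomes `kubectl <name>`
cat > ~/.local/bin/kubectl-hello <<'EOF'
#!/usr/bin/env bash
echo "hello from a kubectl plugin, args: $*"
EOF
chmod +x ~/.local/bin/kubectl-hello
kubectl hello world             # kubectl finds kubectl-hello and passes "world" through
kubectl plugin list             # the new plugin appears alongside kubectl-kubestellar / kubectl-a2a
rm ~/.local/bin/kubectl-hello   # clean up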
For users who installed via uv tool from GitHub:
# Install latest from main
uv tool install --upgrade 'git+https://github.com/kubestellar/a2a@main'
# Or upgrade an existing install (uses original source spec)
uv tool upgrade kubestellar

For pipx installs:
pipx upgrade kubestellar
# or reinstall from GitHub
pipx install --force 'git+https://github.com/kubestellar/a2a@main'

This repo includes an automated release workflow that builds platform-specific plugin binaries and publishes a Krew manifest.
Steps:
- Create a version tag and push it, e.g.: `git tag v0.1.0 && git push origin v0.1.0`
- Or run the workflow manually via Actions → Release → Run workflow (optional tag input).
- Or publish a GitHub Release with a tag like `vX.Y.Z` to trigger the workflow.
- The GitHub Actions workflow `.github/workflows/release.yml` then runs and produces:
  - Tarballs for `kubectl-kubestellar` on Linux amd64, macOS amd64/arm64, and Windows amd64
  - SHA256 checksums
  - A `kubestellar.yaml` Krew manifest with versioned asset URLs and checksums
  - A GitHub Release containing the above assets
Users can then install via Krew using the attached manifest, or you can submit it to the central krew-index.
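Before installing, you can verify a downloaded tarball against the published checksum (the asset name below is illustrative, following the `<os>-<arch>` pattern above):
# Compute the tarball's SHA256 and compare it with the checksum published in the release
sha256sum kubectl-kubestellar-linux-amd64.tar.gz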
- If the workflow didn’t run: ensure the tag matches `v*.*.*`, Actions are enabled, and branch protections allow workflows.
- For manual runs: use the `workflow_dispatch` entry and optionally provide the `tag` input.
- For Release events: make sure the release is “published” (not a draft or prerelease) and has a tag like `vX.Y.Z`.
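If you need to dig further into why a release didn’t build, a few quick checks (the tag shown is illustrative; the last command assumes the GitHub CLI `gh` is installed):
# Confirm the tag exists locally and was pushed to origin
git tag --list 'v*'
git ls-remote --tags origin | grep v0.1.0
# List recent runs of the release workflow (requires GitHub CLI)
gh run list --workflow=release.yml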
You can test kubectl a2a locally in two ways.
- Using uv tool (recommended):
# From the repo root
uv tool uninstall kubectl-a2a || true
uv tool uninstall kubestellar || true
uv tool install .
# Ensure ~/.local/bin is on PATH, then verify
which kubectl-a2a
kubectl plugin list | grep a2a || true
kubectl a2a --help
kubectl a2a list-functions
# Debug kubectl plugin discovery if needed
kubectl -v=6 a2a --help

- Using a locally built single-file binary (optional):
# Build the binary
uv sync --dev
uv pip install pyinstaller
uv run pyinstaller --onefile --name kubectl-a2a --distpath dist --workpath build packaging/entry_kubectl_a2a.py
# Put it on PATH for this shell and test
export PATH="$PWD/dist:$PATH"
kubectl plugin list | grep a2a || true
kubectl a2a --help

Notes:
- If your shell caches command paths, run `hash -r` (bash/zsh) after replacing the binary.
- On Windows, ensure `dist/kubectl-a2a.exe` is on your `PATH`.
- If you use `pip` rather than `pipx`, ensure the Python scripts directory is on your `PATH` (e.g., `~/.local/bin` on Linux, `~/Library/Python/<version>/bin` on macOS, or the virtual environment's `bin`).
- For isolated installs, `pipx` is the simplest way to get `kubectl-a2a` onto your `PATH`.
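For pip user installs, one way to locate that scripts directory is via Python itself (a sketch; the exact layout varies by platform and Python version):
# The user scripts directory lives under the user base ("bin" on Linux/macOS, "Scripts" on Windows)
python -m site --user-base
# Add it to PATH for the current shell (Linux/macOS)
export PATH="$(python -m site --user-base)/bin:$PATH"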
The agent supports multiple AI providers:
- OpenAI (GPT-4, GPT-4o, etc.)
- Google Gemini (gemini-1.5-flash, gemini-1.5-pro, etc.)
# Set OpenAI API key
uv run kubestellar config set-key openai YOUR_OPENAI_API_KEY
# Set Gemini API key
uv run kubestellar config set-key gemini YOUR_GEMINI_API_KEY
# Set default provider
uv run kubestellar config set-default gemini
# List configured providers
uv run kubestellar config list-keys
# Show current configuration
uv run kubestellar config show

# Use default provider
uv run kubestellar agent
# Use specific provider
uv run kubestellar agent --provider gemini
uv run kubestellar agent --provider openai

Add to Claude Desktop's MCP server configuration (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"kubestellar": {
"command": "uv",
"args": ["run", "kubestellar-mcp"],
"cwd": "/path/to/a2a"
}
}
}

📖 Complete Documentation: https://kubestellar.github.io/a2a/
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
- Built with MCP SDK
- Inspired by the KubeStellar project for multi-cluster Kubernetes management
- Thanks to all contributors and the open-source community
Made with ❤️ by the KubeStellar community