Current Status: MCP Hub v4.2.1 is production-ready with comprehensive test coverage, zero critical bugs in core functionality, and an active web UI under development. Currently running with 12 connected MCP servers providing 108+ tools including AI assistance, UI development, documentation, memory, web browsing, version control, vector search, deployment, ML models, and browser automation. The project is actively maintained with regular updates.
Quick Links: Installation Β· Configuration Β· REST API Β· Testing Β· Roadmap Β· Contributing
MCP Hub acts as a central coordinator for MCP servers and clients, providing two key interfaces:
- Management Interface (/api/*): Manage multiple MCP servers through a unified REST API and web UI
- MCP Server Interface (/mcp): Connect ANY MCP client to access ALL server capabilities through a single endpoint
This dual-interface approach means you can manage servers through the Hub's UI while MCP clients (Claude Desktop, Cline, etc.) only need to connect to one endpoint (localhost:7000/mcp) to access all capabilities. Implements MCP 2025-03-26 specification.
- Workspace Cache Improvements (v4.2.1): Enhanced lock file handling to prevent persistent deadlocks from crashed processes
- VS Code Configuration Compatibility (v4.2.0): Full support for `.vscode/mcp.json` files with VS Code-style variable syntax (`${env:}`, `${workspaceFolder}`, etc.) for seamless migration from VS Code
- Enhanced Workspace Management (v4.1.x): Real-time workspace lifecycle tracking with detailed state management and SSE event streaming
- Multiple Configuration Files (v4.1.0): Support for layered configuration with intelligent merging (e.g., global + project configs)
- LLM SDK Upgrade: Migration to official OpenAI and Anthropic SDKs with automatic retries, typed errors, and better observability
- Comprehensive Test Coverage: 530+ backend tests with strategic branch coverage exceeding industry standards
- Stable Core: Production-tested with zero critical bugs in core server functionality
- 96%+ ESLint Compliance: Clean, maintainable codebase following best practices
- Zero Memory Leaks: Comprehensive resource cleanup with idempotent patterns
- Enterprise Features: HTTP connection pooling, prompt-based tool filtering, workspace management, and real-time event streaming
- Active Deployment: Currently running stable with 12+ connected servers and 108+ available tools
| Component | Status | Maturity | Notes |
|---|---|---|---|
| Core Server | ✅ Stable | Production | Zero critical bugs, comprehensive error handling |
| STDIO Transport | ✅ Stable | Production | Battle-tested with dev mode hot-reload |
| SSE Transport | ✅ Stable | Production | Reliable with automatic reconnection |
| streamable-http | ✅ Stable | Production | Primary transport for remote servers |
| OAuth 2.0 (PKCE) | ✅ Stable | Production | Full authentication flow support |
| Tool Filtering | ✅ Stable | Production | Reduces token usage by 60-85% |
| HTTP Connection Pool | ✅ Stable | Production | 10-30% latency improvement |
| Workspace Management | ✅ Stable | Production | Multi-instance coordination |
| Real-time Events (SSE) | ✅ Stable | Production | Event batching with deduplication |
| Marketplace Integration | ✅ Stable | Production | MCP Registry with 1-hour cache |
| VS Code Compatibility | ✅ Stable | Production | Full .vscode/mcp.json support |
| Configuration System | ✅ Stable | Production | Multi-file, VS Code compatible |
| Web UI | 🚧 In Progress | Beta | React-based UI under active development |
| TUI | 🚧 Planned | Future | Inspired by mcphub.nvim |
| Category | Feature | Support | Notes |
|---|---|---|---|
| Transport | streamable-http | ✅ | Primary transport protocol for remote servers |
| | SSE | ✅ | Fallback transport for remote servers |
| | STDIO | ✅ | For running local servers |
| Authentication | OAuth 2.0 | ✅ | With PKCE flow |
| | Headers | ✅ | For API keys/tokens |
| Capabilities | Tools | ✅ | List tools |
| | Tool List Changed | ✅ | Real-time updates |
| | Resources | ✅ | Full support |
| | Resource List Changed | ✅ | Real-time updates |
| | Resource Templates | ✅ | URI templates |
| | Prompts | ✅ | Full support |
| | Prompts List Changed | ✅ | Real-time updates |
| | Roots | ❌ | Not supported |
| | Sampling | ❌ | Not supported |
| | Completion | ❌ | Not supported |
| Marketplace | Server Discovery | ✅ | Browse available servers |
| | Installation | ✅ | Auto configuration |
| Real-time | Status Updates | ✅ | Server & connection state |
| | Capability Updates | ✅ | Automatic refresh |
| | Event Streaming to clients | ✅ | SSE-based |
| | Auto Reconnection | ✅ | With backoff |
| Development | Hot Reload | ✅ | Auto restart an MCP server on file changes with dev mode |
| Configuration | `${}` Syntax | ✅ | Environment variables and command execution across all fields |
| | VS Code Compatibility | ✅ | Supports the `servers` key, `${env:}`, `${input:}`, and predefined variables |
| | JSON5 Support | ✅ | Comments and trailing commas in configuration files |
Configure all MCP clients with just one endpoint:
{
"mcpServers" : {
"Hub": {
"url" : "http://localhost:7000/mcp"
}
}
}
The Hub automatically:
- Namespaces capabilities to prevent conflicts (e.g., `filesystem__search` vs `database__search`)
- Routes requests to the appropriate server
- Updates capabilities in real-time when servers are added/removed
- Handles authentication and connection management
Live Status: Currently serving 108+ tools from 12 connected servers through a single endpoint at `localhost:7000/mcp`.
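For reference, a minimal client sketch (assuming the official `@modelcontextprotocol/sdk` JavaScript package; the `filesystem__search` tool name is purely illustrative) that connects to the Hub's single endpoint and lists the namespaced tools:

```javascript
// Minimal sketch: connect an MCP client to the Hub's single /mcp endpoint.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-client", version: "1.0.0" });
const transport = new StreamableHTTPClientTransport(new URL("http://localhost:7000/mcp"));
await client.connect(transport);

// Tool names arrive namespaced as <server>__<tool>, e.g. "filesystem__search".
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

// Call a namespaced tool (hypothetical name shown for illustration).
const result = await client.callTool({
  name: "filesystem__search",
  arguments: { query: "README" },
});
console.log(result);

await client.close();
```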
- Unified MCP Server Endpoint (/mcp):
- Single endpoint for ALL MCP clients to connect to
- Access capabilities from all managed servers through one connection
- Automatic namespacing prevents conflicts between servers
- Real-time capability updates when servers change
- Simplified client configuration - just one endpoint instead of many
- Intelligent Prompt-Based Tool Filtering:
- Zero-default tool exposure - clients start with only meta-tools
- LLM-powered intent analysis using Gemini
- Dynamic tool exposure based on user prompts
- Per-client session isolation
- Context-aware tool selection
- See Prompt-Based Filtering Guide
- 🚧 Web UI (In Development):
- Server management dashboard with real-time status (in progress)
- Visual configuration editor (planned)
- Tool browser and search interface (planned)
- Monitoring dashboard with filtering statistics (planned)
- Will be available at `localhost:7000` when completed
- Dynamic Server Management:
- Start, stop, enable/disable servers on demand
- Real-time configuration updates with automatic server reconnection
- Support for local (STDIO) and remote (streamable-http/SSE) MCP servers
- Health monitoring and automatic recovery
- OAuth authentication with PKCE flow
- Header-based token authentication
- Unified REST API:
- Execute tools from any connected server
- Access resources and resource templates
- Real-time status updates via Server-Sent Events (SSE)
- Full CRUD operations for server management
- Real-time Events & Monitoring:
- Live server status and capability updates
- Client connection tracking
- Tool and resource list change notifications
- Structured JSON logging with file output
- Client Connection Management (see the SSE sketch after this list):
- Simple SSE-based client connections via /api/events
- Automatic connection cleanup on disconnect
- Optional auto-shutdown when no clients connected
- Real-time connection state monitoring
- Process Lifecycle Management:
- Graceful startup and shutdown handling
- Proper cleanup of server connections
- Error recovery and reconnection
- Workspace Management:
- Track active MCP Hub instances across different working directories
- Global workspace cache in XDG-compliant state directory
- Real-time workspace updates via SSE events
- API endpoints to list and monitor active workspaces
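As a sketch of the client-connection flow described above (any SSE client works; `EventSource` is shown here, with the port matching your `--port` setting):

```javascript
// Minimal sketch: subscribe to MCP Hub's SSE stream at /api/events.
// Node < 20 may need an EventSource polyfill such as the "eventsource" package.
const events = new EventSource("http://localhost:7000/api/events");

events.addEventListener("hub_state", (event) => {
  const { state } = JSON.parse(event.data);
  console.log("Hub state:", state);
});

events.addEventListener("tool_list_changed", (event) => {
  const { server, tools } = JSON.parse(event.data);
  console.log(`Server ${server} now exposes ${tools.length} tools`);
});

// Closing the stream lets the Hub clean up the connection (and, if enabled,
// start its auto-shutdown timer once no clients remain).
// events.close();
```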
The main management server that:
- Maintains connections to multiple MCP servers
- Provides unified API access to server capabilities
- Handles server lifecycle and health monitoring
- Manages SSE client connections and events
- Processes configuration updates and server reconnection
Connected services that:
- Provide tools, resources, templates, and prompts
- Support two connectivity modes:
- Script-based STDIO servers for local operations
- Remote servers (streamable-http/SSE) with OAuth support
- Implement real-time capability updates
- Support automatic status recovery
- Maintain consistent interface across transport types
# Install Bun if you haven't already
curl -fsSL https://bun.sh/install | bash
# Install MCP Hub globally
bun install -g mcp-hub
# or
npm install -g mcp-hub

Start the hub server:
mcp-hub --port 3000 --config path/to/config.json
# Or with multiple config files (merged in order)
mcp-hub --port 3000 --config ~/.config/mcphub/global.json --config ./.mcphub/project.json

Options:
--port Port to run the server on (required)
--config Path to config file(s). Can be specified multiple times. Merged in order. (required)
--watch Watch config file for changes, only updates affected servers (default: false)
--auto-shutdown Whether to automatically shutdown when no clients are connected (default: false)
--shutdown-delay Delay in milliseconds before shutting down when auto-shutdown is enabled (default: 0)
-h, --help            Show help information

MCP Hub uses JSON configuration files to define managed servers with universal `${}` placeholder syntax for environment variables and command execution.
MCP Hub provides a comprehensive JSON Schema for configuration validation and IDE support:
Schema Files:
- `config.schema.json` - JSON Schema (v7) for validation
- `config.schema.d.ts` - TypeScript type definitions
- `docs/CONFIG_SCHEMA.md` - Complete documentation
Enable IDE Support:
Add this to the top of your mcp-servers.json:
{
"$schema": "./config.schema.json",
"mcpServers": {
// Your configuration with autocomplete and validation
}
}
This enables:
- ✅ Autocomplete for all configuration options
- ✅ Inline validation with error messages
- ✅ Hover documentation for properties
- ✅ Type checking for TypeScript projects
Validate Configuration:
# Validate your configuration file
bun run validate:config [path/to/config.json]
# Or use the standalone script
bun scripts/validate-config.js mcp-servers.json

Example Output:
MCP Hub Configuration Validator
Validating: mcp-servers.json
✅ Schema loaded
✅ Config parsed
✅ Configuration is valid!
Summary:
- Servers: 12
- Connection pooling: enabled
- Tool filtering: prompt-based mode
TypeScript Support:
import type { McpHubConfig } from 'mcp-hub/config.schema';
const config: McpHubConfig = {
connectionPool: {
enabled: true,
maxConnections: 100
},
mcpServers: {
// Fully typed configuration
}
};

See docs/CONFIG_SCHEMA.md for complete schema documentation including:
- All configuration properties and validation rules
- Connection pool configuration
- Tool filtering modes
- Transport types (STDIO, SSE, streamable-http)
- Environment variable resolution
- Best practices and examples
When connecting to remote MCP servers that require OAuth authentication (like Vercel), you may need to configure a public URL for OAuth redirect callbacks:
# Set the public URL that Vercel can reach for OAuth callbacks
export MCP_HUB_PUBLIC_URL="https://your-public-domain.com"
npm start

For local development, you can use tunneling services like ngrok:
# Start ngrok tunnel (in separate terminal)
ngrok http 7000
# Get the public URL from ngrok output (e.g., https://abc123.ngrok.io)
# Set environment variable and start MCP Hub
export MCP_HUB_PUBLIC_URL="https://abc123.ngrok.io"
npm start

MCP Hub provides seamless compatibility with VS Code's .vscode/mcp.json configuration format, enabling you to use the same configuration files across both VS Code and MCP Hub.
Both mcpServers and servers keys are supported:
{
"servers": {
"github": {
"url": "https://api.githubcopilot.com/mcp/"
},
"perplexity": {
"command": "npx",
"args": ["-y", "server-perplexity-ask"],
"env": {
"API_KEY": "${env:PERPLEXITY_API_KEY}"
}
}
}
}
MCP Hub supports VS Code-style variable substitution:
- Environment Variables: `${env:VARIABLE_NAME}` or `${VARIABLE_NAME}`
- Workspace Variables: `${workspaceFolder}`, `${userHome}`, `${pathSeparator}`
- Command Execution: `${cmd: command args}`
Supported Predefined Variables:
- `${workspaceFolder}` - Directory where mcp-hub is running
- `${userHome}` - User's home directory
- `${pathSeparator}` - OS path separator (/ or \)
- `${workspaceFolderBasename}` - Just the folder name
- `${cwd}` - Alias for workspaceFolder
- `${/}` - VS Code shorthand for pathSeparator
For ${input:} variables used in VS Code configs, use the MCP_HUB_ENV environment variable:
# Set input variables globally
export MCP_HUB_ENV='{"input:api-key":"your-secret-key","input:database-url":"postgresql://..."}'
# Then use in config
{
"servers": {
"myserver": {
"env": {
"API_KEY": "${input:api-key}"
}
}
}
}
Existing .vscode/mcp.json files work directly with MCP Hub. Simply point MCP Hub to your VS Code configuration:
mcp-hub --config .vscode/mcp.json --port 3000

MCP Hub supports loading multiple configuration files that are merged in order. This enables flexible configuration management:
- Global Configuration: System-wide settings (e.g., `~/.config/mcphub/global.json`)
- Project Configuration: Project-specific settings (e.g., `./.mcphub/project.json`)
- Environment Configuration: Environment-specific overrides
When multiple config files are specified, they are merged with later files overriding earlier ones:
# Global config is loaded first, then project config overrides
mcp-hub --port 3000 --config ~/.config/mcphub/global.json --config ./.mcphub/project.jsonMerge Behavior:
mcpServerssections are merged (server definitions from later files override earlier ones)- Other top-level properties are completely replaced by later files
- Missing config files are silently skipped
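A rough sketch of this merge behavior (illustrative only, not the Hub's actual implementation; the server names are hypothetical):

```javascript
// Illustrative sketch: mcpServers entries merge per-server, other top-level keys are replaced.
function mergeConfigs(...configs) {
  return configs.reduce(
    (merged, next) => ({
      ...merged,
      ...next,
      mcpServers: { ...(merged.mcpServers ?? {}), ...(next.mcpServers ?? {}) },
    }),
    {}
  );
}

const globalConfig = {
  connectionPool: { maxConnections: 50 },
  mcpServers: { git: { command: "mcp-git" } },
};
const projectConfig = {
  mcpServers: { git: { command: "mcp-git", args: ["--project-mode"] } },
};

// The project's "git" definition overrides the global one; connectionPool is kept.
console.log(mergeConfigs(globalConfig, projectConfig));
```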
- `${ENV_VAR}` or `${env:ENV_VAR}` - Resolves environment variables
- `${cmd: command args}` - Executes commands and uses output
- `${workspaceFolder}` - Directory where mcp-hub is running
- `${userHome}` - User's home directory
- `${pathSeparator}` - OS path separator
- `${input:variable-id}` - Resolves from MCP_HUB_ENV (VS Code compatibility)
- `null` or `""` - Falls back to `process.env`
{
"mcpServers": {
"local-server": {
"command": "${MCP_BINARY_PATH}/server",
"args": [
"--token", "${API_TOKEN}",
"--database", "${DB_URL}",
"--secret", "${cmd: op read op://vault/secret}"
],
"env": {
"API_TOKEN": "${cmd: aws ssm get-parameter --name /app/token --query Parameter.Value --output text}",
"DB_URL": "postgresql://user:${DB_PASSWORD}@localhost/myapp",
"DB_PASSWORD": "${cmd: op read op://vault/db/password}",
"FALLBACK_VAR": null
},
"dev": {
"enabled": true,
"watch": ["src/**/*.js", "**/*.json"],
"cwd": "/absolute/path/to/server/directory"
}
}
}
}

{
"mcpServers": {
"remote-server": {
"url": "https://${PRIVATE_DOMAIN}/mcp",
"headers": {
"Authorization": "Bearer ${cmd: op read op://vault/api/token}",
"X-Custom-Header": "${CUSTOM_VALUE}"
}
}
}
}

{
"connectionPool": {
"maxConnections": 50,
"keepAliveTimeout": 60000
},
"mcpServers": {
"high-traffic-server": {
"url": "https://api.example.com/mcp",
"headers": {
"Authorization": "Bearer ${API_TOKEN}"
},
"connectionPool": {
"maxConnections": 100,
"maxFreeConnections": 20,
"keepAliveTimeout": 30000
}
},
"default-pool-server": {
"url": "https://another-api.com/mcp",
"headers": {
"X-API-Key": "${API_KEY}"
}
// Uses global connectionPool settings (50 connections, 60s keep-alive)
},
"disabled-pool-server": {
"url": "https://legacy-api.com/mcp",
"connectionPool": {
"enabled": false
}
// Disables connection pooling for this specific server
}
}
}

MCP Hub supports intelligent tool filtering to manage overwhelming tool counts from multiple MCP servers. With 25+ servers, you might have 3000+ tools consuming 50k+ tokens before any work begins. Tool filtering reduces this to 50-200 relevant tools, freeing 30-40k tokens for actual tasks.
Problem: Check your current token usage to see if filtering will help:
# In Claude Code or your MCP client, check context usage
# Look for "MCP tools: XXk tokens"
# If > 30k tokens → Filtering recommended
# If > 50k tokens → Filtering critical

Solution: Add minimal filtering configuration:
{
"toolFiltering": {
"enabled": true,
"mode": "server-allowlist",
"serverFilter": {
"mode": "allowlist",
"servers": ["filesystem", "github", "web-browser"]
}
}
}

Result: Typical reduction from 3000+ tools → 20-30 tools (70-85% token reduction)
MCP Hub provides four filtering strategies:
Use when: You know which specific servers you need
Configuration:
{
"toolFiltering": {
"enabled": true,
"mode": "server-allowlist",
"serverFilter": {
"mode": "allowlist",
"servers": ["filesystem", "github"]
}
}
}

Expected outcome: 10-30 tools | 70-85% token reduction
Best for: Focused workflows where you use 2-5 specific servers
Use when: You need tools by functional category (filesystem, web, search, etc.)
Configuration:
{
"toolFiltering": {
"enabled": true,
"mode": "category",
"categoryFilter": {
"categories": ["filesystem", "web", "search"]
}
}
}

Expected outcome: 20-50 tools | 60-75% token reduction
Available categories: filesystem, web, search, code, communication, data, ai, system, custom
Custom mappings:
{
"toolFiltering": {
"enabled": true,
"mode": "category",
"categoryFilter": {
"categories": ["custom", "filesystem"],
"customMappings": {
"mytool__*": "custom",
"company__*": "custom"
}
}
}
}Use when: You need server filtering AND per-server tool filtering
Configuration:
{
"toolFiltering": {
"enabled": true,
"mode": "hybrid",
"serverFilter": {
"mode": "allowlist",
"servers": ["github", "filesystem"]
},
"categoryFilter": {
"categories": ["filesystem", "web"]
}
}
}

Expected outcome: 30-80 tools | 50-70% token reduction
Best for: Power users with complex, multi-server workflows
Use when: You want dynamic, context-aware tool exposure based on user intent
How it works:
- Client starts with zero tools (or meta-tools only)
- User makes request: "Check my GitHub notifications"
- Client calls the `hub__analyze_prompt` meta-tool (see the sketch after this list)
- Hub analyzes intent using an LLM (Gemini/OpenAI/Anthropic)
- Hub exposes only relevant tools (e.g., GitHub tools)
- Client proceeds with correct tools available
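A sketch of step 3, assuming an MCP client like the one shown earlier (the argument shape is illustrative; see the Prompt-Based Filtering Guide for the exact meta-tool schema):

```javascript
// Sketch: ask the Hub to analyze a prompt so it exposes only the relevant tools.
const analysis = await client.callTool({
  name: "hub__analyze_prompt",
  arguments: { prompt: "Check my GitHub notifications" }, // argument name is illustrative
});
console.log(analysis);

// After the Hub emits tools/list_changed, refresh the tool list.
const { tools } = await client.listTools();
console.log(`Now exposed: ${tools.length} tools`);
```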
Configuration:
{
"toolFiltering": {
"enabled": true,
"mode": "prompt-based",
"promptBasedFiltering": {
"enabled": true,
"defaultExposure": "meta-only",
"sessionIsolation": true
},
"llmCategorization": {
"enabled": true,
"provider": "gemini",
"apiKey": "${GEMINI_API_KEY}",
"model": "gemini-2.5-flash"
}
}
}

Expected outcome: 5-20 tools initially → 20-50 tools after analysis | 70-90% token reduction
Best for:
- Large tool collections (25+ servers, 3000+ tools)
- Dynamic workflows with unpredictable tool needs
- Token-constrained environments
- Multi-tenant or multi-user scenarios
Requirements:
- LLM API key (Gemini, OpenAI, or Anthropic)
- MCP client that supports `tools/list_changed` notifications
Environment Variables:
# Choose one provider
export GEMINI_API_KEY="your-gemini-key" # Recommended (fast, cost-effective)
export OPENAI_API_KEY="your-openai-key" # Alternative
export ANTHROPIC_API_KEY="your-anthropic-key" # Alternative
# Optional: Enable debug logging
export DEBUG_TOOL_FILTERING=true

Supported LLM Providers:
| Provider | Model | Speed | Cost | Recommended |
|---|---|---|---|---|
| Gemini | gemini-2.5-flash | ⚡ Fast | Low | ✅ Yes |
| OpenAI | gpt-4o-mini | ⚡ Fast | Medium | |
| Anthropic | claude-3-5-haiku | ⚡ Fast | Medium | |
Testing:
# Set API key
export GEMINI_API_KEY="your-key-here"
# Start Hub
bun start
# Test with validation script
./scripts/test-analyze-prompt.sh "Check my GitHub issues"

Advanced Configuration:
OpenAI Provider:
{
"llmCategorization": {
"provider": "openai",
"apiKey": "${OPENAI_API_KEY}",
"model": "gpt-4o-mini"
}
}
Anthropic Provider:
{
"llmCategorization": {
"provider": "anthropic",
"apiKey": "${ANTHROPIC_API_KEY}",
"model": "claude-3-5-haiku-20241022"
}
}
Default Exposure Options:
{
"promptBasedFiltering": {
"defaultExposure": "zero", // Start with no tools
"defaultExposure": "meta-only", // Start with meta-tools only (recommended)
"defaultExposure": "minimal", // Start with essential tools
"defaultExposure": "all" // Start with all tools (defeats purpose)
}
}

See also: Prompt-Based Filtering Guide for complete documentation.
{
"toolFiltering": {
"enabled": true,
"mode": "server-allowlist",
"serverFilter": {
"mode": "allowlist",
"servers": ["filesystem", "playwright", "web-browser"]
}
}
}

Tools: ~15 | Use case: React/Vue development with browser testing
{
"toolFiltering": {
"enabled": true,
"mode": "category",
"categoryFilter": {
"categories": ["filesystem", "data", "search", "code"]
}
}
}

Tools: ~25 | Use case: API development with database and code search
{
"toolFiltering": {
"enabled": true,
"mode": "server-allowlist",
"serverFilter": {
"mode": "allowlist",
"servers": ["kubernetes", "docker", "filesystem", "github"]
}
}
}

Tools: ~20 | Use case: Infrastructure management and deployments
Instead of allowlist, block specific servers:
{
"toolFiltering": {
"enabled": true,
"mode": "server-allowlist",
"serverFilter": {
"mode": "denylist",
"servers": ["experimental", "debug", "test"]
}
}
}

Use when: You want most servers except a few problematic ones
Automatically enable filtering when tool count exceeds threshold:
{
"toolFiltering": {
"enabled": false,
"autoEnableThreshold": 100,
"mode": "category",
"categoryFilter": {
"categories": ["filesystem", "web", "search"]
}
}
}

Behavior: If total tool count > 100, filtering automatically activates
The LLM categorization feature now uses official OpenAI and Anthropic SDKs for production-grade reliability:
New Features:
- ✅ Automatic Retries: Transient failures (429, 5xx) automatically retried with exponential backoff
- ✅ Typed Errors: Detailed error information with `APIError`, `RateLimitError`, `ConnectionError`
- ✅ Request Tracking: Every API call tracked with `request_id` for debugging
- ✅ Better Observability: Enhanced logging with error context and retry information
Configuration (unchanged):
{
"toolFiltering": {
"enabled": true,
"mode": "category",
"categoryFilter": {
"categories": ["filesystem", "web", "search"]
},
"llmCategorization": {
"enabled": true,
"provider": "openai",
"apiKey": "${env:OPENAI_API_KEY}",
"model": "gpt-4o-mini"
}
}
}

Error Handling Examples:
// Automatic retry on transient failures
// 429 Rate Limit → SDK retries with backoff
// 500 Server Error → SDK retries up to 3 times
// Connection timeout → SDK retries
// Detailed error logging
// ✅ Request ID: req_abc123
// ✅ Error Type: RateLimitError
// ✅ Retry After: 60 seconds
// ✅ Status Code: 429

Observability:
Check LLM performance in stats API:
curl http://localhost:37373/api/filtering/stats

Response includes LLM metrics:
{
"llm": {
"cacheHits": 150,
"cacheMisses": 10,
"errorsByType": {
"RateLimitError": 2,
"APIError": 1
},
"totalRetries": 5
}
}

Benefits: 10-20% accuracy improvement for edge cases
Cost: ~$0.01 per 100 tools (cached after first categorization)
Reliability: Automatic retry handles 80%+ of transient failures
The LLM categorization queue includes production-ready reliability features to handle API failures gracefully:
Built-in Resilience:
- ✅ Automatic Retries: Transient failures (429, 503, timeout, network) automatically retried up to 3 times (configurable)
- ✅ Exponential Backoff: Delay increases as 1s → 2s → 4s → 8s → 16s → 30s max, with jitter to prevent thundering herd
- ✅ Circuit Breaker: Detects persistent API failures and switches to heuristic fallback after 5 consecutive failures
- ✅ Queue Monitoring: Real-time tracking of queue depth, latency percentiles (p95, p99), and success rates
- ✅ Graceful Degradation: Always falls back to pattern-based heuristics if LLM unavailable
Configuration (optional - defaults work well):
{
"toolFiltering": {
"llmCategorization": {
"enabled": true,
"provider": "openai",
"apiKey": "${env:OPENAI_API_KEY}",
// Retry and Backoff Configuration
"retryCount": 3, // Max retry attempts on transient errors
"backoffBase": 1000, // Initial backoff delay (ms)
"maxBackoff": 30000, // Maximum backoff delay (ms)
// Circuit Breaker Configuration
"circuitBreakerThreshold": 5, // Failures before circuit opens
"circuitBreakerTimeout": 30000 // Time before half-open retry (ms)
}
}
}

Queue Health Monitoring:
Check queue reliability metrics in stats API:
curl http://localhost:3000/api/filtering/stats | jq '.llm'

Example response:
{
"llm": {
"enabled": true,
"queueDepth": 2,
"totalCalls": 150,
"successfulCalls": 145,
"failedCalls": 5,
"averageLatency": 245,
"p95Latency": 1200,
"p99Latency": 2100,
"timeouts": 1,
"totalRetries": 8,
"fallbacksUsed": 0,
"circuitBreakerTrips": 0,
"circuitBreakerState": "closed",
"circuitBreakerFailures": 0,
"successRate": 0.967
}
}

Metrics Explanation:
- `successRate`: Percentage of successful LLM calls (0.967 = 96.7%)
- `totalRetries`: API calls that required retry after initial failure
- `fallbacksUsed`: Times fallback to heuristics was invoked
- `p95Latency`: 95th percentile response time (95% of calls are faster)
- `circuitBreakerState`: `closed` (normal), `open` (failing, using fallback), `half-open` (recovering)
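For example, a small monitoring sketch built on these fields (port and thresholds are arbitrary):

```javascript
// Sketch: poll the filtering stats endpoint and warn on degraded LLM categorization.
async function checkLlmHealth(baseUrl = "http://localhost:3000") {
  const response = await fetch(`${baseUrl}/api/filtering/stats`);
  const { llm } = await response.json();

  if (llm.circuitBreakerState !== "closed") {
    console.warn(`Circuit breaker is ${llm.circuitBreakerState}; heuristic fallback in use`);
  }
  if (llm.successRate < 0.9) {
    console.warn(`LLM success rate low: ${(llm.successRate * 100).toFixed(1)}%`);
  }
  console.log(`Queue depth ${llm.queueDepth}, p95 latency ${llm.p95Latency}ms`);
}

setInterval(checkLlmHealth, 60_000); // check once a minute
```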
Common Scenarios:
- High rate limit errors: Increase `backoffBase` and `circuitBreakerTimeout`
  `{ "backoffBase": 2000, "circuitBreakerTimeout": 60000 }`
- Frequent timeouts: Extend retry count and max backoff
  `{ "retryCount": 5, "maxBackoff": 60000 }`
- Graceful degradation: Reduce circuit breaker threshold to fail fast
  `{ "circuitBreakerThreshold": 3 }`
Check filtering effectiveness via REST API:
# Get filtering statistics
curl http://localhost:3000/api/filtering/stats | jq

Example response:
{
"enabled": true,
"mode": "server-allowlist",
"toolsEvaluated": 3469,
"toolsIncluded": 89,
"toolsFiltered": 3380,
"filterRate": 97.4,
"serversTotal": 25,
"serversActive": 3
}

Diagnostic:
# 1. Verify config loaded
cat mcp.json | grep -A 10 "toolFiltering"
# 2. Check server names match exactly
npm start 2>&1 | grep "Connected to server"
# 3. Restart MCP Hub
npm restart

Solution:
- Server names in `serverFilter.servers` must match exact names from the `mcpServers` config
- Always restart after configuration changes
- Check logs for "Tool filtering initialized" message
Root cause: Tool count still > 30 (LLM threshold for reliable selection)
Diagnostic:
# Check current tool count via filtering stats
curl http://localhost:3000/api/filtering/stats | jq '.toolsIncluded'

Solution:
- If > 30 tools: Switch to more restrictive mode (server-allowlist)
- If using category mode: Reduce number of categories
- If using hybrid mode: Add per-server tool patterns
Root cause: Filters too restrictive, blocking everything
Diagnostic:
# Check if any tools included
curl http://localhost:3000/api/filtering/stats | jq '.toolsIncluded'
# If 0: Your filters blocked all tools

Solution:
- Start with minimal config (1-2 servers in allowlist)
- Add servers incrementally
- Verify server names with: `cat mcp.json | jq '.mcpServers | keys'`
- Start Simple: Begin with server-allowlist mode, 2-3 servers
- Monitor Impact: Check token reduction via stats API
- Iterate: Add servers/categories incrementally
- Target 15-25 tools: Optimal range for LLM reliability
- Test Workflows: Verify your common tasks still work after filtering
- Document Config: Comment your filtering choices for team members
- Filtering overhead: < 10ms per tool check
- Memory usage: Negligible (~1MB for cache)
- Token reduction: 60-85% typical
- Context freed: 30-50k tokens for actual work
Phase 1: Baseline (No changes)
# Document current state
curl http://localhost:3000/api/servers | jq '.servers[] | .name'
# Note your most-used servers

Phase 2: Experiment (Reversible)
# Backup config
cp mcp.json mcp.json.backup
# Add minimal filtering
# (server-allowlist with 1-2 servers)
# Test and monitor
curl http://localhost:3000/api/filtering/stats | jq

Phase 3: Optimize (Iterative)
# Add servers incrementally
# Monitor token usage after each addition
# Adjust mode if needed

Rollback: `mv mcp.json.backup mcp.json && npm restart`
MCP Hub supports both STDIO servers and remote servers (streamable-http/SSE). The server type is automatically detected from the configuration. All fields support the universal ${} placeholder syntax.
For running script-based MCP servers locally:
- command: Command to start the MCP server executable (supports `${VARIABLE}` and `${cmd: command}` placeholders)
- args: Array of command-line arguments (supports `${VARIABLE}` and `${cmd: command}` placeholders)
- env: Environment variables with placeholder resolution and system fallback
- cwd: Working directory used when spawning the MCP server process
- dev: Development mode configuration (optional)
  - enabled: Enable/disable dev mode (default: true)
  - watch: Array of glob patterns to watch for changes (default: `["**/*.js", "**/*.ts", "**/*.json"]`)
  - cwd: Required absolute path to the server's working directory for file watching
MCP Hub will look for the environment variable MCP_HUB_ENV (a JSON string) in its own process environment. If set, all key-value pairs from this variable will be injected into the environment of every managed MCP server (both stdio and remote). This is useful for passing secrets, tokens, or other shared configuration to all servers without repeating them in each server config.
- Server-specific `env` fields always override values from `MCP_HUB_ENV`.
- Example usage: `MCP_HUB_ENV='{"DBUS_SESSION_BUS_ADDRESS":"/run/user/1000/bus","MY_TOKEN":"abc"}' mcp-hub --port 3000 --config path/to/config.json`
For connecting to remote MCP servers:
- url: Server endpoint URL (supports `${VARIABLE}` and `${cmd: command}` placeholders)
- headers: Authentication headers (supports `${VARIABLE}` and `${cmd: command}` placeholders)
- connectionPool: HTTP connection pool configuration (optional, applies to SSE and streamable-http transports)
  - enabled: Enable connection pooling (default: `true`)
  - keepAliveTimeout: Keep-alive timeout in milliseconds (default: `60000` - 60 seconds)
  - keepAliveMaxTimeout: Maximum socket lifetime in milliseconds (default: `600000` - 10 minutes)
  - maxConnections: Maximum connections per host (default: `50`)
  - maxFreeConnections: Maximum idle connections per host (default: `10`)
  - timeout: Socket timeout in milliseconds (default: `30000` - 30 seconds)
  - pipelining: Number of pipelined requests (default: `0` - disabled for MCP request-response pattern)
HTTP Connection Pooling Benefits:
- Reduces TLS handshake overhead through persistent connections
- Improves latency by 10-30% for remote MCP servers
- Optimizes resource usage with configurable connection limits
- Automatic connection reuse with undici Agent
- Can be configured globally or per-server with precedence rules
The server type is determined by:
- STDIO server → has a `command` field
- Remote server → has a `url` field
Note: A server configuration cannot mix STDIO and remote server fields.
- Commands First: `${cmd: command args}` placeholders are executed first
- Environment Variables: `${VAR}` placeholders are resolved from the `env` object, then `process.env`
- Fallback: `null` or `""` values fall back to `process.env`
- Multi-pass: Dependencies between variables are resolved automatically
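An illustrative sketch of that resolution order (simplified; not the Hub's actual resolver):

```javascript
// Illustrative sketch of the resolution order described above.
import { execSync } from "node:child_process";

function resolvePlaceholders(value, env) {
  // 1. Commands first: ${cmd: ...} is executed and replaced with its trimmed output.
  value = value.replace(/\$\{cmd:\s*([^}]+)\}/g, (_, cmd) => execSync(cmd).toString().trim());
  // 2. Variables: ${VAR} / ${env:VAR} resolved from the server's env block, then process.env.
  return value.replace(/\$\{(?:env:)?([A-Za-z_][A-Za-z0-9_]*)\}/g,
    (_, name) => env[name] ?? process.env[name] ?? "");
}

// Multi-pass: keep resolving until nothing changes, so variables can reference each other.
function resolveDeep(value, env, maxPasses = 5) {
  for (let pass = 0; pass < maxPasses; pass++) {
    const next = resolvePlaceholders(value, env);
    if (next === value) return value;
    value = next;
  }
  return value;
}
```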
MCP Hub is developing a React-based web UI for managing servers, configuring settings, and monitoring real-time status.
Server Management (In Progress)
- View all connected servers with real-time status
- Start, stop, enable, and disable servers on demand
- Monitor server health and connection state
- View tool/resource counts per server
Configuration Editing (Planned)
- Visual JSON configuration editor with syntax highlighting
- Side-by-side diff preview before applying changes
- Destructive change warnings for removed servers
- Version tracking and validation
Dashboard & Monitoring (Planned)
- Real-time connection statistics
- Tool filtering status and impact metrics
- Server uptime tracking
- Active client connections
Tool Management (Planned)
- Browse available tools from all servers
- Search and filter tools by server/name
- View tool schemas and descriptions
The UI will be available at http://localhost:7000 when completed. Currently, all features are accessible via the REST API documented below.
coming...
Just add it to your NixOS flake.nix or home-manager:
inputs = {
mcp-hub.url = "github:ollieb89/mcp_hub";
...
}

To integrate mcp-hub into your NixOS/Home Manager configuration, add the following to environment.systemPackages or home.packages respectively:
inputs.mcp-hub.packages."${system}".default

If you want to use mcphub.nvim without having the mcp-hub server in your PATH, you can link the server under the hood by adding the mcp-hub nix store path to the cmd command in the plugin config.
Nixvim example:
{ mcphub-nvim, mcp-hub, ... }:
{
extraPlugins = [mcphub-nvim];
extraConfigLua = ''
require("mcphub").setup({
port = 3000,
config = vim.fn.expand("~/mcp-hub/mcp-servers.json"),
cmd = "${mcp-hub}/bin/mcp-hub"
})
'';
}
# where
{
# For nixpkgs (not available yet)
mcp-hub = pkgs.mcp-hub;
# For flakes
mcp-hub = inputs.mcp-hub.packages."${system}".default;
}

This MCP Hub instance is currently running with the following connected servers:
| Server | Tools | Capabilities |
|---|---|---|
| shadcn-ui | 7 | UI component library with v4 blocks and components |
| gemini | 6 | AI analysis, brainstorming, and structured change mode |
| notion | 19 | Note-taking, databases, and document management |
| memory | 9 | Persistent knowledge graphs across sessions |
| time | 2 | Timezone operations and time conversion |
| sequential-thinking | 1 | Dynamic problem-solving with structured thoughts |
| fetch | 1 | Internet access and web content retrieval |
| git | 12 | Complete version control operations |
| pinecone | 9 | Vector search and document reranking |
| vercel | 11 | Web deployment and project management |
| hf-transformers | 9 | Hugging Face ML models and datasets |
| playwright | 21 | Browser automation and web testing |
Total: 108+ tools providing comprehensive development capabilities
The mcphub.nvim plugin provides seamless Neovim integration, allowing direct interaction with MCP Hub from your editor:
- Execute MCP tools directly from Neovim
- Access MCP resources within your editing workflow
- Real-time status updates in Neovim
- Auto-install MCP servers through the marketplace
MCP Hub can be used to monitor ML training jobs through custom MCP servers. See examples/training-monitor for a complete example demonstrating:
- Real-time Training Status: Check training job progress, metrics, and logs
- Multi-Job Monitoring: Track multiple training runs simultaneously
- Custom Tools: Use the `run_training_monitor` tool to query training status
- Framework Integration: Extend to work with TensorBoard, Weights & Biases, or custom training frameworks
Example configuration:
{
"mcpServers": {
"pico-training-monitor": {
"command": "python",
"args": ["/path/to/pico_training_monitor.py"],
"env": {
"TRAINING_LOG_DIR": "${workspaceFolder}/training_logs"
}
}
}
}

Use the tool through any MCP client:
// Check all training jobs
await client.callTool('pico-training-monitor', 'run_training_monitor', {});
// Check specific job
await client.callTool('pico-training-monitor', 'run_training_monitor', {
job_id: "experiment-123"
});

GET /api/health

The health endpoint provides comprehensive status information including:
- Current hub state (starting, ready, restarting, restarted, stopping, stopped, error)
- Connected server statuses and capabilities
- Active SSE connection details
- Detailed connection metrics
- Error state details if applicable
Response:
{
"status": "ok",
"state": "ready",
"server_id": "mcp-hub",
"version": "4.1.1",
"activeClients": 2,
"timestamp": "2024-02-20T05:55:00.000Z",
"servers": [],
"connections": {
"totalConnections": 2,
"connections": [
{
"id": "client-uuid",
"state": "connected",
"connectedAt": "2024-02-20T05:50:00.000Z",
"lastEventAt": "2024-02-20T05:55:00.000Z"
}
]
},
"workspaces": {
"current": "40123",
"allActive": {
"40123": {
"cwd": "/path/to/project-a",
"config_files": ["/home/user/.config/mcphub/global.json", "/path/to/project-a/.mcphub/project.json"],
"pid": 12345,
"port": 40123,
"startTime": "2025-01-17T10:00:00.000Z",
"state": "active",
"activeConnections": 2,
"shutdownStartedAt": null,
"shutdownDelay": null
}
}
}
}

GET /api/servers

POST /api/servers/info
Content-Type: application/json
{
"server_name": "example-server"
}

POST /api/servers/refresh
Content-Type: application/json
{
"server_name": "example-server"
}

Response:
{
"status": "ok",
"server": {
"name": "example-server",
"capabilities": {
"tools": ["tool1", "tool2"],
"resources": ["resource1", "resource2"],
"resourceTemplates": []
}
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

POST /api/refresh

Response:
{
"status": "ok",
"servers": [
{
"name": "example-server",
"capabilities": {
"tools": ["tool1", "tool2"],
"resources": ["resource1", "resource2"],
"resourceTemplates": []
}
}
],
"timestamp": "2024-02-20T05:55:00.000Z"
}

POST /api/servers/start
Content-Type: application/json
{
"server_name": "example-server"
}

Response:
{
"status": "ok",
"server": {
"name": "example-server",
"status": "connected",
"uptime": 123
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

POST /api/servers/stop?disable=true|false
Content-Type: application/json
{
"server_name": "example-server"
}
The optional disable query parameter can be set to true to disable the server in the configuration.
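For example, stopping a server (disabling it in the configuration) and starting it again from a script, using the endpoints documented here (port assumed from the startup examples):

```javascript
// Sketch: manage a server through the REST API.
const base = "http://localhost:3000";

await fetch(`${base}/api/servers/stop?disable=true`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ server_name: "example-server" }),
});

await fetch(`${base}/api/servers/start`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ server_name: "example-server" }),
});
```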
Response:
{
"status": "ok",
"server": {
"name": "example-server",
"status": "disconnected",
"uptime": 0
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

GET /api/workspaces

Response:
{
"workspaces": {
"40123": {
"cwd": "/path/to/project-a",
"config_files": ["/home/user/.config/mcphub/global.json", "/path/to/project-a/.mcphub/project.json"],
"pid": 12345,
"port": 40123,
"startTime": "2025-01-17T10:00:00.000Z",
"state": "active",
"activeConnections": 2,
"shutdownStartedAt": null,
"shutdownDelay": null
},
"40567": {
"cwd": "/path/to/project-b",
"config_files": ["/home/user/.config/mcphub/global.json"],
"pid": 54321,
"port": 40567,
"startTime": "2025-01-17T10:05:00.000Z",
"state": "shutting_down",
"activeConnections": 0,
"shutdownStartedAt": "2025-01-17T10:15:00.000Z",
"shutdownDelay": 600000
}
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

GET /api/marketplace

Query Parameters:
- `search`: Filter by name, description, or tags
- `category`: Filter by category
- `tags`: Filter by comma-separated tags
- `sort`: Sort by "newest", "stars", or "name"
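A quick sketch of querying the marketplace with these parameters (values are illustrative):

```javascript
// Sketch: search the marketplace for search-related servers, sorted by stars.
const params = new URLSearchParams({ search: "search", category: "search", sort: "stars" });
const response = await fetch(`http://localhost:3000/api/marketplace?${params}`);
const { servers } = await response.json();
console.log(servers.map((server) => `${server.name} (${server.stars} stars)`));
```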
Response:
{
"servers": [
{
"id": "example-server",
"name": "Example Server",
"description": "Description here",
"author": "example-author",
"url": "https://example.com/mcp-server",
"category": "search",
"tags": ["search", "ai"],
"stars": 100,
"featured": true,
"verified": true,
"lastCommit": 1751257963,
"updatedAt": 1751265038
}
],
"timestamp": "2024-02-20T05:55:00.000Z"
}

POST /api/marketplace/details
Content-Type: application/json
{
"mcpId": "example-server"
}

Response:
{
"server": {
"id": "example-server",
"name": "Example Server",
"description": "Description here",
"author": "example-author",
"url": "https://example.com/mcp-server",
"category": "search",
"tags": ["search", "ai"],
"installations": [],
"stars": 100,
"featured": true,
"verified": true,
"lastCommit": 1751257963,
"updatedAt": 1751265038
},
"readmeContent": "# Server Documentation...",
"timestamp": "2024-02-20T05:55:00.000Z"
}

POST /api/servers/tools
Content-Type: application/json
{
"server_name": "example-server",
"tool": "tool_name",
"arguments": {},
"request_options" : {}
}

POST /api/servers/resources
Content-Type: application/json
{
"server_name": "example-server",
"uri": "resource://uri",
"request_options" : {}
}

POST /api/servers/prompts
Content-Type: application/json
{
"server_name": "example-server",
"prompt": "prompt_name",
"arguments": {},
"request_options" : {}
}

Response:
{
"result": {
"messages": [
{
"role": "assistant",
"content": {
"type": "text",
"text": "Text response example"
}
},
{
"role": "assistant",
"content": {
"type": "image",
"data": "base64_encoded_image_data",
"mimeType": "image/png"
}
}
]
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

POST /api/restart

Reloads the configuration file and restarts all MCP servers.
Response:
{
"status": "ok",
"timestamp": "2024-02-20T05:55:00.000Z"
}

MCP Hub implements a comprehensive real-time events system using Server-Sent Events (SSE) at /api/events. This endpoint provides live updates about server status, configuration changes, capability updates, and more.
The hub server transitions through several states during its lifecycle:
| State | Description |
|---|---|
| `starting` | Initial startup, loading configuration |
| `ready` | Server is running and ready to handle requests |
| `restarting` | Reloading configuration/reconnecting servers |
| `restarted` | Configuration reload complete |
| `stopping` | Graceful shutdown in progress |
| `stopped` | Server has fully stopped |
| `error` | Error state (includes error details) |
You can monitor these states through the /health endpoint or SSE events.
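For example, a small sketch that waits for the Hub to reach the `ready` state before issuing requests (port assumed from the startup examples):

```javascript
// Sketch: poll /api/health until the hub reports state "ready".
async function waitForReady(baseUrl = "http://localhost:3000", timeoutMs = 30_000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const state = await fetch(`${baseUrl}/api/health`)
      .then((response) => response.json())
      .then((body) => body.state)
      .catch(() => null); // hub not reachable yet
    if (state === "ready") return;
    await new Promise((resolve) => setTimeout(resolve, 500));
  }
  throw new Error("Timed out waiting for MCP Hub to become ready");
}
```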
MCP Hub emits several types of events:
- heartbeat - Periodic connection health check
{
"connections": 2,
"timestamp": "2024-02-20T05:55:00.000Z"
}
- hub_state - Hub server state changes
{
"state": "ready",
"server_id": "mcp-hub",
"version": "1.0.0",
"pid": 12345,
"port": 3000,
"timestamp": "2024-02-20T05:55:00.000Z"
}
- log - Server log messages
{
"type": "info",
"message": "Server started",
"data": {},
"timestamp": "2024-02-20T05:55:00.000Z"
}
- config_changed - Configuration file changes detected
{
"type": "config_changed",
"newConfig": {},
"isSignificant": true,
"timestamp": "2024-02-20T05:55:00.000Z"
}
- servers_updating - Server updates in progress
{
"type": "servers_updating",
"changes": {
"added": ["server1"],
"removed": [],
"modified": ["server2"],
"unchanged": ["server3"]
},
"timestamp": "2024-02-20T05:55:00.000Z"
}
- servers_updated - Server updates completed
{
"type": "servers_updated",
"changes": {
"added": ["server1"],
"removed": [],
"modified": ["server2"],
"unchanged": ["server3"]
},
"timestamp": "2024-02-20T05:55:00.000Z"
}
- tool_list_changed - Server's tools list updated
{
"type": "tool_list_changed",
"server": "example-server",
"tools": ["tool1", "tool2"],
"timestamp": "2024-02-20T05:55:00.000Z"
}
- resource_list_changed - Server's resources/templates updated
{
"type": "resource_list_changed",
"server": "example-server",
"resources": ["resource1", "resource2"],
"resourceTemplates": [],
"timestamp": "2024-02-20T05:55:00.000Z"
}
- prompt_list_changed - Server's prompts list updated
{
"type": "prompt_list_changed",
"server": "example-server",
"prompts": ["prompt1", "prompt2"],
"timestamp": "2024-02-20T05:55:00.000Z"
}
- workspaces_updated - Active workspaces changed
{
"type": "workspaces_updated",
"workspaces": {
"40123": {
"cwd": "/path/to/project-a",
"config_files": ["/home/user/.config/mcphub/global.json", "/path/to/project-a/.mcphub/project.json"],
"pid": 12345,
"port": 40123,
"startTime": "2025-01-17T10:00:00.000Z",
"state": "active",
"activeConnections": 2,
"shutdownStartedAt": null,
"shutdownDelay": null
}
},
"timestamp": "2024-02-20T05:55:00.000Z"
}
- Each SSE connection is assigned a unique ID
- Connections are automatically cleaned up on client disconnect
- Connection statistics available via the `/health` endpoint
- Optional auto-shutdown when no clients are connected
MCP Hub implements intelligent event batching to reduce SSE traffic and improve client-side processing efficiency. By default, capability change events (tools, resources, prompts) are batched within a configurable time window, reducing network overhead by 30-50% during high-change scenarios (e.g., hub restart, multiple server updates).
Time-Based Batching: Events are collected in batches and flushed after a configurable window (default: 100ms)
Size-Based Batching: Batches are automatically flushed when reaching a size limit (default: 50 events)
Critical Event Bypass: Critical events (hub_state, error) bypass batching for immediate delivery
Deduplication: Duplicate events from the same server within a batch are automatically deduplicated
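An illustrative sketch of this batching behavior (heavily simplified; not the Hub's actual implementation):

```javascript
// Simplified sketch of time/size-based batching with per-server deduplication.
class EventBatcher {
  constructor(emit, { batchWindow = 100, maxBatchSize = 50 } = {}) {
    this.emit = emit; // (eventType, events, reason) => void
    this.batchWindow = batchWindow;
    this.maxBatchSize = maxBatchSize;
    this.pending = new Map(); // "<type>:<server>" -> latest event (deduplication)
    this.timer = null;
  }

  add(type, event) {
    if (type === "hub_state" || type === "error") {
      this.emit(type, [event], "critical"); // critical events bypass batching
      return;
    }
    this.pending.set(`${type}:${event.server}`, { type, ...event });
    if (this.pending.size >= this.maxBatchSize) return this.flush("size_limit");
    if (!this.timer) this.timer = setTimeout(() => this.flush("time_window"), this.batchWindow);
  }

  flush(reason) {
    clearTimeout(this.timer);
    this.timer = null;
    const byType = new Map();
    for (const event of this.pending.values()) {
      if (!byType.has(event.type)) byType.set(event.type, []);
      byType.get(event.type).push(event);
    }
    this.pending.clear();
    for (const [type, events] of byType) this.emit(`${type}_batch`, events, reason);
  }
}
```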
Event batching is enabled by default and can be configured globally in the server startup options:
{
"sse": {
"batching": {
"enabled": true,
"batchWindow": 100, // Time window in milliseconds
"maxBatchSize": 50 // Maximum events per batch
}
}
}
To disable batching:
{
"sse": {
"batching": {
"enabled": false
}
}
}

When batching is enabled, clients receive batch events with a `_batch` suffix on the event type:
// Batched event format
{
"type": "tool_list_changed_batch",
"batchSize": 3,
"events": [
{ "server": "server1", "tools": [...], "timestamp": 1698765432100 },
{ "server": "server2", "tools": [...], "timestamp": 1698765432150 },
{ "server": "server3", "tools": [...], "timestamp": 1698765432180 }
],
"reason": "time_window", // or "size_limit", "critical", "manual"
"timestamp": 1698765432200
}

Clients should handle both batched and non-batched events for backward compatibility:
// Handle batched events
eventSource.addEventListener('tool_list_changed_batch', (event) => {
const { events, batchSize } = JSON.parse(event.data);
// Process batch of tool changes
events.forEach(({ server, tools, timestamp }) => {
updateToolsForServer(server, tools);
});
console.log(`Processed batch of ${batchSize} tool changes`);
});
// Backward compatibility: still support non-batched events
eventSource.addEventListener('tool_list_changed', (event) => {
const { server, tools } = JSON.parse(event.data);
updateToolsForServer(server, tools);
});

Expected Benefits:
- SSE Traffic: 30-50% reduction during hub restart and multi-server updates
- Network Overhead: Fewer HTTP/2 frames and reduced header overhead
- Client Processing: Enables efficient batch DOM updates
- Latency: Maximum +100ms latency (configurable trade-off)
When Batching Helps Most:
- Hub startup/restart with many servers
- Multiple servers updating capabilities simultaneously
- High-frequency capability changes
- Rapid configuration reloads
When to Disable Batching:
- Ultra-low latency requirements (real-time critical systems)
- Single server deployments with infrequent changes
- Debugging and development scenarios requiring immediate event visibility
MCP Hub uses structured JSON logging for all events. Logs are written to both console and file following XDG Base Directory Specification:
- XDG compliant: `$XDG_STATE_HOME/mcp-hub/logs/mcp-hub.log` (typically `~/.local/state/mcp-hub/logs/mcp-hub.log`)
- Legacy fallback: `~/.mcp-hub/logs/mcp-hub.log` (for backward compatibility)
Example log entry:
{
"type": "error",
"code": "TOOL_ERROR",
"message": "Failed to execute tool",
"data": {
"server": "example-server",
"tool": "example-tool",
"error": "Invalid parameters"
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

Log levels include:
- `info`: Normal operational messages
- `warn`: Warning conditions
- `debug`: Detailed debug information (includes configuration changes)
- `error`: Error conditions (includes error code and stack trace)
Logs are rotated daily and kept for 30 days by default.
MCP Hub maintains a global workspace cache to track active instances across different working directories with real-time lifecycle management:
- Cache Location: `$XDG_STATE_HOME/mcp-hub/workspaces.json` (typically `~/.local/state/mcp-hub/workspaces.json`)
- Purpose: Prevents port conflicts, enables workspace discovery, and provides real-time lifecycle tracking
- Content: Maps port numbers (as keys) to hub process information with detailed lifecycle state
- Cleanup: Automatically removes stale entries when processes are no longer running
{
"40123": {
"cwd": "/path/to/project-a",
"config_files": ["/home/user/.config/mcphub/global.json", "/path/to/project-a/.mcphub/project.json"],
"pid": 12345,
"port": 40123,
"startTime": "2025-01-17T10:00:00.000Z",
"state": "active",
"activeConnections": 2,
"shutdownStartedAt": null,
"shutdownDelay": null
},
"40567": {
"cwd": "/path/to/project-b",
"config_files": ["/home/user/.config/mcphub/global.json"],
"pid": 54321,
"port": 40567,
"startTime": "2025-01-17T10:05:00.000Z",
"state": "shutting_down",
"activeConnections": 0,
"shutdownStartedAt": "2025-01-17T10:15:00.000Z",
"shutdownDelay": 600000
}
}

MCP Hub implements a comprehensive error handling system with custom error classes for different types of errors:
- ConfigError: Configuration-related errors (invalid config, missing fields)
- ConnectionError: Server connection issues (failed connections, transport errors)
- ServerError: Server startup/initialization problems
- ToolError: Tool execution failures
- ResourceError: Resource access issues
- ValidationError: Request validation errors
Each error includes:
- Error code for easy identification
- Detailed error message
- Additional context in the details object
- Stack trace for debugging
Example error structure:
{
"code": "CONNECTION_ERROR",
"message": "Failed to communicate with server",
"details": {
"server": "example-server",
"error": "connection timeout"
},
"timestamp": "2024-02-20T05:55:00.000Z"
}

sequenceDiagram
participant C as Client
participant H as Hub Server
participant M1 as MCP Server 1
participant M2 as MCP Server 2
Note over H: Server Start (state: starting)
activate H
Note over H: Config Loading
H->>H: Load & Validate Config
H->>H: Watch Config File
H->>H: Initialize SSE Manager
Note over H: Server Connections (state: ready)
H->>+M1: Connect
M1-->>-H: Connected + Capabilities
H->>+M2: Connect
M2-->>-H: Connected + Capabilities
H-->>C: hub_state (ready)
Note over C,H: Client Setup
C->>H: Connect to /api/events (SSE)
H-->>C: connection_opened
Note over C,H: Client Operations
C->>H: Execute Tool (HTTP)
H->>M1: Execute Tool
M1-->>H: Tool Result
H-->>C: HTTP Response
Note over H,C: Real-time Updates
H->>H: Detect Config Change
H-->>C: servers_updating (SSE)
H->>M1: Reconnect with New Config
M1-->>H: Updated Capabilities
H-->>C: servers_updated (SSE)
Note over H,C: Server Events
M2->>H: Tool List Changed
H-->>C: tool_list_changed (SSE)
Note over H: Shutdown Process
Note over C,H: Client Disconnects
H-->>C: hub_state (stopping) (SSE)
H->>M1: Disconnect
H->>M2: Disconnect
H-->>C: hub_state (stopped) (SSE)
deactivate H
The Hub Server coordinates communication between clients and MCP servers:
- Starts and connects to configured MCP servers
- Handles SSE client connections and events
- Routes tool and resource requests to appropriate servers
- Monitors server health and maintains capabilities
- Manages graceful startup/shutdown processes
flowchart TB
A[Hub Server Start] --> B{Config Available?}
B -->|Yes| C[Load Server Configs]
B -->|No| D[Use Default Settings]
C --> E[Initialize Connections]
D --> E
E --> F{For Each MCP Server}
F -->|Enabled| G[Attempt Connection]
F -->|Disabled| H[Skip Server]
G --> I{Connection Status}
I -->|Success| J[Fetch Capabilities]
I -->|Failure| K[Log Error]
J --> L[Store Server Info]
K --> M[Mark Server Unavailable]
L --> N[Monitor Health]
M --> N
N --> O{Health Check}
O -->|Healthy| P[Update Capabilities]
O -->|Unhealthy| Q[Attempt Reconnect]
Q -->|Success| P
Q -->|Failure| R[Update Status]
P --> N
R --> N
The Hub Server actively manages MCP servers through:
- Configuration-based server initialization
- Connection and capability discovery
- Health monitoring and status tracking
- Automatic reconnection attempts
- Server state management
sequenceDiagram
participant C as Client
participant H as Hub Server
participant M as MCP Server
Note over C,H: Tool Execution
C->>H: POST /api/servers/tools (HTTP)
H->>H: Validate Request & Server
alt Server Not Connected
H-->>C: 503 Server Unavailable (HTTP)
else Server Connected
H->>M: Execute Tool
alt Success
M-->>H: Tool Result
H-->>C: Result Response (HTTP)
else Error
M-->>H: Error Details
H-->>C: Error Response (HTTP)
H-->>C: log (SSE Event)
end
end
Note over C,H: Resource Access
C->>H: POST /api/servers/resources (HTTP)
H->>H: Validate URI & Template
alt Invalid Resource
H-->>C: 404 Not Found (HTTP)
else Server Not Connected
H-->>C: 503 Unavailable (HTTP)
else Valid Request
H->>M: Request Resource
alt Success
M-->>H: Resource Data
H-->>C: Resource Content (HTTP)
else Error
M-->>H: Error Details
H-->>C: Error Response (HTTP)
H-->>C: log (SSE Event)
end
end
Note over C,H: Prompt Execution
C->>H: POST /api/servers/prompts (HTTP)
H->>H: Validate Prompt & Args
alt Invalid Prompt
H-->>C: 404 Not Found (HTTP)
else Server Not Connected
H-->>C: 503 Unavailable (HTTP)
else Valid Request
H->>M: Execute Prompt
alt Success
M-->>H: Messages Array
H-->>C: Messages Response (HTTP)
else Error
M-->>H: Error Details
H-->>C: Error Response (HTTP)
H-->>C: log (SSE Event)
end
end
All client requests follow a standardized flow:
- Request validation
- Server status verification
- Request routing to appropriate MCP server
- Response handling and error management
- Node.js >= 18.0.0
MCP Hub maintains high code quality standards through comprehensive testing, documentation, and continuous improvement.
- Test Coverage: Strategic coverage across core modules (backend stable, UI in development)
- ESLint Compliance: 96%+ pass rate (1 intentional nested try-catch for transport fallback)
- JSDoc Documentation: 100% coverage for public APIs
- Code Style: Standardized across entire codebase
- Memory Leaks: Zero detected in production
- Core Stability: Production-tested with zero critical bugs in core functionality
- Test-Driven Development: All critical changes include tests
- Comprehensive Error Handling: Graceful degradation with detailed logging
- Resource Cleanup: Idempotent cleanup patterns prevent leaks
- Event Management: Proper handler lifecycle management
- Function Decomposition: Large functions broken into focused, testable units
- Centralized Constants: Single source of truth for configuration values
- Structured JSON Logging: XDG-compliant log files with rotation
- Memory Safety: Comprehensive null checks and defensive programming
- Promise Handling: `Promise.allSettled` ensures all servers start independently
- Event-Driven Architecture: Clean separation of concerns with EventEmitter pattern
- Error Classes: Custom error types for different failure scenarios
MCP Hub employs a strategic two-tier coverage approach:
- Critical Components: 70-80%+ coverage (MCPConnection, MCPHub, core utilities)
- Global Baseline: 50-70% (infrastructure files require integration tests)
- Current Metrics: 530+ backend tests across 23 test files, strategic branch coverage on core modules
Resource-Efficient (Default - Recommended for CI/CD):
npm test # Sequential execution (~50-100MB memory, 30-60s)
npm run test:seq # Explicit sequential mode
npm run test:quality # Sequential + coverage for quality gates

Fast Mode (When Resources Available):
npm run test:fast # Parallel execution (~200-300MB memory, 10-20s)

Development:
npm run test:watch # Watch mode with sequential execution
npm run test:coverage # Generate coverage report (sequential)
npm run test:coverage:ui # Open HTML coverage report

Note: Tests run sequentially by default to minimize system resource usage. This is ideal for CI/CD, resource-constrained systems, or when running tests alongside development work. Use `npm run test:fast` for quicker results when you have available CPU/memory resources.

For detailed configuration options and resource optimization, see `docs/testing-resource-optimization.md`.
The project focuses on testing observable outcomes ("exit doors"):
- API response correctness and schema validation
- State changes (database/cache mutations)
- External service call validation
- Message queue interactions
- Observability (logging, error handling, metrics)
Coverage thresholds are strategically configured per-file rather than globally, following Vitest best practices for infrastructure-heavy projects. See vitest.config.js for details.
- `tests/*.test.js` - Unit tests for core components
- `tests/*.integration.test.js` - Integration tests for transports and connections
- `tests/helpers/` - Shared test utilities and patterns
Tests follow the AAA (Arrange-Act-Assert) pattern with explicit comments for clarity. All tests validate observable behavior rather than implementation details, ensuring tests remain valuable as code evolves.
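A minimal, hypothetical illustration of that AAA layout (assuming Vitest, as configured in `vitest.config.js`; the endpoint and assertions are illustrative):

```javascript
// Hypothetical example of the Arrange-Act-Assert layout used across the suite.
import { describe, expect, it } from "vitest";

describe("GET /api/health", () => {
  it("reports a ready state with a timestamp", async () => {
    // Arrange: point at a running hub instance (URL is illustrative).
    const baseUrl = process.env.HUB_URL ?? "http://localhost:3000";

    // Act: exercise the observable "exit door" - the HTTP response.
    const response = await fetch(`${baseUrl}/api/health`);
    const body = await response.json();

    // Assert: validate observable behavior, not implementation details.
    expect(response.status).toBe(200);
    expect(body.state).toBe("ready");
    expect(body.timestamp).toBeDefined();
  });
});
```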
For detailed testing guidelines, see CONTRIBUTING.md.
Recent sprints focused on improving code quality through:
- Critical Bug Fixes: Resolved variable scope issues, added null checks throughout
- Error Handling: Enhanced with comprehensive try-catch blocks and logging
- Promise Management: Improved server startup with Promise.allSettled
- Constants Extraction: Centralized all magic numbers in `src/utils/constants.js`
- Resource Cleanup: Standardized cleanup patterns to prevent memory leaks
- JSDoc Documentation: 100% coverage for all public methods
- Function Decomposition: Split large functions following Single Responsibility Principle
- Memory Leak Prevention: Fixed event handler duplication issues
- Code Style Standardization: Fixed 26 of 27 ESLint violations
For detailed sprint retrospectives and development workflow, see IMP_WF.md.
MCP Hub uses a registry system for marketplace functionality. This provides:
- Server Discovery: Centralized registry for discovering available MCP servers
- Comprehensive Metadata: Server information including categories and installation instructions
- Caching: Improved cache system with 1-hour TTL for frequent updates
- Fallback Support: Automatic fallback to curl when fetch fails (useful for proxy/VPN environments)
The marketplace is updated regularly with new servers and improvements to existing entries.
- Custom marketplace integration (MCP Registry)
- HTTP connection pooling for remote servers
- Tool filtering system with LLM categorization
- VS Code configuration compatibility
- Multi-configuration file support
- Workspace management and tracking
- Real-time event streaming with batching
- Development mode with hot-reload
- OAuth 2.0 authentication with PKCE
- Comprehensive backend test suite (530+ tests, production-stable)
- Production Deployment: Stable operation with 12+ MCP servers and 108+ tools
- Docker Integration: Successfully configured and managed Docker MCP server connections
- Enhanced Web UI for server management
- Terminal UI (TUI) interface inspired by mcphub.nvim
- Advanced tool filtering patterns
- Extended marketplace features
- Plugin system for extensibility
- Built-in monitoring dashboard
- Advanced analytics and metrics
- Multi-user support with role-based access
MCP Hub is built on the Model Context Protocol specification and integrates with various MCP servers to provide a unified management interface.