- Node.js v20 or later
- pnpm v8 or later
- Redis v6 or later
- Docker and Docker Compose (optional)
Install Dependencies
# Install pnpm if you haven't already
npm install -g pnpm

# Install project dependencies
pnpm install:all
Set Up Redis
# Using Docker
docker run --name tunnel-redis -p 6379:6379 -d redis:6
Environment Configuration
Create packages/server/.env:

PORT=3000
REDIS_URL=redis://localhost:6379
JWT_SECRET=your-secret-key
Create packages/client/.env:

SERVER_URL=ws://localhost:3000
LOCAL_PORT=8080
AUTH_TOKEN=your-auth-token
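If you want the server to fail fast on missing values, here is a minimal sketch of how these variables could be loaded and validated at startup. The dotenv import and the loadConfig helper are illustrative assumptions, not the project's actual code:

```ts
// Illustrative only: load and validate packages/server/.env at startup.
// Assumes the dotenv package; the real server may read configuration differently.
import 'dotenv/config';

interface ServerConfig {
  port: number;
  redisUrl: string;
  jwtSecret: string;
}

export function loadConfig(): ServerConfig {
  const { PORT, REDIS_URL, JWT_SECRET } = process.env;
  if (!REDIS_URL || !JWT_SECRET) {
    throw new Error('REDIS_URL and JWT_SECRET must be set in packages/server/.env');
  }
  return { port: Number(PORT ?? 3000), redisUrl: REDIS_URL, jwtSecret: JWT_SECRET };
}
```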
Build Packages
# Build all packages in order
pnpm build:common
pnpm build:server
pnpm build:client
Start the Services
In one terminal, start the server:
pnpm dev:server
In another terminal, start the client:
pnpm dev:client
Check Server Status
curl http://localhost:3000/health
# Should return: {"status": "ok"}
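The same check can be scripted; a quick sketch using Node 20's built-in fetch, run as an ES module (the URL assumes the default PORT=3000):

```ts
// Scripted equivalent of the curl check (Node 20+ has fetch built in).
const res = await fetch('http://localhost:3000/health');
const body = (await res.json()) as { status?: string };
if (!res.ok || body.status !== 'ok') {
  throw new Error(`Server unhealthy: HTTP ${res.status} ${JSON.stringify(body)}`);
}
console.log('Server is healthy');
```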
Test Tunnel Connection
# Start a test server
npx http-server -p 8080

# In another terminal, create a tunnel
tunnel start -p 8080
Build Fails
# Clean and rebuild
pnpm clean
pnpm install:all
pnpm build
Redis Connection Error
# Check Redis status
docker ps | grep redis
# or
redis-cli ping
Port Already in Use
# Find and kill process using port
lsof -i :3000
kill -9 <PID>
# Development
pnpm dev:server # Start server in dev mode
pnpm dev:client # Start client in dev mode
# Building
pnpm build # Build all packages
pnpm build:common # Build common package
pnpm build:server # Build server package
pnpm build:client # Build client package
# Docker Setup
pnpm docker:build # Build Docker images
pnpm docker:up # Start all services in Docker
pnpm docker:down # Stop all Docker services
pnpm docker:logs # View Docker container logs
pnpm docker:clean # Remove all Docker resources
# Docker Individual Services
docker-compose up server # Start only the server
docker-compose up redis # Start only Redis
docker-compose up -d # Start in detached mode
docker-compose logs -f server # Follow server logs
# Maintenance
pnpm clean # Clean build files
pnpm install:all # Install dependencies
# Testing
pnpm test # Run all tests
pnpm lint # Run linting

tunnel-service/
├── packages/
│ ├── common/ # Shared types and utilities
│ ├── server/ # Server implementation
│ └── client/ # Client implementation
└── package.json
┌─────────┐ ┌──────────┐ ┌─────────┐
│ Client │◄────┤ Server │◄────┤ Target │
│ (CLI) │ │ (WS) │ │ (HTTP) │
└─────────┘ └──────────┘ └─────────┘
The service consists of three main components:
- Client: CLI tool that creates secure tunnels
- Server: WebSocket server handling tunnel connections
- Common: Shared types and utilities
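As a rough illustration of what the common package provides, here is a hypothetical sketch of shared message types; all names and fields are assumptions, not the actual schema:

```ts
// Hypothetical shapes for the shared types in packages/common.
export type TunnelMessageType = 'register' | 'request' | 'response' | 'heartbeat';

export interface TunnelMessage {
  type: TunnelMessageType;
  tunnelId: string;
  payload?: string; // base64 body, possibly compressed/encrypted (see the security notes below)
}

export interface ForwardedRequest {
  method: string;
  path: string;
  headers: Record<string, string>;
  body?: string;
}
```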
- Client establishes WebSocket connection with server
- Server assigns unique tunnel ID
- External requests hit server endpoint
- Server forwards requests to appropriate client
- Client proxies to local service
- Response follows reverse path
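A condensed server-side sketch of this flow, assuming the ws package; the x-tunnel-id routing rule, message shapes, and error handling are illustrative rather than the service's real protocol:

```ts
// Hypothetical sketch of the forwarding flow above (packages/server).
import http from 'node:http';
import { randomUUID } from 'node:crypto';
import { WebSocketServer, WebSocket } from 'ws';

const tunnels = new Map<string, WebSocket>(); // tunnelId -> connected client

const server = http.createServer((req, res) => {
  // External request hits the server endpoint.
  const tunnelId = req.headers['x-tunnel-id'];
  const client = typeof tunnelId === 'string' ? tunnels.get(tunnelId) : undefined;
  if (!client) {
    res.writeHead(404).end('Unknown tunnel');
    return;
  }
  // Forward the request to the matching client, which proxies it to the local service.
  client.send(JSON.stringify({ type: 'request', method: req.method, path: req.url }));
  // The response follows the reverse path over the same WebSocket.
  client.once('message', (data) => res.end(data.toString()));
});

const wss = new WebSocketServer({ server });
wss.on('connection', (socket) => {
  const tunnelId = randomUUID(); // server assigns a unique tunnel ID
  tunnels.set(tunnelId, socket);
  socket.send(JSON.stringify({ type: 'register', tunnelId }));
  socket.on('close', () => tunnels.delete(tunnelId));
});

server.listen(3000);
```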
Messages between client and server are secured using:
- AES encryption with a shared key
- Compression for messages larger than 1KB
- Base64 encoding for compressed messages
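A minimal sketch of that pipeline using Node's built-in crypto and zlib, keyed by the ENCRYPTION_KEY variable listed below; the AES-256-GCM mode, field names, and wire format are assumptions rather than the service's exact implementation:

```ts
// Minimal sketch of the outbound message pipeline: compress when > 1KB,
// AES-encrypt with the shared ENCRYPTION_KEY, base64-encode the result.
import { createCipheriv, randomBytes } from 'node:crypto';
import { gzipSync } from 'node:zlib';

const key = Buffer.from(process.env.ENCRYPTION_KEY ?? '', 'utf8'); // must be exactly 32 bytes

export function packMessage(message: object): string {
  let payload = Buffer.from(JSON.stringify(message), 'utf8');
  const compressed = payload.length > 1024; // compress messages larger than 1KB
  if (compressed) {
    payload = gzipSync(payload);
  }
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(payload), cipher.final()]);
  // Ship IV, auth tag, compression flag, and ciphertext as base64.
  return JSON.stringify({
    iv: iv.toString('base64'),
    tag: cipher.getAuthTag().toString('base64'),
    compressed,
    data: encrypted.toString('base64'),
  });
}
```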
Environment variables required for security:
# Server
ENCRYPTION_KEY=your-32-character-encryption-key
# Client
ENCRYPTION_KEY=your-32-character-encryption-key # Must match server

// Log levels: error, warn, info, debug
logger.error('Connection failed', { error, tunnelId });
logger.info('New tunnel created', { tunnelId, clientId });
logger.debug('Request received', { path, method, headers });

Access logs:
# View server logs
pnpm docker:logs server
# Filter error logs
pnpm docker:logs server | grep ERROR

Logs are stored in packages/client/logs/:
- error.log: Error messages
- combined.log: All log levels
- tunnel.log: Tunnel-specific events
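A hypothetical logger configuration that would produce these files, assuming a winston-style logger (the project's actual setup may differ; paths are relative to packages/client):

```ts
import winston from 'winston';

export const logger = winston.createLogger({
  level: process.env.LOG_LEVEL ?? 'info', // error, warn, info, debug
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'logs/combined.log' }),
    // A dedicated transport or child logger could feed logs/tunnel.log with tunnel events.
  ],
});
```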
View logs:
# Show last 100 lines
tunnel logs -n 100
# Follow log output
tunnel logs -f

Monitored metrics:
- CPU/Memory usage
- WebSocket connections
- Active tunnels
- Request latency
View metrics:
# Last hour metrics
tunnel metrics -t 1h
# Custom time range
tunnel metrics -t 24h

Metrics endpoint: http://localhost:3000/metrics
Example metrics:
# HELP tunnel_active_connections Current number of active tunnel connections
# TYPE tunnel_active_connections gauge
tunnel_active_connections 5
# HELP tunnel_requests_total Total number of tunnel requests
# TYPE tunnel_requests_total counter
tunnel_requests_total{status="success"} 150
tunnel_requests_total{status="error"} 3
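A sketch of how metrics like these could be registered and served, assuming the prom-client library; in the real service the /metrics route would hang off the existing HTTP server, and the update points below are illustrative:

```ts
import http from 'node:http';
import { Counter, Gauge, register } from 'prom-client';

const activeConnections = new Gauge({
  name: 'tunnel_active_connections',
  help: 'Current number of active tunnel connections',
});

const requestsTotal = new Counter({
  name: 'tunnel_requests_total',
  help: 'Total number of tunnel requests',
  labelNames: ['status'],
});

// Example updates from tunnel lifecycle hooks:
activeConnections.inc();                  // a tunnel connected
requestsTotal.inc({ status: 'success' }); // a request was forwarded successfully

http.createServer(async (req, res) => {
  if (req.url === '/metrics') {
    res.setHeader('Content-Type', register.contentType);
    res.end(await register.metrics());
  } else {
    res.writeHead(404).end();
  }
}).listen(3000);
```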
Check connection status:
curl http://localhost:3000/status
Monitor WebSocket traffic:
# Start Wireshark capture
wireshark -i lo0 -f "port 3000"
Debug logs:
# Enable debug logging
DEBUG=tunnel:* pnpm dev:server
Check server status:
curl http://localhost:3000/health
Verify WebSocket URL:
# Should start with ws:// or wss://
echo $SERVER_URL
Check for firewall issues:
nc -zv localhost 3000
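A small TypeScript probe can also confirm the WebSocket endpoint is reachable end to end (assumes the ws package; URL and port are the defaults from this guide):

```ts
import WebSocket from 'ws';

const ws = new WebSocket(process.env.SERVER_URL ?? 'ws://localhost:3000');
ws.on('open', () => {
  console.log('WebSocket endpoint reachable');
  ws.close();
});
ws.on('error', (err) => {
  console.error('WebSocket connection failed:', err.message);
  process.exit(1);
});
```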
Enable performance logging:
export TUNNEL_PERF_LOGS=true
Monitor request timing:
tunnel metrics --type latency
Check Redis performance:
redis-cli --latency
- JWT-based authentication
- Token rotation
- Rate limiting
Configuration:
JWT_SECRET=your-secret
TOKEN_EXPIRY=24h
RATE_LIMIT=100

- TLS for WebSocket connections
- End-to-end encryption for tunnel data
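For the JWT settings above, a minimal sketch of issuing and verifying tokens, assuming the jsonwebtoken package; the payload shape and helper names are illustrative:

```ts
import jwt, { type Secret } from 'jsonwebtoken';

const secret: Secret = process.env.JWT_SECRET ?? '';

export function issueAuthToken(clientId: string): string {
  return jwt.sign({ clientId }, secret, { expiresIn: '24h' }); // mirrors TOKEN_EXPIRY
}

export function verifyAuthToken(token: string): { clientId: string } {
  // Throws if the token is invalid or expired.
  return jwt.verify(token, secret) as { clientId: string };
}
```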
Enable TLS:
# Generate certificates
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
# Start server with TLS
SSL_KEY=key.pem SSL_CERT=cert.pem pnpm dev:server

WebSocket server tuning (requires the ws package):
const WebSocket = require('ws');

const wss = new WebSocket.Server({
perMessageDeflate: true,
maxPayload: 50 * 1024 * 1024, // 50MB
backlog: 100
});

Redis client connection options (node-redis):
const { createClient } = require('redis');

const client = createClient({
url: process.env.REDIS_URL,
socket: {
keepAlive: 5000,
reconnectStrategy: retries => Math.min(retries * 100, 3000)
}
});

# Install autocannon
npm i -g autocannon
# Run load test
autocannon -c 100 -d 30 http://localhost:3000/tunnel

# Basic health check
curl http://localhost:3000/health
# Detailed status
curl http://localhost:3000/status

Configure alert thresholds in config/alerts.json:
{
"memory_threshold": 85,
"cpu_threshold": 80,
"error_rate": 5,
"latency_threshold": 1000
}

Access metrics dashboard:
# Start Grafana
docker-compose -f docker-compose.monitoring.yml up
# Visit http://localhost:3001
# Default credentials: admin/admin

Next steps:
- Set up proper authentication
- Configure SSL/TLS
- Add monitoring
- Set up production deployment
For deployment instructions, refer to DEPLOYMENT.md (coming soon).