A comprehensive, production-ready Docker-based platform for building, testing, and deploying AI agents locally. This setup integrates multiple powerful tools to create a complete AI development environment.
- LLM Gateway: Unified API access to local models via LiteLLM
- Visual AI Agent Builder: Drag-and-drop interface with Flowise
- Vector Database: PostgreSQL with pgvector for similarity search
- Document Processing: Unstructured API for various file formats
- LLM Testing: Automated evaluation with Promptfoo
- Workflow Automation: Business process automation with n8n
- Monitoring: Real-time metrics and dashboards with Grafana
- Object Storage: S3-compatible storage with MinIO
- Production Security: Rate limiting, health checks, and monitoring
```mermaid
graph TB
    nginx[Nginx Reverse Proxy]
    subgraph "AI Services"
        litellm[LiteLLM Gateway]
        flowise[Flowise AI Agents]
        unstructured[Unstructured API]
        promptfoo[Promptfoo Testing]
    end
    subgraph "Workflow & Automation"
        n8n[n8n Workflows]
    end
    subgraph "Storage & Data"
        postgres[(PostgreSQL + pgvector)]
        redis[(Redis)]
        minio[MinIO S3 Storage]
    end
    subgraph "Monitoring"
        grafana[Grafana Dashboard]
    end
    nginx --> litellm
    nginx --> flowise
    nginx --> unstructured
    nginx --> n8n
    nginx --> grafana
    nginx --> minio
    litellm --> postgres
    litellm --> redis
    flowise --> postgres
    n8n --> postgres
    grafana --> postgres
```
- Docker (v20.10+)
- Docker Compose (v2.0+)
- At least 8GB RAM
- 20GB free disk space
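Before starting, a small preflight sketch can confirm the installed versions meet these minimums. The `version_ge` and `check_tool` helpers below are illustrative, not part of the setup scripts:

```bash
#!/usr/bin/env bash
# Illustrative preflight check for minimum tool versions.

# version_ge VER MIN: succeeds when VER >= MIN (dotted-version compare via sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# check_tool NAME MIN VER: print whether NAME at VER satisfies MIN.
check_tool() {
  local name="$1" min="$2" ver="$3"
  if version_ge "$ver" "$min"; then
    echo "$name $ver OK (>= $min)"
  else
    echo "$name $ver too old (need >= $min)"
  fi
}

# The version string would normally come from `docker --version`.
ver="$(docker --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1)"
check_tool "Docker" "20.10" "${ver:-0}"
```
The same pattern works for the Compose version (`docker compose version`).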
```bash
git clone <this-repo>
cd local-ai-agents-setup

# Copy and configure environment
cp .env.example .env
# Review .env file settings
```

```bash
# Full startup with health checks
./scripts/start.sh

# Or quick start without pulling images
./scripts/start.sh quick
```

| Service | URL | Credentials |
|---|---|---|
| Main Dashboard | http://localhost:80 | - |
| LiteLLM Gateway | http://localhost:4000 | API Key: sk-litellm-master-key-2024 |
| Flowise (AI Agents) | http://localhost:3000 | - |
| n8n (Workflows) | http://localhost:5678 | admin / admin123 |
| Grafana (Monitoring) | http://localhost:3001 | admin / admin123 |
| MinIO Console | http://localhost:9001 | minioadmin / minioadmin123 |
| Unstructured API | http://localhost:8000 | - |
Models are configured through the LiteLLM web dashboard:
- Start the platform: `./scripts/start.sh`
- Open the LiteLLM dashboard at http://localhost:4000
- Add your models through the web interface
- Configure API keys, rate limits, and other settings
```bash
# Test via LiteLLM Gateway
curl -X POST http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-litellm-master-key-2024" \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

- Open http://localhost:3000
- Create a new chatflow
- Add an LLM node pointing to `http://litellm:4000/v1`
- Configure it with API key `sk-litellm-master-key-2024`
- Add a vector store using the PostgreSQL connection
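The gateway returns the standard OpenAI chat-completions JSON, which can be post-processed with `jq`. A small sketch (the `$response` payload below is a fabricated sample, not real gateway output):

```bash
# Illustrative only: parse an OpenAI-style chat-completions response with jq.
# This sample response is fabricated for demonstration.
response='{
  "model": "your-model-name",
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Hello! How can I help?"}}
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16}
}'

# Assistant reply text
echo "$response" | jq -r '.choices[0].message.content'

# Token usage summary
echo "$response" | jq -r '.usage | "prompt=\(.prompt_tokens) completion=\(.completion_tokens)"'
```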
```bash
# Process a PDF document
curl -X POST http://localhost:8000/general/v0/general \
  -F "files=@document.pdf" \
  -F "strategy=hi_res"
```

```bash
# Run Promptfoo tests
docker exec ai-agents-promptfoo npm run eval

# View results
docker exec ai-agents-promptfoo npm run view
```

- Access http://localhost:5678 (admin/admin123)
- Create workflows integrating:
- LiteLLM for AI processing
- Unstructured for document parsing
- PostgreSQL for data storage
- MinIO for file storage
```bash
# Start all services
./scripts/start.sh

# Stop services
./scripts/stop.sh

# Stop a specific service
./scripts/stop.sh service postgres

# Restart a specific service
./scripts/stop.sh restart litellm
```

```bash
# Create full backup
./scripts/backup.sh

# List backups
./scripts/backup.sh list

# Restore from backup
./scripts/backup.sh restore backups/ai-agents-backup-20241201_120000.tar.gz

# Clean up old backups (keep 30 days)
./scripts/backup.sh cleanup 30
```

Access Grafana at http://localhost:3001 for:
- PostgreSQL performance metrics
- Redis cache statistics
- LiteLLM request/response metrics
- System resource usage
- Service health status
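A lightweight health probe can also be scripted outside Grafana. The sketch below loops over the local service URLs with `curl`; the `/health` and `/api/health` paths are assumptions about each service's health endpoint, not verified routes:

```bash
#!/usr/bin/env bash
# Illustrative health probe: report up/down for each local service endpoint.
# Endpoint paths (/health, /api/health) are assumptions for this sketch.
check() {
  local url="$1"
  if curl -fsS --max-time 3 -o /dev/null "$url" 2>/dev/null; then
    echo "up   $url"
  else
    echo "down $url"
  fi
}

for url in \
  http://localhost:4000/health \
  http://localhost:3000 \
  http://localhost:5678 \
  http://localhost:3001/api/health; do
  check "$url"
done
```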
```bash
# Check all service health
./scripts/start.sh health

# View service status
./scripts/start.sh status
```

- Change Default Passwords: Update all credentials in `.env`
- Enable HTTPS: Configure SSL certificates in Nginx
- Network Security: Use Docker networks and firewall rules
- API Authentication: Implement proper API key management
- Backup Encryption: Encrypt sensitive backup data
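For the backup-encryption item, a minimal sketch using `openssl enc` (the helper names and the `BACKUP_PASSPHRASE` variable are illustrative, not part of the backup script):

```bash
#!/usr/bin/env bash
# Illustrative sketch: encrypt a backup archive before storing it off-site.
# BACKUP_PASSPHRASE and the helper names are assumptions for this example.
export BACKUP_PASSPHRASE="${BACKUP_PASSPHRASE:-change-me}"

# Encrypt FILE to FILE.enc (AES-256-CBC with PBKDF2 key derivation).
encrypt_backup() {
  openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass env:BACKUP_PASSPHRASE \
    -in "$1" -out "$1.enc"
}

# Decrypt FILE.enc back to FILE.
decrypt_backup() {
  openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass env:BACKUP_PASSPHRASE \
    -in "$1" -out "${1%.enc}"
}
```
Passing the passphrase via `env:` keeps it out of the process argument list, unlike `-pass pass:...`.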
Key security settings in `.env`:

```bash
# Generate strong passwords
POSTGRES_PASSWORD=your-secure-password
REDIS_PASSWORD=your-redis-password
LITELLM_MASTER_KEY=your-api-key

# JWT secrets
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
```

To scale services horizontally:
```bash
# Scale LiteLLM instances
docker-compose up -d --scale litellm=3

# Scale with a load balancer:
# edit the Nginx configuration for additional upstreams
```

Add new models to `configs/litellm/config.yaml`:
```yaml
- model_name: "custom-model"
  litellm_params:
    model: "openai/custom-model"
    api_base: "http://your-server:port/v1"
    rpm: 60
    tpm: 8000
```

Configure external services in `.env`:
```bash
# OpenAI fallback
OPENAI_API_KEY=your-openai-key

# Cloud storage
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```

- Services Not Starting

  ```bash
  # Check Docker status
  docker info
  # View service logs
  docker-compose logs service-name
  ```

- Database Connection Issues

  ```bash
  # Verify PostgreSQL is ready
  docker exec ai-agents-postgres pg_isready -U ai_user
  ```

- LLM Gateway Errors

  ```bash
  # Check LiteLLM logs
  docker logs ai-agents-litellm
  # Test model connectivity through LiteLLM
  curl http://localhost:4000/v1/models
  ```

- Memory Issues

  ```bash
  # Monitor resource usage
  docker stats
  # Adjust memory limits in docker-compose.yml
  ```
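For the memory-limit adjustment, a hedged `docker-compose.yml` fragment (the service name and values are examples to tune per host, not the project's shipped configuration):

```yaml
# Illustrative fragment: cap memory for a heavy service in docker-compose.yml.
# Values are examples only; tune per host.
services:
  litellm:
    deploy:
      resources:
        limits:
          memory: 2g
```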
Service logs are available via:

```bash
# All services
docker-compose logs

# Specific service
docker-compose logs -f litellm

# System logs
./logs/
```

- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- LiteLLM - LLM Gateway
- Flowise - AI Agent Builder
- pgvector - Vector Database Extension
- Unstructured - Document Processing
- Promptfoo - LLM Testing
- n8n - Workflow Automation
- Documentation: This README and inline comments
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: [Your support email]

Happy AI Agent Building!