rizalwfh/local-ai-agents-setup
πŸ€– AI Agents Platform - Production-Ready Local Setup

A comprehensive, production-ready Docker-based platform for building, testing, and deploying AI agents locally. This setup integrates multiple powerful tools to create a complete AI development environment.

🌟 Features

  • 🧠 LLM Gateway: Unified API access to local models via LiteLLM
  • πŸ€– Visual AI Agent Builder: Drag-and-drop interface with Flowise
  • πŸ“Š Vector Database: PostgreSQL with pgvector for similarity search
  • πŸ“„ Document Processing: Unstructured API for various file formats
  • πŸ§ͺ LLM Testing: Automated evaluation with Promptfoo
  • βš™οΈ Workflow Automation: Business process automation with n8n
  • πŸ“ˆ Monitoring: Real-time metrics and dashboards with Grafana
  • πŸ’Ύ Object Storage: S3-compatible storage with MinIO
  • πŸ”’ Production Security: Rate limiting, health checks, and monitoring

πŸ—οΈ Architecture

```mermaid
graph TB
    nginx[Nginx Reverse Proxy]

    subgraph "AI Services"
        litellm[LiteLLM Gateway]
        flowise[Flowise AI Agents]
        unstructured[Unstructured API]
        promptfoo[Promptfoo Testing]
    end

    subgraph "Workflow & Automation"
        n8n[n8n Workflows]
    end

    subgraph "Storage & Data"
        postgres[(PostgreSQL + pgvector)]
        redis[(Redis)]
        minio[MinIO S3 Storage]
    end

    subgraph "Monitoring"
        grafana[Grafana Dashboard]
    end

    nginx --> litellm
    nginx --> flowise
    nginx --> unstructured
    nginx --> n8n
    nginx --> grafana
    nginx --> minio

    litellm --> postgres
    litellm --> redis
    flowise --> postgres
    n8n --> postgres
    grafana --> postgres
```

πŸš€ Quick Start

Prerequisites

  • Docker (v20.10+)
  • Docker Compose (v2.0+)
  • At least 8GB RAM
  • 20GB free disk space

1. Clone and Setup

git clone <this-repo>
cd local-ai-agents-setup

# Copy and configure environment
cp .env.example .env
# Review .env file settings

2. Start the Platform

# Full startup with health checks
./scripts/start.sh

# Or quick start without pulling images
./scripts/start.sh quick

3. Access Services

| Service | URL | Credentials |
|---------|-----|-------------|
| Main Dashboard | http://localhost:80 | - |
| LiteLLM Gateway | http://localhost:4000 | API Key: sk-litellm-master-key-2024 |
| Flowise (AI Agents) | http://localhost:3000 | - |
| n8n (Workflows) | http://localhost:5678 | admin / admin123 |
| Grafana (Monitoring) | http://localhost:3001 | admin / admin123 |
| MinIO Console | http://localhost:9001 | minioadmin / minioadmin123 |
| Unstructured API | http://localhost:8000 | - |
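Once the stack is up, a small poll-until-ready helper can confirm each service from the table above is responding. This is a sketch: the commented health-check URLs are assumptions based on the base URLs documented here, not verified endpoints.

```shell
# Poll a service until it answers, for up to ~30 seconds.
# Usage: wait_for <name> <check-command...>
wait_for() {
  name=$1; shift
  for _ in $(seq 1 30); do
    if "$@" >/dev/null 2>&1; then
      echo "$name is up"
      return 0
    fi
    sleep 1
  done
  echo "$name did not become ready" >&2
  return 1
}

# Example checks (base URLs from the table above; exact paths are assumptions):
# wait_for "LiteLLM" curl -fsS http://localhost:4000/health
# wait_for "Flowise" curl -fsS http://localhost:3000
wait_for "demo" true
```

The helper takes any check command, so the same loop works for HTTP probes, `pg_isready`, or `docker exec` checks.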

πŸ“– Detailed Setup Guide

Configuring LiteLLM Models

Models are configured through the LiteLLM web dashboard:

  1. Start the platform: ./scripts/start.sh
  2. Open the LiteLLM dashboard at http://localhost:4000
  3. Add your models through the web interface
  4. Configure API keys, rate limits, and other settings
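After adding models in the dashboard, they can be listed through the gateway's OpenAI-compatible API. A minimal sketch; the fallback `echo` is only there so the snippet degrades gracefully when the stack is not running:

```shell
# List the models currently registered with the LiteLLM gateway
list_models() {
  curl -fsS http://localhost:4000/v1/models \
    -H "Authorization: Bearer sk-litellm-master-key-2024" \
    || echo "gateway not reachable"
}
list_models
```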

πŸ› οΈ Usage Examples

1. Testing LLM Connectivity

# Test via LiteLLM Gateway
curl -X POST http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-litellm-master-key-2024" \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

2. Building AI Agents in Flowise

  1. Open http://localhost:3000
  2. Create a new chatflow
  3. Add LLM node pointing to http://litellm:4000/v1
  4. Configure with API key: sk-litellm-master-key-2024
  5. Add vector store using PostgreSQL connection
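Step 5 depends on the pgvector extension being installed. It can be checked from the host; the container and user names match the troubleshooting section below, while the database name `ai_db` is an assumption — substitute your `POSTGRES_DB` value:

```shell
# Check that the vector extension is installed in PostgreSQL.
# Falls back to a message when Docker or the container is unavailable.
check_pgvector() {
  docker exec ai-agents-postgres \
    psql -U ai_user -d ai_db -tAc \
    "SELECT extname FROM pg_extension WHERE extname = 'vector';" \
    2>/dev/null || echo "postgres container not reachable"
}
check_pgvector
```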

3. Document Processing

# Process a PDF document
curl -X POST http://localhost:8000/general/v0/general \
  -F "files=@document.pdf" \
  -F "strategy=hi_res"
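To run a whole folder through the same endpoint, a loop like the following works. A sketch: the `docs/` directory and the `.json` output naming are assumptions, not conventions from this repo.

```shell
# Send every PDF in docs/ to the Unstructured API, saving JSON alongside it
process_pdf() {
  curl -fsS -X POST http://localhost:8000/general/v0/general \
    -F "files=@$1" -F "strategy=hi_res" -o "${1%.pdf}.json"
}

for f in docs/*.pdf; do
  [ -e "$f" ] || continue   # skip when the glob matches nothing
  process_pdf "$f"
done
```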

4. Running LLM Evaluations

# Run Promptfoo tests
docker exec ai-agents-promptfoo npm run eval

# View results
docker exec ai-agents-promptfoo npm run view

5. Creating Workflows in n8n

  1. Access http://localhost:5678 (admin/admin123)
  2. Create workflows integrating:
    • LiteLLM for AI processing
    • Unstructured for document parsing
    • PostgreSQL for data storage
    • MinIO for file storage

πŸ”§ Management Scripts

Start/Stop Services

# Start all services
./scripts/start.sh

# Stop services
./scripts/stop.sh

# Stop specific service
./scripts/stop.sh service postgres

# Restart specific service
./scripts/stop.sh restart litellm

Backup and Restore

# Create full backup
./scripts/backup.sh

# List backups
./scripts/backup.sh list

# Restore from backup
./scripts/backup.sh restore backups/ai-agents-backup-20241201_120000.tar.gz

# Cleanup old backups (keep 30 days)
./scripts/backup.sh cleanup 30

πŸ“Š Monitoring and Observability

Grafana Dashboards

Access Grafana at http://localhost:3001 for:

  • PostgreSQL performance metrics
  • Redis cache statistics
  • LiteLLM request/response metrics
  • System resource usage
  • Service health status

Health Checks

# Check all service health
./scripts/start.sh health

# View service status
./scripts/start.sh status

πŸ”’ Security Considerations

Production Deployment

  1. Change Default Passwords: Update all credentials in .env
  2. Enable HTTPS: Configure SSL certificates in Nginx
  3. Network Security: Use Docker networks and firewall rules
  4. API Authentication: Implement proper API key management
  5. Backup Encryption: Encrypt sensitive backup data

Environment Variables

Key security settings in .env:

# Generate strong passwords
POSTGRES_PASSWORD=your-secure-password
REDIS_PASSWORD=your-redis-password
LITELLM_MASTER_KEY=your-api-key

# JWT secrets
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
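The same idea can be scripted so that fresh secrets are generated in one go. A sketch; the key names match the `.env` entries above, and the `sk-` prefix mirrors the default LiteLLM key format used in this README:

```shell
# Generate strong random credentials and append them to .env
POSTGRES_PASSWORD=$(openssl rand -hex 32)
REDIS_PASSWORD=$(openssl rand -hex 32)
LITELLM_MASTER_KEY="sk-$(openssl rand -hex 24)"

printf 'POSTGRES_PASSWORD=%s\nREDIS_PASSWORD=%s\nLITELLM_MASTER_KEY=%s\n' \
  "$POSTGRES_PASSWORD" "$REDIS_PASSWORD" "$LITELLM_MASTER_KEY" >> .env
```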

πŸš€ Advanced Configuration

Scaling Services

To scale services horizontally:

# Scale LiteLLM instances
docker-compose up -d --scale litellm=3

# Scale with load balancer
# Edit nginx configuration for additional upstreams
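On the nginx side, the upstream can stay a single entry, because Docker Compose's embedded DNS resolves a scaled service name to all of its replicas. A sketch under that assumption — names follow the compose services above, not a verified config from this repo:

```nginx
upstream litellm_backend {
    # Compose DNS round-robins "litellm" across all scaled replicas
    server litellm:4000;
}

server {
    listen 80;
    location /v1/ {
        proxy_pass http://litellm_backend;
    }
}
```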

Custom Models

Add new models to configs/litellm/config.yaml:

- model_name: "custom-model"
  litellm_params:
    model: "openai/custom-model"
    api_base: "http://your-server:port/v1"
    rpm: 60
    tpm: 8000

External Integrations

Configure external services in .env:

# OpenAI fallback
OPENAI_API_KEY=your-openai-key

# Cloud storage
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key

πŸ› Troubleshooting

Common Issues

  1. Services Not Starting

    # Check Docker status
    docker info
    
    # View service logs
    docker-compose logs service-name
  2. Database Connection Issues

    # Verify PostgreSQL is ready
    docker exec ai-agents-postgres pg_isready -U ai_user
  3. LLM Gateway Errors

    # Check LiteLLM logs
    docker logs ai-agents-litellm
    
    # Test model connectivity through LiteLLM
    curl http://localhost:4000/v1/models
  4. Memory Issues

    # Monitor resource usage
    docker stats
    
    # Adjust memory limits in docker-compose.yml

Log Files

Service logs are available via:

# All services
docker-compose logs

# Specific service
docker-compose logs -f litellm

# System logs
./logs/

🀝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

πŸ“ž Support

  • πŸ“š Documentation: This README and inline comments
  • πŸ› Issues: GitHub Issues
  • πŸ’¬ Discussions: GitHub Discussions
  • πŸ“§ Email: [Your support email]

πŸš€ Happy AI Agent Building!
