A comprehensive security monitoring solution built with FastAPI, PostgreSQL, Redis, and an integrated ELK (Elasticsearch, Logstash, Kibana) stack for advanced persistent threat (APT) detection and real-time security monitoring.
- Real-time APT Detection: Advanced persistent threat monitoring using machine learning patterns
- Brute Force Detection: Automated detection of credential stuffing and brute force attacks
- Data Exfiltration Monitoring: Detection of unusual outbound data transfers
- PowerShell Attack Detection: Monitoring for suspicious PowerShell command execution
- Cross-system Correlation: APT kill-chain correlation across multiple data sources
- Risk Scoring: Automated threat severity assessment (1-10 scale)
- Real-time Alerting: Slack, email, and system notifications for security incidents
- Elasticsearch: Distributed search and analytics for log data
- Logstash: Real-time log processing and enrichment pipeline
- Kibana: Interactive dashboards and security visualizations
- Filebeat: System and application log collection
- Metricbeat: System metrics and performance monitoring
- Winlogbeat: Windows Event Log collection for hybrid environments
- Authentication & Authorization: JWT-based authentication with secure password hashing
- CRUD Operations: Full create, read, update, delete operations for todos
- Filtering & Search: Filter todos by completion status, priority, and search in title/description
- Caching: Redis-based caching for improved performance
- Database: PostgreSQL with SQLAlchemy ORM and Alembic migrations
- Monitoring: Prometheus metrics and structured logging
- Error Handling: Comprehensive error handling with detailed logging
- Testing: Complete test suite with pytest
- Docker: Containerized deployment with Docker Compose
- Documentation: Auto-generated API documentation with FastAPI
- Elasticsearch: 8.11.0 - Search and analytics engine
- Logstash: 8.11.0 - Log processing pipeline
- Kibana: 8.11.0 - Data visualization and dashboards
- Filebeat: 8.11.0 - Log shipping agent
- Metricbeat: 8.11.0 - System metrics collection
- Winlogbeat: 8.11.0 - Windows Event Log collection
- Framework: FastAPI
- Database: PostgreSQL
- Cache: Redis
- ORM: SQLAlchemy
- Migration: Alembic
- Authentication: JWT with python-jose
- Password Hashing: bcrypt
- Logging: structlog
- Monitoring: Prometheus
- Testing: pytest
- Containerization: Docker & Docker Compose
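The authentication stack listed above (JWT via python-jose, bcrypt password hashing) can be sketched roughly as follows; the helper names and settings are illustrative, not the repository's actual code:

```python
# Hypothetical sketch of JWT issuance and bcrypt hashing with python-jose and
# passlib; names and values are illustrative, not the project's real helpers.
from datetime import datetime, timedelta

from jose import jwt
from passlib.context import CryptContext

SECRET_KEY = "change-me"              # would come from the SECRET_KEY env var
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(password: str) -> str:
    return pwd_context.hash(password)

def verify_password(plain: str, hashed: str) -> bool:
    return pwd_context.verify(plain, hashed)

def create_access_token(subject: str) -> str:
    expire = datetime.utcnow() + timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
    return jwt.encode({"sub": subject, "exp": expire}, SECRET_KEY, algorithm=ALGORITHM)
```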
- Clone the repository:
```bash
git clone https://github.com/rohansen856/elk-stack-monitoring
cd elk-stack-monitoring
```
- Copy environment variables:
```bash
cp .env.example .env
```
- Start the services:
```bash
docker-compose up -d
```
- Run database migrations:
```bash
docker-compose exec app alembic upgrade head
```

The API will be available at `http://localhost:8000`.
- Create and activate a virtual environment:
```bash
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
- Install dependencies:
```bash
pip install -r requirements.txt
```
- Set up environment variables:
```bash
cp .env.example .env
# Edit .env with your configuration
```
- Start PostgreSQL and Redis services
- Run database migrations:
```bash
alembic upgrade head
```
- Start the application:
```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

- `POST /api/v1/users/register` - Register a new user
- `POST /api/v1/users/login` - Login user
- `GET /api/v1/users/me` - Get current user info
- `GET /api/v1/todos/` - List todos with optional filters
- `POST /api/v1/todos/` - Create a new todo
- `GET /api/v1/todos/{id}` - Get a specific todo
- `PUT /api/v1/todos/{id}` - Update a todo
- `DELETE /api/v1/todos/{id}` - Delete a todo
- `GET /api/v1/todos/stats/summary` - Get todo statistics
- `GET /health` - Health check endpoint
- `GET /metrics` - Prometheus metrics
- `GET /docs` - Interactive API documentation (development only)
- `GET /api/v1/security/threats/brute-force` - Detect brute force attacks
- `GET /api/v1/security/threats/data-exfiltration` - Detect data exfiltration attempts
- `GET /api/v1/security/threats/powershell` - Detect suspicious PowerShell activity
- `GET /api/v1/security/threats/apt-correlation` - APT kill-chain correlation analysis
- `POST /api/v1/security/alerts/test` - Test security alerting system
curl -X POST "http://localhost:8000/api/v1/users/register" \
-H "Content-Type: application/json" \
-d '{
"email": "[email protected]",
"username": "testuser",
"password": "securepassword123"
}'curl -X POST "http://localhost:8000/api/v1/users/login" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "[email protected]&password=securepassword123"curl -X POST "http://localhost:8000/api/v1/todos/" \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"title": "Buy groceries",
"description": "Milk, bread, and eggs",
"priority": "medium",
"due_date": "2024-12-31T10:00:00"
}'curl "http://localhost:8000/api/v1/todos/?completed=false&priority=high&search=urgent" \
-H "Authorization: Bearer YOUR_TOKEN"| Variable | Description | Default |
|---|---|---|
DATABASE_URL |
PostgreSQL connection string | Required |
REDIS_URL |
Redis connection string | Required |
SECRET_KEY |
JWT secret key | Required |
ALGORITHM |
JWT algorithm | HS256 |
ACCESS_TOKEN_EXPIRE_MINUTES |
Token expiration time | 30 |
ENVIRONMENT |
Environment (development/production) | development |
LOG_LEVEL |
Logging level | INFO |
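For illustration, these variables might be read at startup roughly like this (an assumption; the project may use a pydantic settings class or another loader):

```python
# Illustrative configuration loading; the repository's actual settings module
# may differ (e.g. a pydantic Settings class).
import os

DATABASE_URL = os.environ["DATABASE_URL"]      # required
REDIS_URL = os.environ["REDIS_URL"]            # required
SECRET_KEY = os.environ["SECRET_KEY"]          # required
ALGORITHM = os.getenv("ALGORITHM", "HS256")
ACCESS_TOKEN_EXPIRE_MINUTES = int(os.getenv("ACCESS_TOKEN_EXPIRE_MINUTES", "30"))
ENVIRONMENT = os.getenv("ENVIRONMENT", "development")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
```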
Users:
- `id` (Primary Key)
- `email` (Unique)
- `username` (Unique)
- `hashed_password`
- `is_active`
- `created_at`
- `updated_at`
Todos:
- `id` (Primary Key)
- `title`
- `description`
- `completed`
- `priority` (low, medium, high)
- `due_date`
- `created_at`
- `updated_at`
- `owner_id` (Foreign Key to Users)
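A rough SQLAlchemy sketch of the two tables above (column types, defaults, and relationship names are assumptions; the real models may differ):

```python
# Illustrative SQLAlchemy models for the schema above; not the repository's
# exact model definitions.
from datetime import datetime

from sqlalchemy import (Boolean, Column, DateTime, Enum, ForeignKey, Integer,
                        String, Text)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, index=True, nullable=False)
    username = Column(String, unique=True, index=True, nullable=False)
    hashed_password = Column(String, nullable=False)
    is_active = Column(Boolean, default=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

    todos = relationship("Todo", back_populates="owner")

class Todo(Base):
    __tablename__ = "todos"

    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    description = Column(Text)
    completed = Column(Boolean, default=False)
    priority = Column(Enum("low", "medium", "high", name="priority"), default="medium")
    due_date = Column(DateTime, nullable=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
    owner_id = Column(Integer, ForeignKey("users.id"), nullable=False)

    owner = relationship("User", back_populates="todos")
```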
Run the test suite:
```bash
pytest
```
Run tests with coverage:
```bash
pytest --cov=app --cov-report=html
```
Create a new migration:
```bash
alembic revision --autogenerate -m "Description of changes"
```
Apply migrations:
```bash
alembic upgrade head
```
Format code:
```bash
black app/ tests/
```
Lint code:
```bash
flake8 app/ tests/
```
Type checking:
```bash
mypy app/
```
The application exposes Prometheus metrics at `/metrics`, including:
- Request count by method, endpoint, and status
- Request duration histograms
- Custom application metrics
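As a rough illustration, such metrics could be defined with prometheus_client like this (metric and label names are assumptions, not necessarily what the app exports):

```python
# Illustrative prometheus_client metrics; actual metric names/labels exported
# by the application may differ.
from prometheus_client import Counter, Histogram, generate_latest

REQUEST_COUNT = Counter(
    "http_requests_total", "Total HTTP requests", ["method", "endpoint", "status"]
)
REQUEST_LATENCY = Histogram(
    "http_request_duration_seconds", "HTTP request duration", ["method", "endpoint"]
)

# Record one request and render the text exposition served at /metrics
REQUEST_COUNT.labels(method="GET", endpoint="/api/v1/todos/", status="200").inc()
with REQUEST_LATENCY.labels(method="GET", endpoint="/api/v1/todos/").time():
    pass  # handler work would run here

print(generate_latest().decode())
```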
Structured logging with the following fields:
- `timestamp`
- `level`
- `message`
- `request_id`
- `user_id` (when applicable)
- Additional context fields
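A minimal structlog configuration that produces JSON logs with fields like these might look as follows (illustrative; the project's actual logging setup may differ):

```python
# Illustrative structlog setup: JSON output with timestamp/level/message plus
# bound context such as request_id and user_id.
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso", key="timestamp"),
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger("todo-api")
log.info("todo_created", request_id="abc123", user_id=42)
```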
The `/health` endpoint returns:
```json
{
  "status": "healthy",
  "database": "ok",
  "redis": "ok"
}
```
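A minimal sketch of how such a handler might look in FastAPI (the check helpers here are placeholders, not the repository's actual dependencies):

```python
# Illustrative /health handler; real database/Redis checks would replace the
# placeholder helpers below.
from fastapi import FastAPI

app = FastAPI()

async def check_database() -> bool:
    return True  # placeholder: would run e.g. SELECT 1 against PostgreSQL

async def check_redis() -> bool:
    return True  # placeholder: would send PING to Redis

@app.get("/health")
async def health() -> dict:
    db_ok = await check_database()
    redis_ok = await check_redis()
    return {
        "status": "healthy" if db_ok and redis_ok else "degraded",
        "database": "ok" if db_ok else "error",
        "redis": "ok" if redis_ok else "error",
    }
```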
- Use a strong, unique `SECRET_KEY`
- Enable HTTPS in production
- Set up proper CORS policies
- Use environment-specific database credentials
- Enable rate limiting
- Set up proper logging and monitoring
- Use multiple application instances behind a load balancer
- Scale Redis for caching
- Use read replicas for PostgreSQL
- Consider connection pooling
This project features a comprehensive security monitoring solution built on the ELK stack, specifically designed for Advanced Persistent Threat (APT) detection and real-time security analytics.
- Port: 9200
- Purpose: Distributed search and analytics engine for security logs
- Indices: Specialized security indices (`security-*`, `windows-security-*`, `security-alerts-*`)
- Health Check: `http://localhost:9200/_cluster/health`
- Features: GeoIP enrichment, risk scoring, automated threat detection
- Ports:
- 5044 (Beats input - Filebeat, Metricbeat, Winlogbeat)
- 5000 (TCP input for application logs)
- 9600 (API/Monitoring)
- 514 (Syslog UDP/TCP for network devices)
- 12201 (GELF for Docker logs)
- Purpose: Real-time log processing, enrichment, and threat detection
- Configuration: `./logstash/pipeline/logstash.conf`
- Security Features: Automatic risk scoring, GeoIP lookups, threat correlation
- Port: 5601
- Purpose: Security dashboards and threat visualization
- Access: `http://localhost:5601`
- Dashboards: APT detection, geographic threat maps, security event timelines
- Features: Real-time alerting, investigation tools, threat hunting queries
```bash
# ELK Stack Configuration
ELASTICSEARCH_URL=http://elasticsearch:9200
ELASTICSEARCH_HOST=elasticsearch
ELASTICSEARCH_PORT=9200
KIBANA_HOST=kibana
KIBANA_PORT=5601
LOGSTASH_HOST=logstash
LOGSTASH_PORT=5044
LOGSTASH_TCP_PORT=5000

# Elasticsearch Settings
ES_JAVA_OPTS=-Xms512m -Xmx512m
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
```

```bash
# Start all services including ELK stack and security monitoring
docker-compose up -d

# Start only ELK services
docker-compose up -d elasticsearch logstash kibana filebeat metricbeat

# Check service health
docker-compose ps
curl http://localhost:9200/_cluster/health
curl http://localhost:5601/api/status
```

- Access Kibana: Open `http://localhost:5601`
- Create Security Index Patterns:
  - `security-*` for general security events
  - `security-auth-logs-*` for authentication events
  - `security-network-logs-*` for network security events
  - `security-alerts-*` for security alerts
  - `windows-security-logs-*` for Windows security events
- Import Security Dashboards: Navigate to "Stack Management" > "Saved Objects"
- Start Threat Hunting: Go to "Discover" for real-time security log analysis
```bash
# Test the security monitoring system
curl "http://localhost:8000/api/v1/security/threats/brute-force"
curl "http://localhost:8000/api/v1/security/threats/data-exfiltration"
curl "http://localhost:8000/api/v1/security/threats/powershell"
curl "http://localhost:8000/api/v1/security/threats/apt-correlation"

# Test alerting system
curl -X POST "http://localhost:8000/api/v1/security/alerts/test"
```

The security monitoring system automatically collects and analyzes:
- Login attempts (successful/failed)
- Privilege escalation events
- Account lockouts and password changes
- Off-hours access attempts
- Geographic access anomalies
- Firewall blocks and allows
- Network traffic patterns
- Outbound data transfers
- Command & Control communication attempts
- Lateral movement detection
- Process creation and execution
- PowerShell command monitoring
- Suspicious script execution
- Service creation and modification
- Registry changes (Windows)
- Sensitive file access
- Data staging activities
- Configuration file modifications
- Unauthorized data access attempts
Security events are structured in JSON format with enhanced fields:
- `@timestamp`: ISO timestamp
- `level`: Log level (INFO, ERROR, DEBUG, etc.)
- `logger_name`: Logger identifier
- `message`: Log message
- `service`: Service name
- `security_event`: Event type (authentication_failure, network_block, etc.)
- `src_ip`: Source IP address
- `geo.country`: Geographic location
- `risk_score`: Automated threat score (1-10)
- `user.name`: Username involved
- `process.name`: Process name
- `network.bytes_out`: Outbound data transfer
- `threat_indicators`: Array of threat indicators
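An illustrative event using these fields (all values are made up for demonstration):

```python
# Example security event shaped with the fields above; values are fabricated
# for illustration only.
example_event = {
    "@timestamp": "2024-01-15T03:12:45Z",
    "level": "WARNING",
    "logger_name": "security",
    "message": "Authentication failure",
    "service": "todo-api",
    "security_event": "authentication_failure",
    "src_ip": "203.0.113.10",
    "geo": {"country": "Unknown"},
    "risk_score": 6,
    "user": {"name": "admin"},
    "process": {"name": "sshd"},
    "network": {"bytes_out": 0},
    "threat_indicators": ["external_ip", "off_hours_access"],
}
```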
- Services not starting: Wait 2-3 minutes for all services to initialize
- Memory issues: Adjust `ES_JAVA_OPTS` in `.env` for lower memory usage
- Connection refused: Ensure all services are healthy with `docker-compose ps`
```bash
# Check Elasticsearch
curl http://localhost:9200/_cluster/health

# Check Logstash
curl http://localhost:9600/_node/stats

# Check Kibana
curl http://localhost:5601/api/status
```

```bash
# View all ELK logs
docker-compose logs -f elasticsearch logstash kibana

# View specific service logs
docker-compose logs -f elasticsearch
```

- `todo-api-logs-YYYY.MM.DD`: Daily application logs
- `test-logs`: Test documents from integration tests
```bash
# Delete log documents older than 30 days (example)
curl -X POST "localhost:9200/todo-api-logs-*/_delete_by_query" \
  -H "Content-Type: application/json" \
  -d '{"query":{"range":{"@timestamp":{"lt":"now-30d"}}}}'
```

- Pattern: Multiple failed logins (5+) followed by successful authentication
- Time Window: 15-minute sliding window
- Risk Scoring: 2x failure count (max 10)
- Indicators: External IP, multiple usernames, off-hours attempts
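The detection above could be sketched as an Elasticsearch aggregation (index name, field names, and client wiring are assumptions; the actual logic lives in `/app/services/threat_detection.py` and may differ):

```python
# Illustrative brute-force check: count authentication failures per source IP
# over the last 15 minutes and score 2x the failure count, capped at 10.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="security-auth-logs-*",
    size=0,
    query={
        "bool": {
            "filter": [
                {"term": {"security_event": "authentication_failure"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    aggs={"by_ip": {"terms": {"field": "src_ip", "min_doc_count": 5}}},
)

for bucket in resp["aggregations"]["by_ip"]["buckets"]:
    failures = bucket["doc_count"]
    risk_score = min(failures * 2, 10)
    print(f"Possible brute force from {bucket['key']}: {failures} failures, risk {risk_score}")
```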
- Pattern: Unusual outbound data transfers (>100MB/hour)
- Monitoring: Network traffic patterns and volume
- Risk Scoring: Based on data volume and destination
- Alerts: Real-time notifications for large transfers
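A comparable sketch for the 100MB/hour rule, again with assumed index and field names:

```python
# Illustrative exfiltration check: sum outbound bytes per source IP over the
# last hour and flag anything above 100 MB.
from elasticsearch import Elasticsearch

THRESHOLD_BYTES = 100 * 1024 * 1024  # 100 MB/hour

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="security-network-logs-*",
    size=0,
    query={"range": {"@timestamp": {"gte": "now-1h"}}},
    aggs={
        "by_ip": {
            "terms": {"field": "src_ip"},
            "aggs": {"bytes_out": {"sum": {"field": "network.bytes_out"}}},
        }
    },
)
for bucket in resp["aggregations"]["by_ip"]["buckets"]:
    total = bucket["bytes_out"]["value"]
    if total > THRESHOLD_BYTES:
        print(f"Possible exfiltration from {bucket['key']}: {total:.0f} bytes in the last hour")
```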
- Suspicious Patterns: Encoded commands, download strings, bypass techniques
- Monitoring: PowerShell execution logs and command-line parameters
- Risk Score: 7/10 for suspicious patterns
- Coverage: `Invoke-Expression`, `IEX`, `DownloadString`, `EncodedCommand`
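A minimal sketch of flagging those patterns in a command line and assigning the 7/10 score (illustrative; the real detector may use different matching):

```python
# Illustrative PowerShell pattern check covering the commands listed above.
import re

SUSPICIOUS_PATTERNS = [r"Invoke-Expression", r"\bIEX\b", r"DownloadString", r"EncodedCommand"]

def powershell_risk_score(command_line: str) -> int:
    """Return 7 when a suspicious pattern is present, otherwise a baseline 1."""
    if any(re.search(p, command_line, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
        return 7
    return 1

print(powershell_risk_score("Get-ChildItem C:\\Users"))                                     # 1
print(powershell_risk_score("IEX (New-Object Net.WebClient).DownloadString('http://x')"))  # 7
```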
- Cross-System Analysis: Correlates events across multiple data sources
- Kill-Chain Stages: Execution → Persistence → Exfiltration
- Risk Score: 9/10 for confirmed kill-chain patterns
- Response: Automated alerting and threat hunting queries
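A simplified sketch of the correlation idea: when events mapped to the execution, persistence, and exfiltration stages all appear for the same host, assign the 9/10 score (the event-to-stage mapping is an assumption, and ordering is ignored for brevity):

```python
# Illustrative kill-chain correlation; the mapping of security_event values to
# stages is assumed, and event ordering is ignored for brevity.
from collections import defaultdict

STAGE_BY_EVENT = {
    "powershell_execution": "execution",
    "service_created": "persistence",
    "large_outbound_transfer": "exfiltration",
}
KILL_CHAIN = {"execution", "persistence", "exfiltration"}

def correlate(events: list[dict]) -> dict[str, int]:
    """Return host -> risk score (9) for hosts showing the full kill chain."""
    stages_by_host: dict[str, set] = defaultdict(set)
    for event in events:
        stage = STAGE_BY_EVENT.get(event.get("security_event", ""))
        if stage:
            stages_by_host[event["host"]].add(stage)
    return {host: 9 for host, stages in stages_by_host.items() if stages >= KILL_CHAIN}

events = [
    {"host": "srv-01", "security_event": "powershell_execution"},
    {"host": "srv-01", "security_event": "service_created"},
    {"host": "srv-01", "security_event": "large_outbound_transfer"},
]
print(correlate(events))  # {'srv-01': 9}
```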
| Score | Level | Response | Examples |
|---|---|---|---|
| 1-2 | Low | Log only | Normal login, routine processes |
| 3-4 | Medium | Monitor | Failed login, blocked connection |
| 5-6 | High | Alert | Multiple failures, privilege escalation |
| 7-8 | Critical | Immediate response | External admin access, suspicious scripts |
| 9-10 | Emergency | Incident response | Confirmed APT activity, active breach |
- Slack Integration: Real-time threat notifications
- Email Alerts: Detailed threat summaries
- Elasticsearch Alerts: Stored for tracking and analysis
- Dashboard Alerts: Visual notifications in Kibana
- Automated: Log correlation, risk scoring, initial triage
- Semi-Automated: Alert generation, dashboard updates
- Manual: Threat investigation, incident response, remediation
- Elasticsearch security disabled for ease of development
- Default passwords in .env file (change for production)
- HTTP connections (upgrade to HTTPS for production)
- Open network access (restrict for production)
- Enable Elasticsearch security and authentication
- Configure SSL/TLS for all communications
- Set up role-based access control (RBAC)
- Implement network segmentation
- Enable audit logging
- Configure proper firewall rules
- Use strong encryption keys for Kibana saved objects
- Application Metrics: Request rates, response times, error rates
- Infrastructure Metrics: CPU, memory, disk usage
- ELK Stack Health: Cluster status, index health, ingestion rates
- Security Metrics: Threat detection rates, alert volumes, investigation times
- Elasticsearch: Query performance, index optimization, cluster performance
- Logstash: Processing rates, pipeline performance, error rates
- Kibana: Dashboard load times, user activity, visualization performance
- Application: Database connections, cache hit rates, authentication performance
- Security Overview: Real-time threat landscape
- Geographic Threat Map: Attack sources by location
- Authentication Monitoring: Login patterns and failures
- Network Security: Traffic analysis and blocks
- System Performance: Infrastructure health metrics
- Investigation Dashboard: Detailed threat analysis tools
```bash
# 1. Clone the repository
git clone https://github.com/rohansen856/elk-stack-monitoring
cd elk-stack-monitoring

# 2. Configure environment
cp .env.example .env
# Edit .env with your settings

# 3. Deploy the complete security stack
docker-compose up -d

# 4. Wait for services to initialize (2-3 minutes)
docker-compose ps

# 5. Access the security dashboard
open http://localhost:5601
```

- Verify ELK Health: Check all services are running
- Create Index Patterns: Set up security index patterns in Kibana
- Import Dashboards: Load security visualization dashboards
- Configure Alerts: Set up Slack/email notification channels
- Test Detection: Run threat detection tests
- Review Logs: Ensure log collection is working properly
- Minimum: 8GB RAM, 4 CPU cores, 50GB storage
- Recommended: 16GB RAM, 8 CPU cores, 200GB SSD storage
- Enterprise: 32GB+ RAM, 16+ CPU cores, 1TB+ SSD storage
- Elasticsearch: Use multiple nodes for high availability
- Logstash: Scale horizontally for high log volumes
- Application: Deploy multiple instances behind load balancer
- Storage: Implement index lifecycle management for log retention
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for security features
- Submit a pull request
For support, please create an issue in the repository or contact [[email protected]]
- Security Guide: `ENHANCED_SECURITY_SUMMARY.md`
- API Documentation: `http://localhost:8000/docs`
- Threat Detection: `/app/services/threat_detection.py`
- Alerting System: `/app/services/alerting.py`