AI-powered agent that analyzes application logs in real time, detects anomalies, notifies users, suggests fixes, and assists with troubleshooting. It supports both local LLMs (via Ollama) and API-based LLMs, includes an interactive web interface, and integrates with monitoring tools to streamline issue diagnosis and resolution.
- Real-time log analysis powered by LLMs (OpenAI, Anthropic, DeepSeek, Ollama, etc.)
- Interactive chat UI (TypeScript/HTML frontend)
- REST API via FastAPI
- Persistent chat history in PostgreSQL
- Log validation using Pydantic & PydanticAI
- Asynchronous backend (FastAPI, asyncpg)
- Observability with Logfire
- Short-term log storage in Redis
- Grafana/Prometheus monitoring stack (optional)
```
BACKEND/
├── main.py                # FastAPI app entry point
├── schemas.py             # Pydantic models for chat/logs
├── utilslib.py            # Log parsing, validation, and utility functions
├── pyproject.toml         # Python project config
├── uv.lock                # uv dependency lock file
├── sample.env             # Example environment config
├── README.md              # This file
├── LLM_Agents/
│   └── agentslib.py       # LLM agent logic, system prompts, tools
├── Mock_UI/
│   ├── chat_app.html      # HTML frontend
│   ├── chat_app.ts        # TypeScript frontend logic
│   └── styles.css         # UI styles
├── Mock_Services/
│   └── sent_logs.ipynb    # Notebook for mock log sending
├── Postgres_DB/
│   ├── DB_PG17.py         # Async PostgreSQL logic
│   └── initdb17/
│       ├── docker-compose.yml # Docker Compose for PostgreSQL
│       └── init_db.sql    # DB initialization script
├── Redis_DB/
│   └── ST_DB_Redis.py     # Async Redis logic for log storage
├── grafana/
│   ├── docker-compose.yml # Docker Compose for Grafana/Prometheus
│   ├── prometheus.yml     # Prometheus config
│   ├── node_exporter/     # Node exporter for metrics
│   ├── LICENSE
│   └── NOTICE
├── static/
│   └── styles.css         # Additional static styles
└── test_logs/
    ├── deanonymized_server.log
    └── deanonymized_server_backup.log
```
- Clone the repository

  ```
  git clone https://github.com/el-arma/AGH_diploma
  cd AGH_diploma
  ```

- Set up the Python environment

  ```
  uv venv
  .venv\Scripts\activate
  uv sync
  ```

- Set up PostgreSQL

  ```
  cd Postgres_DB\initdb17
  docker compose up -d
  ```

- Set up Redis (short-term log storage)

  ```
  docker run -d --name redis-stack -p 6379:6379 redis/redis-stack-server:latest
  ```

- Configure environment variables

  Copy `sample.env` to `.env` and fill in your API keys (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`).

- (Optional) Run the Grafana/Prometheus monitoring stack

  ```
  cd grafana
  docker compose up -d
  ```

- Run the application

  ```
  uvicorn main:app --host 127.0.0.1 --port 8000 --reload
  ```

  Open your browser at http://127.0.0.1:8000
| Method | Endpoint | Description |
|---|---|---|
| POST | /logs/ingest | Submit logs for async LLM analysis |
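The ingest endpoint can be exercised directly, e.g. from a script or notebook. The sketch below uses only the standard library; the payload field names (`timestamp`, `level`, `message`, `source`) are illustrative assumptions, not the actual schema, so match them to the models in `schemas.py`:

```python
import json
import urllib.request
from datetime import datetime, timezone

BASE_URL = "http://127.0.0.1:8000"

def build_log_payload(level: str, message: str, source: str) -> dict:
    # Field names are illustrative; adapt them to the actual schema in schemas.py.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
        "source": source,
    }

def ingest_log(payload: dict) -> int:
    # POST one record to /logs/ingest and return the HTTP status code.
    req = urllib.request.Request(
        f"{BASE_URL}/logs/ingest",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# With the server running:
# ingest_log(build_log_payload("ERROR", "DB connection refused", "auth-service"))
```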
| Method | Endpoint | Description |
|---|---|---|
| GET | / | Redirect to UI (main chat page) |
| GET | /chat_app.ts | Download TypeScript frontend logic |
| GET | /chat/ | Retrieve chat history (main endpoint for frontend) |
| POST | /chat/ | Send message to chat (streams LLM replies) |
| DELETE | /chat/delete | Delete chat(s) by chatId |
| POST | /set_model/ | Change LLM model (OpenAI, Anthropic, DeepSeek, Ollama) |
Explanation:
- External – endpoints intended for end users
- Internal – endpoints used internally by the system
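Since POST /chat/ streams LLM replies, a client has to parse the response body incrementally. The helper below is a minimal sketch that *assumes* the stream is newline-delimited JSON (one message object per line), which is a common framing for chat UIs; check `Mock_UI/chat_app.ts` for the actual format this app uses:

```python
import json

def parse_chat_stream(raw: bytes) -> list[dict]:
    # Assumes newline-delimited JSON framing: one message object per line.
    # Verify against the frontend code before relying on this in a client.
    return [json.loads(line) for line in raw.splitlines() if line.strip()]

# With the server running, read the streamed body of POST /chat/ in chunks
# and feed each completed line to this parser.
```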
- FastAPI – Async Python web framework
- PydanticAI – AI Agent Framework
- PostgreSQL 17 – Main DB
- asyncpg – Fast PostgreSQL driver
- Pydantic – Data validation & parsing
- Logfire – Observability and tracing
- OpenAI / Anthropic / DeepSeek / Ollama – LLM integrations
- Redis – Short-term log storage
- Grafana/Prometheus – Monitoring stack
- Change the LLM model in `LLM_Agents/agentslib.py`
- Adjust DB settings in `Postgres_DB/DB_PG17.py`
- Extend schemas in `schemas.py` to match your data structures
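Extending a schema is typically a matter of adding fields to the Pydantic models. The example below is illustrative only (the field names are assumptions, not the project's real models); it shows the validation style `schemas.py` relies on:

```python
from datetime import datetime
from typing import Literal

from pydantic import BaseModel, Field

class LogRecord(BaseModel):
    # Illustrative fields only -- mirror the actual models in schemas.py.
    timestamp: datetime
    level: Literal["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
    service: str = Field(min_length=1)
    message: str

# Pydantic parses and validates raw input in one step:
record = LogRecord.model_validate({
    "timestamp": "2025-01-01T12:00:00Z",
    "level": "ERROR",
    "service": "auth",
    "message": "connection refused",
})
```

An input with an unknown `level` or an empty `service` would raise a `ValidationError` instead of silently passing bad data downstream.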
- All DB operations are fully asynchronous
- Log analysis is handled as background tasks
- Logfire spans help trace DB and AI Agent actions
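The fire-and-forget pattern behind the background analysis can be sketched with plain asyncio (the analysis coroutine here is a placeholder, not the project's actual agent code):

```python
import asyncio

async def analyze_log(entry: str) -> str:
    # Placeholder for the real LLM analysis call.
    await asyncio.sleep(0)  # yield to the event loop, like real async I/O
    return f"analyzed: {entry}"

async def ingest(entry: str, pending: set) -> None:
    # Schedule analysis without awaiting it, so the caller returns immediately --
    # the same shape as returning from /logs/ingest before analysis finishes.
    task = asyncio.create_task(analyze_log(entry))
    pending.add(task)  # keep a strong reference so the task isn't garbage-collected
    task.add_done_callback(pending.discard)

async def main() -> list[str]:
    pending: set = set()
    for line in ["ERROR: db down", "WARN: slow query"]:
        await ingest(line, pending)  # returns immediately; analysis runs later
    return sorted(await asyncio.gather(*pending))

results = asyncio.run(main())
```

Keeping a reference to each task matters: the event loop holds only weak references, so an unreferenced task can be garbage-collected before it runs.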