🤖 An AI-powered Business Intelligence Assistant that converts natural-language queries into SQL using Ollama and Llama 3.2. Built with Spring Boot, PostgreSQL, and modern web technologies for intuitive data analysis.
- AI-Powered SQL Generation: Natural language to SQL conversion using Llama 3.2
- Modern Web UI: Responsive Bootstrap 5 interface with 150+ query examples
- RESTful API: Full REST endpoints for programmatic access
- Real-time Analytics: Interactive dashboards and query execution
- Security First: SQL injection protection and input validation
- Production Ready: Comprehensive monitoring, caching, and error handling
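The README does not show the project's actual validation code; as a minimal sketch of the kind of allow-list check the "Security First" bullet implies, a guard might accept only a single read-only SELECT statement (`SqlGuard` and its rules are hypothetical, not the project's class):

```java
import java.util.regex.Pattern;

// Hypothetical sketch: only a single read-only SELECT statement may
// reach the database; write/DDL keywords are rejected outright.
public class SqlGuard {
    // Must be exactly one SELECT statement (an optional trailing semicolon,
    // but nothing after it).
    private static final Pattern SELECT_ONLY =
            Pattern.compile("(?is)^\\s*SELECT\\b[^;]*;?\\s*$");
    // Keywords that should never appear in a generated analytics query.
    private static final Pattern FORBIDDEN =
            Pattern.compile("(?i)\\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|GRANT)\\b");

    static boolean isSafe(String sql) {
        return sql != null
                && SELECT_ONLY.matcher(sql).matches()
                && !FORBIDDEN.matcher(sql).find();
    }

    public static void main(String[] args) {
        System.out.println(isSafe("SELECT product_name, price FROM products")); // true
        System.out.println(isSafe("DROP TABLE products"));                      // false
        System.out.println(isSafe("SELECT 1; DELETE FROM sales"));              // false
    }
}
```

A real validator would likely also run the query under a read-only database role, so the guard is defense in depth rather than the only barrier.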
Backend: Spring Boot 3.5.5, Java 21, PostgreSQL 17.4, Hibernate 6.6.26
AI: Ollama with Llama 3.2 model, Spring AI integration
Frontend: Thymeleaf, Bootstrap 5, jQuery, Font Awesome
Tools: Gradle, Docker, Spring Boot Actuator
Products: 24 items across Electronics, Appliances, Furniture, Accessories
Customers: 10 profiles with Premium/Standard/Basic segments
Sales: 40+ records across Q1-Q3 2025 with regional data
```sql
CREATE TABLE products (id, product_name, category, price, description, manufacturer);
CREATE TABLE customers (id, customer_name, email, segment, city, country);
CREATE TABLE sales (id, customer_id, product_id, quantity, sale_date, revenue, region);
```
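The README does not show the prompt the service sends to Llama 3.2. As a hypothetical sketch of schema-aware prompt construction — the core technique behind NL-to-SQL generation — the schema above might be embedded alongside the user's question like this (`PromptBuilder` and the prompt wording are illustrative, not the project's actual code):

```java
// Hypothetical sketch of schema-aware prompting for NL-to-SQL.
// The table definitions are copied verbatim from the schema above.
public class PromptBuilder {
    static final String SCHEMA = String.join("\n",
            "CREATE TABLE products (id, product_name, category, price, description, manufacturer);",
            "CREATE TABLE customers (id, customer_name, email, segment, city, country);",
            "CREATE TABLE sales (id, customer_id, product_id, quantity, sale_date, revenue, region);");

    static String buildPrompt(String question) {
        // Giving the model the schema plus a narrow instruction ("a single
        // SELECT statement") is what keeps generation grounded in real columns.
        return "You are a PostgreSQL expert. Given this schema:\n" + SCHEMA
                + "\nWrite a single SELECT statement that answers: " + question;
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("Show top 5 products by revenue"));
    }
}
```

In the actual application, a prompt like this would be handed to the model through Spring AI's Ollama integration rather than assembled by hand.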
```bash
# 1. Create the database
createdb bi_database

# 2. Install Ollama and pull the model
ollama pull llama3.2

# 3. Clone and run the application
git clone https://github.com/rafiq15/nlq-to-sql-ai-assistant.git
cd nlq-to-sql-ai-assistant
./gradlew bootRun
```
Access: http://localhost:9080
Update `application.properties`:

```properties
spring.datasource.url=jdbc:postgresql://localhost:5432/bi_database
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.model=llama3.2
```
- GET /: Main UI interface
- GET /queries: Query examples page
- POST /api/query: REST API endpoint
- GET /actuator/health: Health check
```bash
curl -X POST http://localhost:9080/api/query \
  -H "Content-Type: application/json" \
  -d '{"query": "Show top 5 products by revenue"}'
```

Response:

```json
{
  "success": true,
  "generatedSql": "SELECT p.product_name, SUM(s.revenue)...",
  "data": [...],
  "metadata": {"rowCount": 5, "executionTimeMs": 45}
}
```
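The same request the curl example sends can be issued from Java with the JDK's built-in `java.net.http` client. This sketch only builds and prints the request; actually sending it (the commented-out lines) requires the application to be running on port 9080:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of calling POST /api/query programmatically. The endpoint, header,
// and JSON body shape are taken from the curl example in this README.
public class QueryClient {
    public static void main(String[] args) {
        String question = "Show top 5 products by revenue";
        String body = "{\"query\": \"" + question + "\"}";

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:9080/api/query"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        System.out.println(request.method() + " " + request.uri());
        System.out.println(body);

        // With the app and Ollama running, send it like this:
        // HttpResponse<String> response = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // response.body() is the JSON shown above ("success", "generatedSql", ...).
    }
}
```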
Revenue Analysis:
- "Show me the top 5 products by revenue last quarter"
- "What is the total revenue by category this year?"
- "Show monthly revenue trends"
Customer Analytics:
- "Which customers bought the most products?"
- "What is the average order value by customer segment?"
- "Show customers from Premium segment"
Product Performance:
- "Best selling products in Electronics category"
- "Products with highest average selling price"
Regional & Sales Analysis:
- "Sales trends by region"
- "Which sales person has the highest revenue?"
Visit /queries for 150+ categorized examples.
```yaml
# docker-compose.yml
version: '3.8'
services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: bi_database
      POSTGRES_USER: bi_user
      POSTGRES_PASSWORD: your_password
    ports: ["5432:5432"]
    volumes: ["postgres_data:/var/lib/postgresql/data"]
  ollama:
    image: ollama/ollama:latest
    ports: ["11434:11434"]
    volumes: ["ollama_data:/root/.ollama"]
  app:
    build: .
    ports: ["9080:9080"]
    depends_on: [postgres, ollama]
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/bi_database
      - SPRING_AI_OLLAMA_BASE_URL=http://ollama:11434
volumes:
  postgres_data:
  ollama_data:
```
```bash
docker-compose up -d
docker exec ollama-container ollama pull llama3.2
```
```bash
# Run tests
./gradlew test

# Test the API endpoint
curl -X POST http://localhost:9080/api/query \
  -H "Content-Type: application/json" \
  -d '{"query": "List all customers"}'
```
Response Times:
- Simple queries: 200-400ms
- Complex aggregations: 500-1200ms
- Cached results: 15-50ms
Health Monitoring:
- /actuator/health: Application health
- /actuator/metrics: Performance metrics
- Built-in Ollama connectivity checks
Key Properties:

```properties
# Database
spring.datasource.url=jdbc:postgresql://localhost:5432/bi_database

# AI Model
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.model=llama3.2
spring.ai.ollama.options.temperature=0.1

# Caching
spring.cache.caffeine.spec=maximumSize=100,expireAfterWrite=1h
```
Docker Compose (recommended):

```bash
docker-compose up -d
docker exec ollama-container ollama pull llama3.2
```
Traditional Server:
- Ubuntu 20.04+ with Java 21
- PostgreSQL 17.4
- Ollama service with systemd
- Nginx reverse proxy (optional)
System Requirements:
- Development: 16GB RAM, 4 cores, 50GB storage
- Production: 32GB RAM, 8+ cores, 100GB SSD
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
MIT License - see LICENSE file for details.
Built with ❤️ using Spring Boot, Ollama, and Llama 3.2