A domain-based framework for comparing Retrieval-Augmented Generation (RAG) systems with LLM evaluation support.
RAGDiff includes structured documentation for both humans and AI assistants:
- GUIDE.md - Complete configuration guide explaining directory structure, YAML formats, and best practices
We also provide llms.txt files to help AI assistants understand the codebase:
- llms.txt - Quick project overview for AI assistants
- llms-full.txt - Comprehensive documentation covering architecture, workflow, and implementation details
These files follow the llmstxt.org specification and enable AI assistants (such as Claude, ChatGPT, or Cursor) to quickly understand how to use and contribute to RAGDiff. If you're using an AI assistant to work with this codebase, point it to these files first!
RAGDiff v2.0 introduces a domain-based architecture that organizes RAG system comparison around problem domains:
- Domains: Separate workspaces for different problem areas (e.g., tafsir, legal, medical)
- Systems: RAG system configurations that can be version-controlled
- Query Sets: Reusable collections of test queries
- Runs: Reproducible executions with config snapshots
- Comparisons: LLM-based evaluations with detailed analysis
This replaces the v1.x adapter-based approach with a more structured, reproducible workflow perfect for systematic RAG system development and A/B testing.
- Domain-Driven Organization: Separate workspaces for different problem domains
- Reproducible Runs: Config and query set snapshots for full reproducibility
- Multi-System Support: Compare Vectara, MongoDB, Agentset, and more
- LLM Evaluation: Subjective quality assessment via LiteLLM (GPT, Claude, Gemini)
- Rich CLI: Beautiful terminal output with progress bars and summary tables
- Multiple Output Formats: Table, JSON, and Markdown reports
- Comprehensive Testing: 78 tests ensuring reliability
- Parallel Execution: Fast query execution with configurable concurrency
RAGDiff is now available on PyPI and can be installed with either pip or uv:
# Using pip
pip install ragdiff
# Using uv (faster)
uv pip install ragdiff
Requirements:
- Python 3.9+
- uv - Fast Python package installer and resolver (optional but recommended)
To install uv:
# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or with Homebrew
brew install uv
# Or with pip
pip install uv
# Clone the repository
git clone https://github.com/ansari-project/ragdiff.git
cd ragdiff
# Option 1: Install with uv (recommended)
uv sync --all-extras # Install all dependencies including dev tools
uv pip install -e . # Install in editable mode
# Option 2: Install with pip
pip install -e . # Install in editable mode
# Copy environment template
cp .env.example .env
# Edit .env and add your API keys
The easiest way to get started is using the init command:
# Initialize a new domain with default template
ragdiff init my-domain
# Use minimal template (fewer example files)
ragdiff init my-domain --template minimal
# Use complete template (includes JSONL examples)
ragdiff init my-domain --template complete
# Specify custom domains directory
ragdiff init my-domain --domains-dir ./custom-domains
# Overwrite existing domain
ragdiff init my-domain --force
Alternatively, you can create the structure manually:
# Create domain directory structure
mkdir -p domains/my-domain/{providers,query-sets,runs,comparisons}
# Create domain config
cat > domains/my-domain/domain.yaml <<EOF
name: my-domain
description: My RAG comparison domain
evaluator:
  model: gpt-4
  temperature: 0.0
  prompt_template: |
    Compare these RAG results for relevance and accuracy.
    Query: {query}
    Results:
    {results}
    Provide winner and analysis.
EOF
# Create Vectara system config
cat > domains/my-domain/providers/vectara-default.yaml <<EOF
name: vectara-default
tool: vectara
config:
  api_key: \${VECTARA_API_KEY}
  corpus_id: \${VECTARA_CORPUS_ID}
  timeout: 30
EOF
# Create MongoDB system config
cat > domains/my-domain/providers/mongodb-local.yaml <<EOF
name: mongodb-local
tool: mongodb
config:
  connection_uri: \${MONGODB_URI}
  database: my_db
  collection: documents
  index_name: vector_index
  embedding_model: all-MiniLM-L6-v2
  timeout: 60
EOF
# Create test queries
cat > domains/my-domain/query-sets/test-queries.txt <<EOF
What is machine learning?
Explain neural networks
How does backpropagation work?
EOF
# Execute query sets against different providers
uv run ragdiff run my-domain vectara-default test-queries
uv run ragdiff run my-domain mongodb-local test-queries
# Compare the runs (use run IDs from output or check domains/my-domain/runs/)
uv run ragdiff compare my-domain <run-id-1> <run-id-2>
# Export comparison to different formats
uv run ragdiff compare my-domain <run-id-1> <run-id-2> --format markdown --output report.md
uv run ragdiff compare my-domain <run-id-1> <run-id-2> --format json --output comparison.json
RAGDiff generates comprehensive comparison reports in multiple formats. Here's what the output looks like:
- Table Format: Beautiful terminal output with colored statistics
- JSON Format: Machine-readable results for programmatic analysis (see the loading sketch below)
- Markdown Format: Human-readable reports with detailed evaluations
View Example Markdown Output - See a real comparison between FAISS providers with different embedding models.
The reports include:
- Provider win/loss/tie statistics
- Average quality scores
- Query-by-query evaluation details
- LLM reasoning for each comparison
- Performance metrics (latency, tokens used)
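Because the JSON format is machine-readable, comparisons can be post-processed programmatically. A minimal sketch, assuming a saved comparison file (the schema is whatever RAGDiff writes, so inspect a real file before relying on specific fields):

```python
import json
from pathlib import Path

# Load a saved comparison (the path and filename here are illustrative)
path = Path("domains/my-domain/comparisons") / "comparison-id.json"
data = json.loads(path.read_text())

# Inspect the top-level structure before depending on specific fields
print(list(data.keys()))
```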
RAGDiff v2.0 provides three main CLI commands:
Create a new domain with directory structure and templates:
# Basic usage
ragdiff init <domain>
# Examples
ragdiff init my-domain # Default template
ragdiff init my-domain --template minimal # Minimal template
ragdiff init my-domain --template complete # Complete template with examples
ragdiff init my-domain --force # Overwrite existing
# With custom domains directory
ragdiff init my-domain --domains-dir ./projects
What it does:
- Creates the domain directory structure (providers/, query-sets/, runs/, comparisons/)
- Generates domain.yaml with the LLM evaluator configuration
- Creates example system configurations (Vectara, MongoDB, OpenAPI)
- Adds sample query sets (basic-queries.txt, optionally JSONL)
- Creates .env.example if it doesn't exist
Templates:
- minimal: Basic structure with a simple evaluation prompt
- default: Includes example system configs and detailed evaluation criteria
- complete: Everything in default plus JSONL query examples (see the example below)
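For reference, JSONL query sets put one JSON object per line; a hypothetical example (the exact keys are defined by the files that init --template complete generates, so check those first):

```
{"query": "What is machine learning?", "id": "q1"}
{"query": "Explain neural networks", "id": "q2"}
```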
Execute a query set against a system:
# Basic usage
uv run ragdiff run <domain> <system> <query-set>
# Examples
uv run ragdiff run tafsir vectara-default test-queries
uv run ragdiff run tafsir mongodb-local test-queries --concurrency 5
# With options
uv run ragdiff run tafsir vectara-default test-queries \
--domains-dir ./domains \
--concurrency 10 \
--timeout 30 \
--quiet
What it does:
- Loads the provider config from domains/<domain>/providers/<provider>.yaml
- Loads queries from domains/<domain>/query-sets/<query-set>.txt
- Executes all queries with a progress bar
- Saves results to domains/<domain>/runs/<run-id>.json
- Displays a summary table
Options:
- --concurrency N: Max concurrent queries (default: 10; see the sketch after this list)
- --timeout N: Timeout per query in seconds (default: 30.0)
- --domains-dir PATH: Custom domains directory (default: ./domains)
- --quiet: Suppress progress output
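Parallel execution is bounded by --concurrency. Conceptually this is semaphore-bounded fan-out; a minimal sketch of the pattern, assuming an async search function per query (illustrative only, not RAGDiff's actual execution engine):

```python
import asyncio

async def run_queries(queries, search_fn, concurrency=10, timeout=30.0):
    """Run an async search_fn over all queries with bounded concurrency."""
    semaphore = asyncio.Semaphore(concurrency)

    async def run_one(query):
        async with semaphore:  # at most `concurrency` queries in flight
            return await asyncio.wait_for(search_fn(query), timeout)

    return await asyncio.gather(*(run_one(q) for q in queries))
```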
Compare multiple runs using LLM evaluation:
# Basic usage
uv run ragdiff compare <domain> <run-id-1> <run-id-2> [<run-id-3> ...]
# Examples
uv run ragdiff compare tafsir abc123 def456
uv run ragdiff compare tafsir abc123 def456 --format json --output comparison.json
# With options
uv run ragdiff compare tafsir abc123 def456 \
--model gpt-4 \
--temperature 0.0 \
--format markdown \
--output report.md
What it does:
- Loads runs from domains/<domain>/runs/
- Uses an LLM (via LiteLLM) for evaluation (see the sketch after the options list)
- Saves the comparison to domains/<domain>/comparisons/<comparison-id>.json
- Outputs in the specified format
Output formats:
- table: Rich console table (default)
- json: JSON output
- markdown: Markdown report
Options:
- --model MODEL: Override the LLM model
- --temperature N: Override the temperature
- --format FORMAT: Output format (table, json, markdown)
- --output PATH: Save to a file
- --domains-dir PATH: Custom domains directory
- --quiet: Suppress progress output
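Evaluation goes through LiteLLM, which exposes one completion API across GPT, Claude, and Gemini. A minimal sketch of what a judging call could look like, reusing the prompt shape from domain.yaml (the wiring is illustrative, not RAGDiff's actual comparison engine):

```python
import litellm

def judge(query: str, results: str, model: str = "gpt-4", temperature: float = 0.0) -> str:
    """Ask an LLM which system's results are better for a query."""
    prompt = (
        f"Compare these RAG results for the query: {query}\n"
        f"Results:\n{results}\n"
        "Determine which system provided better results and explain why."
    )
    response = litellm.completion(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```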
domains/
├── tafsir/ # Domain: Islamic tafsir
│ ├── domain.yaml # Domain config (evaluator settings)
│ ├── providers/ # Provider configurations
│ │ ├── vectara-default.yaml
│ │ ├── mongodb-local.yaml
│ │ └── agentset-prod.yaml
│ ├── query-sets/ # Query collections
│ │ ├── test-queries.txt
│ │ └── production-queries.txt
│ ├── runs/ # Run results (auto-created)
│ │ ├── <run-id-1>.json
│ │ └── <run-id-2>.json
│ └── comparisons/ # Comparison results (auto-created)
│ └── <comparison-id>.json
└── legal/ # Domain: Legal documents
├── domain.yaml
├── providers/
└── query-sets/
domains/<domain>/domain.yaml:
name: tafsir
description: Islamic tafsir RAG providers
evaluator:
  model: gpt-4          # LLM model for evaluation
  temperature: 0.0      # Temperature for evaluation
  prompt_template: |    # Evaluation prompt template
    Compare these RAG results for the query: {query}
    Results:
    {results}
    Determine which system provided better results and explain why.
domains/<domain>/providers/<provider>.yaml:
Vectara:
name: vectara-default
tool: vectara
config:
  api_key: ${VECTARA_API_KEY}
  corpus_id: ${VECTARA_CORPUS_ID}
  timeout: 30
MongoDB:
name: mongodb-local
tool: mongodb
config:
  connection_uri: ${MONGODB_URI}
  database: my_db
  collection: documents
  index_name: vector_index
  embedding_model: all-MiniLM-L6-v2  # sentence-transformers model
  timeout: 60
Agentset:
name: agentset-prod
tool: agentset
config:
  api_token: ${AGENTSET_API_TOKEN}
  namespace_id: ${AGENTSET_NAMESPACE_ID}
  rerank: true
  timeout: 60
domains/<domain>/query-sets/<name>.txt:
Simple text files with one query per line:
What is Islamic inheritance law?
Explain the concept of zakat
What are the five pillars of Islam?
Create a .env file with:
# Vectara
VECTARA_API_KEY=your_key
VECTARA_CORPUS_ID=your_corpus_id
# MongoDB Atlas
MONGODB_URI=mongodb+srv://username:[email protected]/
# Agentset
AGENTSET_API_TOKEN=your_token
AGENTSET_NAMESPACE_ID=your_namespace_id
# LLM Providers (for evaluation via LiteLLM)
OPENAI_API_KEY=your_key # For GPT models
ANTHROPIC_API_KEY=your_key # For Claude models
GEMINI_API_KEY=your_key     # For Gemini models
RAGDiff v2.0 supports the following RAG providers:
- Vectara: Enterprise RAG platform with built-in neural search
- MongoDB Atlas: Vector search with MongoDB Atlas and sentence-transformers
- Agentset: RAG-as-a-Service platform
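Provider configs reference these credentials with ${VAR} placeholders, as in the YAML examples above. Conceptually the expansion works like this sketch (illustrative only; RAGDiff's actual config loader may differ in details such as defaults and error handling):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment variable values."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"Missing environment variable: {name}")
        return os.environ[name]
    return re.sub(r"\$\{(\w+)\}", sub, value)

# e.g. expand_env("${VECTARA_API_KEY}") -> the value loaded from your .env
```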
1. Create a provider implementation in src/ragdiff/providers/:
from ..core.models_v2 import RetrievedChunk
from ..core.errors import ConfigError, RunError
from .abc import System
class MySystem(System):
    def __init__(self, config: dict):
        super().__init__(config)
        if "api_key" not in config:
            raise ConfigError("Missing required field: api_key")
        self.api_key = config["api_key"]

    def search(self, query: str, top_k: int = 5) -> list[RetrievedChunk]:
        # Implement search logic against your backend
        results = self._call_api(query, top_k)
        return [
            RetrievedChunk(
                content=r["text"],
                score=r["score"],
                metadata={"source": r["source"]},
            )
            for r in results
        ]

# Register the system
from .registry import register_tool
register_tool("mysystem", MySystem)
2. Import it in src/ragdiff/providers/__init__.py:
from . import mysystem  # noqa: F401
3. Add tests in tests/test_systems.py, as sketched below.
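A minimal test sketch for step 3; MySystem and its _call_api helper are the hypothetical ones from step 1, so adapt the import path and fixtures to your implementation:

```python
import pytest

from ragdiff.core.errors import ConfigError
from ragdiff.providers.mysystem import MySystem

def test_missing_api_key_raises():
    # Config validation should fail fast on missing credentials
    with pytest.raises(ConfigError):
        MySystem({})

def test_search_returns_chunks(monkeypatch):
    system = MySystem({"api_key": "test-key"})
    fake = [{"text": "hello", "score": 0.9, "source": "doc1"}]
    # Stub out the backend call so the test needs no network access
    monkeypatch.setattr(system, "_call_api", lambda query, top_k: fake)
    chunks = system.search("greeting", top_k=1)
    assert chunks[0].content == "hello"
    assert chunks[0].score == 0.9
```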
# Create two MongoDB variants with different embedding models
cat > domains/ml/providers/mongodb-minilm.yaml <<EOF
name: mongodb-minilm
tool: mongodb
config:
  connection_uri: \${MONGODB_URI}
  database: ml_docs
  collection: articles
  index_name: vector_index
  embedding_model: all-MiniLM-L6-v2
EOF
cat > domains/ml/providers/mongodb-mpnet.yaml <<EOF
name: mongodb-mpnet
tool: mongodb
config:
  connection_uri: \${MONGODB_URI}
  database: ml_docs
  collection: articles
  index_name: vector_index
  embedding_model: all-mpnet-base-v2
EOF
# Run both providers
uv run ragdiff run ml mongodb-minilm test-queries
uv run ragdiff run ml mongodb-mpnet test-queries
# Compare results
uv run ragdiff compare ml <run-id-1> <run-id-2> --format markdown --output ab-test-results.md
# 1. Create baseline run
uv run ragdiff run legal vectara-baseline prod-queries
# 2. Make improvements to your RAG system
# (update embeddings, indexing, etc.)
# 3. Create new run with improved system
uv run ragdiff run legal vectara-improved prod-queries
# 4. Compare baseline vs improved
uv run ragdiff compare legal <baseline-id> <improved-id> --format markdown --output improvements.md
# 5. If improved system is better, make it the new baseline
cp domains/legal/providers/vectara-improved.yaml domains/legal/providers/vectara-baseline.yaml
# Run same query set across all providers
uv run ragdiff run tafsir vectara-default test-queries
uv run ragdiff run tafsir mongodb-local test-queries
uv run ragdiff run tafsir agentset-prod test-queries
# Compare all three
uv run ragdiff compare tafsir <vectara-id> <mongodb-id> <agentset-id> \
--format markdown \
--output three-way-comparison.md
# Run all tests
uv run pytest tests/
# Run v2.0 tests only
uv run pytest tests/test_core_v2.py tests/test_systems.py tests/test_execution.py tests/test_cli_v2.py
# Run with coverage
uv run pytest tests/ --cov=src
# Run with verbose output
uv run pytest tests/ -v
The project uses pre-commit hooks:
- ruff for linting and formatting
- pytest for testing
- Whitespace and YAML validation
# Install pre-commit hooks
pre-commit install
# Run manually
pre-commit run --all-files
ragdiff/
├── src/ragdiff/ # Main package
│ ├── cli.py # Main CLI entry point
│ ├── cli_v2.py # v2.0 CLI implementation
│ ├── core/ # Core v2.0 models
│ │ ├── models_v2.py # Domain-based models
│ │ ├── loaders.py # File loading utilities
│ │ ├── storage.py # Persistence utilities
│ │ └── errors.py # Custom exceptions
│ ├── providers/ # Provider implementations
│ │ ├── abc.py # System abstract base class
│ │ ├── registry.py # System registration
│ │ ├── vectara.py # Vectara system
│ │ ├── mongodb.py # MongoDB system
│ │ └── agentset.py # Agentset system
│ ├── execution/ # Run execution engine
│ └── comparison/ # Comparison engine
├── tests/ # Test suite (78 v2.0 tests)
├── domains/ # Domain workspaces
└── pyproject.toml # Package configuration
RAGDiff v2.0 follows the SPIDER protocol for systematic development:
- Specification: Clear goals documented in codev/specs/
- Planning: Phased implementation (6 phases)
- Implementation: Clean domain-based architecture
- Defense: Comprehensive test coverage (78 v2.0 tests)
- Evaluation: Code reviews and validation
- Reflection: Architecture documentation
- Domain-Driven: Organize work around problem domains
- Reproducibility: Snapshot configs and queries in runs
- Fail Fast: Clear error messages, no silent failures
- Type Safety: Pydantic models with validation (sketched below)
- Testability: Every feature has tests
- Separation of Concerns: Clean module boundaries
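To illustrate the type-safety and fail-fast principles together: Pydantic models reject malformed data at construction time. A rough sketch with hypothetical field names (not the actual models_v2.py definitions):

```python
from pydantic import BaseModel, Field, ValidationError

class RetrievedChunkSketch(BaseModel):
    """Illustrative chunk model; hypothetical fields, not models_v2.py."""
    content: str
    score: float = Field(ge=0.0)  # reject negative scores at load time
    metadata: dict = Field(default_factory=dict)

try:
    RetrievedChunkSketch(content="...", score=-1.0)
except ValidationError as e:
    print(e)  # fails loudly instead of propagating bad data
```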
MIT License - see LICENSE file for details
Contributions welcome! Please:
- Follow existing code style (ruff formatting)
- Add tests for new features
- Update documentation
- Ensure all tests pass
Built following the SPIDER protocol for systematic development.
Supported RAG platforms: Vectara, MongoDB Atlas, Agentset