Changes from all commits
Commits
Show all changes
26 commits
525b2bf
Add Dispute Prediction Strategy
Jan 22, 2026
a35b758
Add Polymarket market scanner for dispute prediction strategy
Jan 26, 2026
bc57217
Add git workflow documentation to AGENTS.md
Jan 26, 2026
11a68e5
Merge pull request #1 from Nate0-1999/update-workflow-docs
Nate0-1999 Jan 26, 2026
d2e31b2
Merge upstream/main into sync branch
Feb 4, 2026
d971b7b
Add resolution strategy data contracts and schema tests
Feb 4, 2026
9aa61cc
Bootstrap Tier 1 contract module and tests
Feb 4, 2026
5381dcf
Bootstrap Tier 2 contract module and tests
Feb 4, 2026
af70f3d
Add signal engine primitives and sizing tests
Feb 4, 2026
3adbf96
Add evaluation metric helpers and tests
Feb 4, 2026
c02baa2
Merge pull request #2 from Nate0-1999/codex/resolution-data-contracts
Nate0-1999 Feb 4, 2026
74a214e
Merge pull request #3 from Nate0-1999/codex/resolution-tier1
Nate0-1999 Feb 4, 2026
7089d02
Merge pull request #4 from Nate0-1999/codex/resolution-tier2
Nate0-1999 Feb 4, 2026
68701b2
Merge pull request #5 from Nate0-1999/codex/resolution-signals
Nate0-1999 Feb 4, 2026
b49c4de
Merge pull request #6 from Nate0-1999/codex/resolution-eval
Nate0-1999 Feb 4, 2026
d7e6b06
Add collaboration protocol for parallel worktrees
Feb 5, 2026
93ddefa
Merge pull request #7 from Nate0-1999/codex/resolution-baseline
Nate0-1999 Feb 5, 2026
147829b
Add Polymarket arb v1 M1/M2 plumbing and lifecycle
Feb 5, 2026
369abdd
Wire Tier1/Tier2 persistence to analysis run contracts
Feb 5, 2026
f389a69
Normalize small Tier2 probability drift before persistence
Feb 5, 2026
54af230
Merge pull request #8 from Nate0-1999/codex/resolution-baseline
Nate0-1999 Feb 6, 2026
0f2b491
Persist normalization metadata and add stop-loss signal params
Feb 6, 2026
81c1886
Wire paired-leg arb execution path and signal contract
Feb 6, 2026
049d628
Add combinatorial dependency detector with verifier hook
Feb 6, 2026
34e6ee5
Merge pull request #9 from Nate0-1999/codex/resolution-baseline
Nate0-1999 Feb 6, 2026
92bee68
Merge pull request #10 from Nate0-1999/codex/arb-m1m2
Nate0-1999 Feb 6, 2026
16 changes: 16 additions & 0 deletions .gitignore
@@ -22,3 +22,19 @@ logs/
*.csv
*.json
!config/*.json

# Database
*.db
*.sqlite
*.sqlite3

# Local partner-specific files (each partner creates their own)
# Create a file named .gitignore.local with your personal ignores
.gitignore.local
local/

# Personal notes/scratch (convention: prefix with your initials)
# e.g., nate_notes.md, alex_scratch.py
*_notes.md
*_scratch.*
*_local.*
125 changes: 125 additions & 0 deletions AGENTS.md
@@ -0,0 +1,125 @@
# PR3DICT Agent Instructions

This document helps AI coding assistants understand and work with this codebase.

---

## Git Workflow (IMPORTANT - Follow This Process)

**Repository Structure:**
- **Your fork:** `https://github.com/Nate0-1999/pr3dict` (where you push)
- **Upstream:** `https://github.com/aerichmo/PR3DICT` (partner's repo, PRs go here)

**For Every Feature:**

```
1. CREATE BRANCH on your fork
git checkout main
git pull origin main
git checkout -b feature-name

2. WORK ON FEATURE
- Make changes
- Test thoroughly
- Commit with clear message

3. PUSH BRANCH to your fork
git push -u origin feature-name

4. CREATE PR to your fork's main
→ PR: Nate0-1999/pr3dict feature-name → Nate0-1999/pr3dict main
→ User manually reviews and approves in GitHub
→ WAIT for approval before proceeding

5. AFTER APPROVAL, PR to upstream
→ PR: Nate0-1999/pr3dict main → aerichmo/PR3DICT main
→ Partner reviews and merges
```

**Never push directly to main. Always use branches and PRs.**

---

## Project Overview

PR3DICT is a multi-strategy prediction market trading system. Current focus: **Dispute Prediction** on Polymarket.

**Important Principles:**
1. This is a multi-strategy repo - only modify dispute strategy related code
2. Arbitrage strategy and trading engine are separate - don't touch them
3. Only build features that have been discussed and verified
4. All credentials must be gitignored

## Architecture

```
src/
├── data/ # Market data ingestion (DISPUTE FOCUS)
│ ├── scanner.py # Fetches markets from Polymarket Gamma API
│ └── database.py # SQLite storage for markets & analyses
├── strategies/ # Trading strategies (arbitrage is separate)
├── platforms/ # API wrappers (Polymarket, Kalshi)
├── engine/ # Core trading engine (not dispute-related)
└── risk/ # Position sizing (not dispute-related)
```

## What's Been Built & Verified (Dispute Strategy)

### Market Scanner (`src/data/scanner.py`)
- Polls Polymarket Gamma API for markets in target liquidity range
- Stores markets in SQLite for tracking
- **Tested and working** - no auth needed for read-only access
- Run: `python -m src.data.scanner --show-unanalyzed`

### Database (`src/data/database.py`)
- SQLite schema for `markets` and `analyses` tables
- Tracks which markets have been analyzed
- **Tested and working**

### Strategy Documentation
- `docs/DISPUTE_PREDICTION_STRATEGY.md` - Strategy overview
- `docs/APPENDIX_KELLY_CRITERION.md` - Position sizing theory
- `docs/WORKTREE_COLLAB_PROTOCOL.md` - Parallel branch/worktree operating rules

## What's NOT Built Yet (Dispute Strategy)
- LLM analysis pipeline (Tier 1 screening, Tier 2 deep analysis)
- Dispute probability scoring
- Trade execution for dispute strategy
- RAG/Vector database for learning

## Setup & Commands

```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run market scanner
python -m src.data.scanner --show-unanalyzed

# Query database
sqlite3 data/markets.db "SELECT question, liquidity FROM markets ORDER BY liquidity DESC LIMIT 10;"
```

## Polymarket API Notes

- **Gamma API** (read-only, no auth): `https://gamma-api.polymarket.com`
- Requires `User-Agent` header or returns 403
- Key fields: `question`, `description`, `resolutionSource`, `umaResolutionStatus`, `liquidityNum`
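
A minimal sketch of hitting the Gamma API with the required `User-Agent` header. The `/markets` path, query parameters, and agent string are assumptions for illustration — check `src/data/scanner.py` for the exact endpoint and params actually used.

```python
import requests

GAMMA_BASE = "https://gamma-api.polymarket.com"

def fetch_markets(limit: int = 5) -> list:
    """Fetch markets from the Gamma API (read-only, no auth needed).

    A User-Agent header is required; bare requests come back 403.
    Path and params here are illustrative assumptions.
    """
    resp = requests.get(
        f"{GAMMA_BASE}/markets",
        params={"limit": limit, "closed": "false"},
        headers={"User-Agent": "pr3dict-scanner/0.1"},  # required, else 403
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for market in fetch_markets():
        print(market.get("question"))
```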

## Credentials

All credentials are gitignored. To set up:
```bash
cp config/example.env config/.env
# Edit with your credentials
```

## Development Guidelines

- **Don't over-build** - only implement features that have been discussed
- **Test before committing** - verify new code works
- **Update this doc** - after adding verified features
119 changes: 80 additions & 39 deletions README.md
@@ -1,30 +1,69 @@
# PR3DICT

**Automated Prediction Market Trading System**
**Multi-Strategy Prediction Market Trading System**

---

## Executive Summary
## Quick Start

PR3DICT applies the battle-tested ST0CK methodology to prediction markets. It leverages the unified trading engine architecture, systematic risk management, and multi-platform API integration to exploit inefficiencies in this rapidly growing $200B+ industry.
```bash
# 1. Clone the repo
git clone https://github.com/aerichmo/PR3DICT.git
cd PR3DICT

### Target Platforms
- **Kalshi** — CFTC-regulated, REST/WebSocket/FIX APIs, Market Maker Program
- **Polymarket** — Blockchain-native (Polygon/USDC), high liquidity on political/crypto events
# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate

### Core Strategy Edges
1. **Arbitrage** — Binary complement, cross-platform, latency
2. **Market Making** — Bid-ask spread capture, inventory management
3. **Behavioral** — Longshot bias exploitation, overreaction reversion
4. **Informational** — AI-driven probability forecasting
# 3. Install dependencies
pip install -r requirements.txt

### Architecture (from ST0CK)
| Component | Application |
|-----------|-------------|
| Unified Engine | Strategy pattern for parallel signal testing |
| Redis Cache | Multi-TTL for orderbooks, probability trends, metadata |
| Risk Management | Kelly Criterion + Portfolio Heat + Daily Loss Limits |
| API Layer | Unified wrappers for cross-platform operations |
# 4. Configure credentials (for trading - not needed for scanning)
cp config/example.env config/.env
# Edit .env with your API keys
```

---

## Strategies

| Strategy | Platform | Status |
|----------|----------|--------|
| **Arbitrage** | Kalshi, Polymarket | 🔵 Implemented |
| **Dispute Prediction** | Polymarket | 🔨 In Development |

### Arbitrage Strategy
Exploits price inefficiencies:
- Binary complement (YES + NO < $1.00)
- Cross-platform differentials
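
The binary-complement check can be sketched as below. The function name and `fees` parameter are illustrative, not the repo's actual API; a real fill would also need to account for slippage and gas.

```python
def complement_arb_edge(yes_price: float, no_price: float, fees: float = 0.0) -> float:
    """Profit per $1.00 of payout from buying both legs, when YES + NO + fees < $1.00."""
    cost = yes_price + no_price + fees
    return max(0.0, 1.0 - cost)  # exactly one leg pays $1.00 at resolution

# YES at $0.55 and NO at $0.42 lock in ~$0.03 per contract pair
print(f"{complement_arb_edge(0.55, 0.42):.2f}")  # 0.03
```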

### Dispute Prediction Strategy (In Development)
Exploits Polymarket's resolution mechanism:
- Identify markets likely to be disputed
- Position before resolution chaos
- See `docs/DISPUTE_PREDICTION_STRATEGY.md`

---

## Dispute Strategy: Current Progress

```bash
# Scan Polymarket for markets (no API key needed)
python -m src.data.scanner --show-unanalyzed

# View stored markets
sqlite3 data/markets.db "SELECT question, liquidity FROM markets ORDER BY liquidity DESC LIMIT 10;"
```

**What's working:**
- [x] Market scanner (Polymarket Gamma API)
- [x] SQLite database for tracking markets
- [x] Strategy documentation

**What's next:**
- [ ] LLM analysis pipeline
- [ ] Dispute probability scoring
- [ ] Trade execution

---

@@ -33,39 +72,41 @@ PR3DICT applies the battle-tested ST0CK methodology to prediction markets. It le
```
PR3DICT/
├── src/
│ ├── data/ # Market scanner & database
│ ├── strategies/ # Trading strategies
│ ├── platforms/ # Kalshi, Polymarket APIs
│ ├── engine/ # Core trading engine
│ ├── strategies/ # Arbitrage, market-making, behavioral
│ ├── platforms/ # Kalshi, Polymarket API wrappers
│ ├── data/ # Market data ingestion & caching
│ └── risk/ # Position sizing, kill-switches
├── config/ # Platform credentials, strategy params
├── tests/ # Unit + integration tests
│ └── risk/ # Position sizing
├── data/ # SQLite database (gitignored)
├── config/ # Environment config
└── docs/ # Strategy documentation
```

---

## Quick Start
## Documentation

```bash
# Clone and install
git clone <repo-url>
cd PR3DICT
pip install -r requirements.txt
| Document | Description |
|----------|-------------|
| `docs/DISPUTE_PREDICTION_STRATEGY.md` | Dispute strategy overview |
| `docs/APPENDIX_KELLY_CRITERION.md` | Position sizing theory |
| `AGENTS.md` | AI assistant instructions |

# Configure credentials
cp config/example.env config/.env
# Edit .env with Kalshi/Polymarket API keys
---

# Run (paper mode)
python -m src.engine.main --mode paper
```
## Collaboration

---
Multi-contributor repo. Each partner can:
- Create `.gitignore.local` for personal ignores
- Use `local/` directory for scratch files
- Prefix personal files with initials (e.g., `nate_notes.md`)

## Status
### Security: Credentials Are Local Only

🚧 **Phase 1: Foundation** — Building core engine and platform integrations.
**Never commit credentials.** These are gitignored:
- `config/.env` — API keys, wallet private keys
- `*.env` — All environment files
- `data/*.db` — Local database

---

Empty file added data/.gitkeep
Empty file.
85 changes: 85 additions & 0 deletions docs/APPENDIX_KELLY_CRITERION.md
@@ -0,0 +1,85 @@
# Appendix: Kelly Criterion

**Position Sizing From First Principles**

---

## The Problem Kelly Was Solving

In 1956, John Kelly was a physicist at Bell Labs working on information theory. He was interested in a practical question:

> **If you have an edge in a repeated bet, how much should you wager each time to maximize long-term wealth?**

The naive answer is "bet everything" — if you have an edge, more is better. But this leads to ruin: one loss and you're wiped out.

The opposite extreme — betting a tiny fixed amount — is safe but leaves money on the table.

Kelly wanted the optimal middle ground: **the bet size that maximizes the expected growth rate of your bankroll over many repeated bets.**

---

## The Insight

Kelly realized this was an information theory problem. He framed it as: you have a noisy channel (your edge) transmitting information (the correct bet). How do you maximize the information rate?

His key insight: **you should bet a fraction of your bankroll proportional to your edge.**

---

## The Formula

For a simple bet with probability `p` of winning and net odds `b` (you win `b` dollars for every dollar risked):

```
f* = (bp - q) / b

where:
f* = fraction of bankroll to bet
p = probability of winning
q = probability of losing (1 - p)
b = decimal odds - 1 (what you win per dollar risked)
```

**Example:**
- You have 60% chance to win (p = 0.6)
- Odds are even money (b = 1, you win $1 for every $1 risked)

```
f* = (1 × 0.6 - 0.4) / 1 = 0.2
```

You should bet 20% of your bankroll.
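
The formula and worked example above translate directly to code:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to bet (full Kelly).

    p -- probability of winning
    b -- net odds: dollars won per dollar risked
    """
    q = 1.0 - p              # probability of losing
    return (b * p - q) / b

# Worked example from above: 60% win probability at even money
f = kelly_fraction(0.6, 1.0)
print(f"Bet {f:.0%} of bankroll")  # Bet 20% of bankroll
```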

---

## Why It Works

Kelly sizing has a special property: it maximizes the **geometric mean** of returns (equivalently, the expected log of wealth). This means:

1. **You never go broke** — you always bet a fraction of your bankroll, never all of it
2. **You grow faster than any other strategy** — in the long run, Kelly almost surely outgrows any other fixed-fraction approach
3. **Bet size scales with edge** — bigger edge means a bigger bet; no edge means no bet

---

## Why We Use Fractional Kelly

Full Kelly is aggressive. In practice, we use a fraction (like 25%) because:
- Our probability estimates have uncertainty
- Drawdowns with full Kelly can be 50%+ (psychologically brutal)
- Model errors compound; fractional Kelly provides a buffer

For dispute trading specifically, we also discount for:
- Confidence in our dispute prediction
- Probability of INVALID resolution (both sides lose)
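
A sketch of fractional Kelly with the discounts described above. The 25% default, parameter names, and the way the discounts are applied are all illustrative assumptions, not the repo's actual sizing code.

```python
def fractional_kelly(p_win: float, b: float, fraction: float = 0.25,
                     confidence: float = 1.0, p_invalid: float = 0.0) -> float:
    """Fractional Kelly with dispute-trading discounts (illustrative).

    fraction   -- Kelly multiplier (e.g. 0.25 for quarter-Kelly)
    confidence -- discount for uncertainty in the dispute prediction
    p_invalid  -- chance the market resolves INVALID (both sides lose)
    """
    p_eff = p_win * confidence * (1.0 - p_invalid)  # discounted win probability
    q = 1.0 - p_eff
    full = (b * p_eff - q) / b                      # full-Kelly fraction
    return max(0.0, fraction * full)                # never size a negative edge

# Quarter-Kelly on the even-money 60% example: 0.25 x 0.2 = 5% of bankroll
print(round(fractional_kelly(0.6, 1.0), 4))  # 0.05
```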

---

## Further Reading

- Kelly, J.L. (1956). "A New Interpretation of Information Rate" — The original paper
- Thorp, E.O. "The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market" — Practical applications

---

*PR3DICT Documentation*