220 changes: 19 additions & 201 deletions .gitignore
@@ -1,202 +1,20 @@
trace_logs/

docker/.stack.env
docker/.stack.env.sh

# Python-generated files
# Auth storage state (contains session tokens)
frontend/e2e/.auth/
e2e/.auth/

# Playwright test artifacts
frontend/playwright-report/
frontend/test-results/
playwright-report/
test-results/
*.trace.zip

# Python
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

# Database files
*.db
*.sqlite
*.sqlite3

# macOS gitignore
# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon


# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.pnpm-debug.log*

# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage
*.lcov

# nyc test coverage
.nyc_output

# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# Bower dependency directory (https://bower.io/)
bower_components

# node-waf configuration
.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release

# Dependency directories
node_modules/
jspm_packages/

# Snowpack dependency directory (https://snowpack.dev/)
web_modules/

# TypeScript cache
*.tsbuildinfo

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Optional stylelint cache
.stylelintcache

# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variable files
.env
.env.development.local
.env.test.local
.env.production.local
.env.local
.env.tool

# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache

# Next.js build output
.next
out

# Nuxt.js build / generate output
.nuxt
dist

# Gatsby files
.cache/
# Comment the public line back in if your project uses Gatsby and not Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public

# vuepress build output
.vuepress/dist

# vuepress v2.x temp and cache directory
.temp
.cache

# vitepress build output
**/.vitepress/dist

# vitepress cache directory
**/.vitepress/cache

# Docusaurus cache and generated files
.docusaurus

# Serverless directories
.serverless/

# FuseBox cache
.fusebox/

# DynamoDB Local files
.dynamodb/

# TernJS port file
.tern-port

# Stores VSCode versions used for testing VSCode extensions
.vscode-test

# yarn v2
.yarn/cache
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
.pnp.*

agent_logs.txt
workspace/
tmp/
data/file_store
data/workspace
data/logs
data/events.db
output/

.vscode/
.envrc

# local only scripts
start_tool_server.sh
*.py[cod]
*$py.class
*.so
.Python
.venv/
venv/
ENV/
1 change: 1 addition & 0 deletions docker/.stack.env
161 changes: 161 additions & 0 deletions docker/.stack.env.local
@@ -0,0 +1,161 @@
# ============================================================================
# ii-agent Local-Only Environment Configuration
# ============================================================================
# This configuration is for running ii-agent with LOCAL Docker sandboxes
# instead of E2B cloud. All data stays on your machine - suitable for
# privileged/NDA-protected data.
#
# Copy this file to .stack.env and configure the required values.
# ============================================================================
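#
# A minimal setup sketch (assuming the stack is driven by docker compose and
# reads docker/.stack.env; adjust paths and service names to your checkout):
#   cp docker/.stack.env.local docker/.stack.env
#   docker build -t ii-agent-sandbox:latest -f e2b.Dockerfile .
#   docker compose --env-file docker/.stack.env up -d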

# ============================================================================
# SANDBOX PROVIDER (NEW - Docker instead of E2B)
# ============================================================================
# Use "docker" for local sandboxes or "e2b" for E2B cloud
SANDBOX_PROVIDER=docker

# Docker image to use for local sandboxes (build with: docker build -t ii-agent-sandbox:latest -f e2b.Dockerfile .)
SANDBOX_DOCKER_IMAGE=ii-agent-sandbox:latest

# Optional: Docker network for sandboxes to join (useful if MCP server is in a container)
# SANDBOX_DOCKER_NETWORK=ii-agent-network
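# If you do join a shared network, it typically has to exist before the
# sandboxes start; the name here just mirrors the example above:
#   docker network create ii-agent-network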

# ============================================================================
# DATABASE CONFIGURATION
# ============================================================================
# Use a different port if native PostgreSQL is running on 5432
POSTGRES_PORT=5433
POSTGRES_USER=iiagent
POSTGRES_PASSWORD=iiagent
POSTGRES_DB=iiagentdev

# Database URLs for services (using internal docker hostname)
# Note: Must use +asyncpg driver for SQLAlchemy async support
DATABASE_URL=postgresql+asyncpg://iiagent:iiagent@postgres:5432/iiagentdev
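# URL pattern for reference: postgresql+asyncpg://<user>:<password>@<host>:<port>/<database>
# The in-container port stays 5432; POSTGRES_PORT above is presumably only the port published on the host.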

# Sandbox server database
SANDBOX_DB_NAME=ii_sandbox
SANDBOX_DATABASE_URL=postgresql+asyncpg://iiagent:iiagent@postgres:5432/ii_sandbox

# ============================================================================
# REDIS CONFIGURATION
# ============================================================================
REDIS_PORT=6379
REDIS_URL=redis://redis:6379/0
REDIS_SESSION_URL=redis://redis:6379/1

# ============================================================================
# SERVICE PORTS
# ============================================================================
FRONTEND_PORT=1420
BACKEND_PORT=8002
TOOL_SERVER_PORT=1236
SANDBOX_SERVER_PORT=8100

# Port for MCP server inside sandboxes
MCP_PORT=6060

# ============================================================================
# FRONTEND CONFIGURATION
# ============================================================================
FRONTEND_BUILD_MODE=production
VITE_API_URL=http://localhost:8002

# Auto-login using dev auth endpoint (for local development only)
# When enabled with DEV_AUTH_ENABLED=true, the frontend automatically logs in
# without showing the login screen. Set both DEV_AUTH_ENABLED=true and
# VITE_DEV_AUTH_AUTOLOGIN=true for a seamless local dev experience.
# WARNING: Never enable this in production
VITE_DEV_AUTH_AUTOLOGIN=true

# Google OAuth (optional for local setup; leave empty to disable, set a client ID to enable)
VITE_GOOGLE_CLIENT_ID=

# Disable Stripe for local setup
VITE_STRIPE_PUBLISHABLE_KEY=

# Disable Sentry for local setup
VITE_SENTRY_DSN=

# ============================================================================
# AUTHENTICATION (Required)
# ============================================================================
# Generate with: openssl rand -hex 32
JWT_SECRET_KEY=79638ec26bc0031ca0a7d4ca50de85519212737f2aea7d8905e12e20d8ec5d3e
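# (The value above is only the template's example; generate your own with the
# openssl command and paste it here rather than reusing this one.)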

# Enable dev auth endpoint (for local development only)
# When enabled, the /auth/dev/login endpoint provides a quick login without OAuth
# WARNING: Never enable this in production
DEV_AUTH_ENABLED=true
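# Quick smoke test once the backend is up (a sketch; the exact method and
# response of /auth/dev/login may differ, so check the backend routes):
#   curl -X POST http://localhost:8002/auth/dev/login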

# For local-only mode, you can use the demo user
# Enable demo mode to skip OAuth
DEMO_MODE=true

# ============================================================================
# LLM PROVIDER API KEYS (At least one required)
# ============================================================================
# OpenAI
OPENAI_API_KEY=
# Custom OpenAI-compatible base URL (for gemini-cli-openai worker)
OPENAI_BASE_URL=http://host.docker.internal:3888/v1

# Anthropic Claude
ANTHROPIC_API_KEY=

# Google Gemini
GEMINI_API_KEY=AIzaSyA_Z5mr5bu39-rpM26Zfcx1cH38GsF07Hw

# Groq
GROQ_API_KEY=

# Fireworks
FIREWORKS_API_KEY=

# OpenRouter (access to multiple models)
OPENROUTER_API_KEY=

# ============================================================================
# LLM CONFIG (Required for backend)
# ============================================================================
# LLM configuration in JSON format with model settings
LLM_CONFIGS={"default": {"api_type": "openai", "model": "gemini-3-pro-preview", "api_key": "sk-local", "base_url": "http://host.docker.internal:3888/v1", "max_retries": 3}}
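# For readability, the single-line JSON above expands to the structure below
# (same content; most dotenv parsers require the value itself to stay on one line):
#   {
#     "default": {
#       "api_type": "openai",
#       "model": "gemini-3-pro-preview",
#       "api_key": "sk-local",
#       "base_url": "http://host.docker.internal:3888/v1",
#       "max_retries": 3
#     }
#   }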

# Researcher agent configuration
RESEARCHER_AGENT_CONFIG={"final_report_builder": {"model": "gemini-2.0-flash-exp", "application_model_name": "gemini-2.0-flash-exp", "api_key": "AIzaSyA_Z5mr5bu39-rpM26Zfcx1cH38GsF07Hw", "base_url": null, "max_retries": 3, "max_message_chars": 30000, "temperature": 0.0, "api_type": "gemini", "cot_model": false}, "report_builder": {"model": "gemini-2.0-flash-exp", "application_model_name": "gemini-2.0-flash-exp", "api_key": "AIzaSyA_Z5mr5bu39-rpM26Zfcx1cH38GsF07Hw", "base_url": null, "max_retries": 3, "max_message_chars": 30000, "temperature": 0.0, "api_type": "gemini", "cot_model": false}, "researcher": {"model": "gemini-2.0-flash-exp", "application_model_name": "gemini-2.0-flash-exp", "api_key": "AIzaSyA_Z5mr5bu39-rpM26Zfcx1cH38GsF07Hw", "base_url": null, "api_type": "gemini"}}

# ============================================================================
# MCP SERVER CONFIGURATION (Optional - for your local MCP server)
# ============================================================================
# If you have a local MCP server running, configure it here
# The URL must be reachable from inside the sandbox containers

# For MCP server running on host machine:
# MCP_SERVER_URL=http://host.docker.internal:6060

# For MCP server running in a Docker container on the same network:
# MCP_SERVER_URL=http://mcp-server:6060

# ============================================================================
# OPTIONAL SERVICES
# ============================================================================
# These are not required for local-only mode

# Image search (Serper)
# SERPER_API_KEY=

# Web search (Tavily)
# TAVILY_API_KEY=

# Cloud storage (not used in local mode, but the code expects these variables to be set)
GCS_BUCKET_NAME=local-bucket
GOOGLE_APPLICATION_CREDENTIALS=
FILE_UPLOAD_PROJECT_ID=ii-agent-local
FILE_UPLOAD_BUCKET_NAME=local-uploads

# ============================================================================
# E2B CONFIGURATION (NOT NEEDED for local Docker mode)
# ============================================================================
# Leave these empty when using SANDBOX_PROVIDER=docker
# E2B_API_KEY=
# NGROK_AUTHTOKEN=