# ---------------------------------------------------------------------------
# Flashlight LLM configuration
#
# Flashlight speaks the OpenAI Chat Completions protocol, so it works with
# any OpenAI-compatible endpoint: OpenAI, OpenRouter, vLLM, LM Studio,
# Ollama, Together, Groq, etc.
# ---------------------------------------------------------------------------

# Required: bearer token for the target endpoint.
OPENAI_API_KEY=your_api_key_here

# Optional: OpenAI-compatible base URL. Defaults to https://api.openai.com/v1.
# Examples:
#   OpenAI        https://api.openai.com/v1
#   OpenRouter    https://openrouter.ai/api/v1
#   vLLM (local)  http://localhost:8000/v1
#   LM Studio     http://localhost:1234/v1
#   Ollama        http://localhost:11434/v1
# OPENAI_BASE_URL=https://api.openai.com/v1

# Optional: model identifier. Defaults to gpt-4o-mini.
# Must be a model served by whichever endpoint you chose above.
# Examples: gpt-4o, gpt-4o-mini, anthropic/claude-sonnet-4 (OpenRouter),
#   meta-llama/Llama-3.1-70B-Instruct (vLLM/Together), ...
# OPENAI_MODEL=gpt-4o-mini
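
# Example: a fully local setup via Ollama's OpenAI-compatible endpoint.
# This is a sketch, not a project-tested default: it assumes Ollama is running
# locally and that you have pulled the model (`ollama pull llama3.1`).
# Ollama ignores the API key, but the client still needs a non-empty value.
# OPENAI_API_KEY=ollama
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_MODEL=llama3.1
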
# ---------------------------------------------------------------------------
# Logging (optional)
# ---------------------------------------------------------------------------
# Enable verbose logging to see detailed interactions in the terminal.
# AGENT_VERBOSE=true

# Enable debug mode for full trace-level output.
# AGENT_DEBUG=true

# When verbose/debug mode is enabled, you'll see:
# - LLM API requests and responses
# - Subagent spawning and lifecycle
# - Tool calls with parameters
# - Tool results and success/failure status
# - Agent context and model information
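
# Example: enable verbose logging for a single run from the shell instead of
# editing this file. The `flashlight` command name below is a guess used for
# illustration; substitute however you normally launch the agent.
# AGENT_VERBOSE=true flashlight "summarize the open issues"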