4 changes: 2 additions & 2 deletions docs/docs/installation/github.md
```diff
@@ -409,8 +409,8 @@ If you encounter rate limiting:
     env:
       OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
       GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-      # Add fallback models for better reliability
-      config.fallback_models: '["gpt-4o", "gpt-3.5-turbo"]'
+      # Add a fallback model for better reliability
+      config.fallback_models: '["gpt-5.4-mini"]'
       # Increase timeout for slower models
       config.ai_timeout: "300"
       github_action_config.auto_review: "true"
```
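Taken together, the changed settings slot into a workflow file roughly as follows. This is a sketch only: the action reference `Codium-ai/pr-agent@main`, the trigger events, and the job layout are assumptions not shown in this diff — only the `env` entries come from the change above.

```yaml
# Hedged sketch of a full workflow using the updated env settings.
# Action path and triggers are assumptions; the env block mirrors the diff.
name: PR Agent
on:
  pull_request:
    types: [opened, reopened, ready_for_review]
jobs:
  pr_agent:
    runs-on: ubuntu-latest
    steps:
      - uses: Codium-ai/pr-agent@main   # assumed action reference
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # Add a fallback model for better reliability
          config.fallback_models: '["gpt-5.4-mini"]'
          # Increase timeout for slower models
          config.ai_timeout: "300"
          github_action_config.auto_review: "true"
```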
2 changes: 1 addition & 1 deletion docs/docs/installation/locally.md
```diff
@@ -1,6 +1,6 @@
 To run PR-Agent locally, you first need to acquire two keys:
 
-1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](../usage-guide/changing_a_model.md), if you prefer).
+1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-5.4 and gpt-5.4-mini (or a key for other [language models](../usage-guide/changing_a_model.md), if you prefer).
 2. A personal access token from your Git platform (GitHub, GitLab, BitBucket, Gitea) with repo scope. GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}
 
 ## Using Docker image
```
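With the two keys in hand, the Docker route mentioned in the changed page looks roughly like this. A hedged sketch: the image name `codiumai/pr-agent` and the dotted env-var spelling follow the public PR-Agent docs, and the placeholder PR URL must be replaced with a real one.

```shell
# Hedged sketch of running a single review via the Docker image.
# Image name and env-var names are taken from the public PR-Agent docs;
# the PR URL is a placeholder to fill in.
docker run --rm -it \
  -e OPENAI.KEY="<your OpenAI key>" \
  -e GITHUB.USER_TOKEN="<your GitHub token>" \
  codiumai/pr-agent:latest \
  --pr_url "https://github.com/<owner>/<repo>/pull/<number>" review
```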
2 changes: 1 addition & 1 deletion docs/docs/usage-guide/automations_and_usage.md
```diff
@@ -222,7 +222,7 @@ For detailed step-by-step examples of configuring different models (Gemini, Clau
 
 **Common Model Configuration Patterns:**
 
-- **OpenAI**: Set `config.model: "gpt-4o"` and `OPENAI_KEY`
+- **OpenAI**: Set `config.model: "gpt-5.4"` and `OPENAI_KEY`
 - **Gemini**: Set `config.model: "gemini/gemini-1.5-flash"` and `GOOGLE_AI_STUDIO.GEMINI_API_KEY` (no `OPENAI_KEY` needed)
 - **Claude**: Set `config.model: "anthropic/claude-3-opus-20240229"` and `ANTHROPIC.KEY` (no `OPENAI_KEY` needed)
 - **Azure OpenAI**: Set `OPENAI.API_TYPE: "azure"`, `OPENAI.API_BASE`, and `OPENAI.DEPLOYMENT_ID`
```
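Outside of a GitHub Action, the same model choices map onto `configuration.toml` keys. A minimal sketch, mirroring the patterns listed in the changed page; the commented-out alternatives are illustrative, not defaults:

```toml
# Sketch of equivalent local settings; pick exactly one model line.
[config]
model = "gpt-5.4"                               # OpenAI (requires OPENAI_KEY)
# model = "gemini/gemini-1.5-flash"             # Gemini (GOOGLE_AI_STUDIO.GEMINI_API_KEY)
# model = "anthropic/claude-3-opus-20240229"    # Claude (ANTHROPIC.KEY)
```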
2 changes: 1 addition & 1 deletion docs/docs/usage-guide/changing_a_model.md
````diff
@@ -383,7 +383,7 @@ To bypass chat templates and temperature controls, set `config.custom_reasoning_
 reasoning_effort = "medium" # "low", "medium", "high"
 ```
 
-With the OpenAI models that support reasoning effort (eg: o4-mini), you can specify its reasoning effort via `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
+With OpenAI models that support reasoning effort (e.g. gpt-5.4-mini), you can specify the reasoning effort via the `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
 
 ### Anthropic models
 
````
6 changes: 3 additions & 3 deletions pr_agent/settings/configuration.toml
```diff
@@ -5,9 +5,9 @@
 [config]
 # models
 model="gpt-5.4-2026-03-05"
-fallback_models=["o4-mini"]
-#model_reasoning="o4-mini" # dedicated reasoning model for self-reflection
-#model_weak="gpt-4o" # optional, a weaker model to use for some easier tasks
+fallback_models=["gpt-5.4-mini"]
+#model_reasoning="gpt-5.4-mini" # dedicated reasoning model for self-reflection
+#model_weak="gpt-5.4-nano" # optional, a weaker model to use for some easier tasks
 # CLI
 git_provider="github"
 publish_output=true
```
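These package defaults can also be overridden per repository rather than by editing the shipped file. A hedged sketch of a repo-level `.pr_agent.toml` (the override filename follows PR-Agent's documented local-configuration mechanism; the values simply restate the new defaults):

```toml
# Sketch of a repo-level override file; keys mirror the updated defaults.
[config]
model = "gpt-5.4-2026-03-05"
fallback_models = ["gpt-5.4-mini"]
```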