Changes from 5 commits
12 changes: 0 additions & 12 deletions .github/workflows/ci.yml
@@ -163,16 +163,10 @@ jobs:
enable-cache: true
cache-suffix: ${{ matrix.install.name }}

- uses: denoland/setup-deno@v2
with:
deno-version: v2.5.x

- run: mkdir .coverage

- run: uv sync --only-dev

- run: uv run mcp-run-python example --deps=numpy

- name: cache HuggingFace models
uses: actions/cache@v4
with:
@@ -214,16 +208,10 @@ jobs:
enable-cache: true
cache-suffix: lowest-versions

- uses: denoland/setup-deno@v2
with:
deno-version: v2.5.x

- run: mkdir .coverage

- run: uv sync --group dev

- run: uv run mcp-run-python example --deps=numpy

- name: cache HuggingFace models
uses: actions/cache@v4
with:
4 changes: 0 additions & 4 deletions .github/workflows/claude.yml
@@ -39,10 +39,6 @@ jobs:
enable-cache: true
cache-suffix: claude-code

- uses: denoland/setup-deno@v2
with:
deno-version: v2.5.x

- run: uv tool install pre-commit

- run: make install
11 changes: 1 addition & 10 deletions docs/contributing.md
@@ -9,26 +9,17 @@ git clone git@github.com:<your username>/pydantic-ai.git
cd pydantic-ai
```

Install `uv` (version 0.4.30 or later), `pre-commit` and `deno`:
Install `uv` (version 0.4.30 or later) and `pre-commit`:

- [`uv` install docs](https://docs.astral.sh/uv/getting-started/installation/)
- [`pre-commit` install docs](https://pre-commit.com/#install)
- [`deno` install docs](https://docs.deno.com/runtime/getting_started/installation/)

To install `pre-commit` you can run the following command:

```bash
uv tool install pre-commit
```

For `deno`, you can run the following, or check
[their documentation](https://docs.deno.com/runtime/getting_started/installation/) for alternative
installation methods:

```bash
curl -fsSL https://deno.land/install.sh | sh
```

Install `pydantic-ai`, all dependencies, and pre-commit hooks:

```bash
18 changes: 7 additions & 11 deletions docs/mcp/client.md
@@ -134,26 +134,22 @@ _(This example is complete, it can be run "as is" — you'll need to add `asynci

MCP also offers [stdio transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.

In this example [mcp-run-python](https://github.com/pydantic/mcp-run-python) is used as the MCP server.
In this example we use a simple MCP server that provides weather tools.

```python {title="mcp_stdio_client.py"}
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio( # (1)!
'uv', args=['run', 'mcp-run-python', 'stdio'], timeout=10
)
server = MCPServerStdio('python', args=['mcp_server.py'], timeout=10)
agent = Agent('openai:gpt-5', toolsets=[server])


async def main():
result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
result = await agent.run('What is the weather in Paris?')
print(result.output)
#> There are 9,208 days between January 1, 2000, and March 18, 2025.
#> The weather in Paris is sunny and 26 degrees Celsius.
```

1. See [MCP Run Python](https://github.com/pydantic/mcp-run-python) for more information.

## Loading MCP Servers from Configuration

Instead of creating MCP server instances individually in code, you can load multiple servers from a JSON configuration file using [`load_mcp_servers()`][pydantic_ai.mcp.load_mcp_servers].
@@ -167,9 +163,9 @@ The configuration file should be a JSON file with an `mcpServers` object contain
```json {title="mcp_config.json"}
{
"mcpServers": {
"python-runner": {
"command": "uv",
"args": ["run", "mcp-run-python", "stdio"]
Collaborator: Here we don't need to change it, do we?

Collaborator (author): Re-added that MCP server; left in the one I added because it's the one used in the snippets.

"weather": {
"command": "python",
"args": ["mcp_server.py"]
},
"weather-api": {
"url": "http://localhost:3001/sse"
10 changes: 5 additions & 5 deletions docs/mcp/fastmcp-client.md
@@ -20,12 +20,12 @@ A `FastMCPToolset` can then be created from:

- A FastMCP Server: `#!python FastMCPToolset(fastmcp.FastMCP('my_server'))`
- A FastMCP Client: `#!python FastMCPToolset(fastmcp.Client(...))`
- A FastMCP Transport: `#!python FastMCPToolset(fastmcp.StdioTransport(command='uvx', args=['mcp-run-python', 'stdio']))`
- A FastMCP Transport: `#!python FastMCPToolset(fastmcp.StdioTransport(command='python', args=['mcp_server.py']))`
- A Streamable HTTP URL: `#!python FastMCPToolset('http://localhost:8000/mcp')`
- An HTTP SSE URL: `#!python FastMCPToolset('http://localhost:8000/sse')`
- A Python Script: `#!python FastMCPToolset('my_server.py')`
- A Node.js Script: `#!python FastMCPToolset('my_server.js')`
- A JSON MCP Configuration: `#!python FastMCPToolset({'mcpServers': {'my_server': {'command': 'uvx', 'args': ['mcp-run-python', 'stdio']}}})`
- A JSON MCP Configuration: `#!python FastMCPToolset({'mcpServers': {'my_server': {'command': 'python', 'args': ['mcp_server.py']}}})`

If you already have a [FastMCP Server](https://gofastmcp.com/servers) in the same codebase as your Pydantic AI agent, you can create a `FastMCPToolset` directly from it and save the agent a network round trip:

@@ -73,9 +73,9 @@ from pydantic_ai.toolsets.fastmcp import FastMCPToolset

mcp_config = {
'mcpServers': {
'time_mcp_server': {
'command': 'uvx',
'args': ['mcp-run-python', 'stdio']
'weather_server': {
'command': 'python',
'args': ['mcp_server.py']
}
}
}
6 changes: 6 additions & 0 deletions tests/example_modules/mcp_server.py
@@ -7,6 +7,12 @@
mcp = FastMCP('Pydantic AI MCP Server')


@mcp.tool()
async def get_weather_forecast(location: str) -> str:
"""Get the weather forecast for a location."""
return f'The weather in {location} is sunny and 26 degrees Celsius.'


@mcp.tool()
async def echo_deps(ctx: Context[ServerSessionT, LifespanContextT, RequestT]) -> dict[str, Any]:
"""Echo the run context.
8 changes: 8 additions & 0 deletions tests/test_examples.py
@@ -301,6 +301,9 @@ def rich_prompt_ask(prompt: str, *_args: Any, **_kwargs: Any) -> str:


class MockMCPServer(AbstractToolset[Any]):
def __init__(self, *args: Any, **kwargs: Any) -> None:
pass

@property
def id(self) -> str | None:
return None # pragma: no cover
@@ -336,6 +339,9 @@ async def call_tool(
'What will the weather be like in Paris on Tuesday?': ToolCallPart(
tool_name='weather_forecast', args={'location': 'Paris', 'forecast_date': '2030-01-01'}, tool_call_id='0001'
),
'What is the weather in Paris?': ToolCallPart(
tool_name='get_weather_forecast', args={'location': 'Paris'}, tool_call_id='0001'
),
'Tell me a joke.': 'Did you hear about the toothpaste scandal? They called it Colgate.',
'Tell me a different joke.': 'No.',
'Explain?': 'This is an excellent joke invented by Samuel Colvin, it needs no explanation.',
@@ -807,6 +813,8 @@ async def model_logic( # noqa: C901
return ModelResponse(parts=[TextPart('The current time is 10:45 PM on April 17, 2025.')])
elif isinstance(m, ToolReturnPart) and m.tool_name == 'get_user':
return ModelResponse(parts=[TextPart("The user's name is John.")])
elif isinstance(m, ToolReturnPart) and m.tool_name == 'get_weather_forecast':
return ModelResponse(parts=[TextPart(m.content)])
elif isinstance(m, ToolReturnPart) and m.tool_name == 'get_company_logo':
return ModelResponse(parts=[TextPart('The company name in the logo is "Pydantic."')])
elif isinstance(m, ToolReturnPart) and m.tool_name == 'get_document':
3 changes: 3 additions & 0 deletions tests/test_ui_web.py
@@ -19,6 +19,8 @@
from pydantic_ai.builtin_tools import WebSearchTool
from pydantic_ai.ui._web import create_web_app

with try_import() as openai_import_successful:
import openai # noqa: F401 # pyright: ignore[reportUnusedImport]

pytestmark = [
pytest.mark.skipif(not starlette_import_successful(), reason='starlette not installed'),
@@ -168,6 +170,7 @@ def test_chat_app_configure_endpoint_empty():
)


@pytest.mark.skipif(not openai_import_successful(), reason='openai not installed')
def test_chat_app_configure_preserves_chat_vs_responses(monkeypatch: pytest.MonkeyPatch):
"""Test that openai-chat: and openai-responses: models are kept as separate entries."""
monkeypatch.setenv('OPENAI_API_KEY', 'test-key')