12 changes: 0 additions & 12 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
Expand Up @@ -163,16 +163,10 @@ jobs:
enable-cache: true
cache-suffix: ${{ matrix.install.name }}

- uses: denoland/setup-deno@v2
with:
deno-version: v2.5.x

- run: mkdir .coverage

- run: uv sync --only-dev

- run: uv run mcp-run-python example --deps=numpy

- name: cache HuggingFace models
uses: actions/cache@v4
with:
Expand Down Expand Up @@ -214,16 +208,10 @@ jobs:
enable-cache: true
cache-suffix: lowest-versions

- uses: denoland/setup-deno@v2
with:
deno-version: v2.5.x

- run: mkdir .coverage

- run: uv sync --group dev

- run: uv run mcp-run-python example --deps=numpy

- name: cache HuggingFace models
uses: actions/cache@v4
with:
Expand Down
4 changes: 0 additions & 4 deletions .github/workflows/claude.yml
Original file line number Diff line number Diff line change
Expand Up @@ -39,10 +39,6 @@ jobs:
enable-cache: true
cache-suffix: claude-code

- uses: denoland/setup-deno@v2
with:
deno-version: v2.5.x

- run: uv tool install pre-commit

- run: make install
Expand Down
11 changes: 1 addition & 10 deletions docs/contributing.md
Original file line number Diff line number Diff line change
Expand Up @@ -9,26 +9,17 @@ git clone git@github.com:<your username>/pydantic-ai.git
cd pydantic-ai
```

Install `uv` (version 0.4.30 or later), `pre-commit` and `deno`:
Install `uv` (version 0.4.30 or later) and `pre-commit`:

- [`uv` install docs](https://docs.astral.sh/uv/getting-started/installation/)
- [`pre-commit` install docs](https://pre-commit.com/#install)
- [`deno` install docs](https://docs.deno.com/runtime/getting_started/installation/)

To install `pre-commit` you can run the following command:

```bash
uv tool install pre-commit
```

For `deno`, you can run the following, or check
[their documentation](https://docs.deno.com/runtime/getting_started/installation/) for alternative
installation methods:

```bash
curl -fsSL https://deno.land/install.sh | sh
```

Install `pydantic-ai`, all dependencies and pre-commit hooks

```bash
Expand Down
20 changes: 10 additions & 10 deletions docs/mcp/client.md
Original file line number Diff line number Diff line change
Expand Up @@ -134,26 +134,22 @@ _(This example is complete, it can be run "as is" — you'll need to add `asynci

MCP also offers [stdio transport](https://spec.modelcontextprotocol.io/specification/2024-11-05/basic/transports/#stdio) where the server is run as a subprocess and communicates with the client over `stdin` and `stdout`. In this case, you'd use the [`MCPServerStdio`][pydantic_ai.mcp.MCPServerStdio] class.

In this example [mcp-run-python](https://github.com/pydantic/mcp-run-python) is used as the MCP server.
In this example we use a simple MCP server that provides weather tools.

```python {title="mcp_stdio_client.py"}
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio( # (1)!
'uv', args=['run', 'mcp-run-python', 'stdio'], timeout=10
)
server = MCPServerStdio('python', args=['mcp_server.py'], timeout=10)
agent = Agent('openai:gpt-5', toolsets=[server])


async def main():
result = await agent.run('How many days between 2000-01-01 and 2025-03-18?')
result = await agent.run('What is the weather in Paris?')
print(result.output)
#> There are 9,208 days between January 1, 2000, and March 18, 2025.
#> The weather in Paris is sunny and 26 degrees Celsius.
```

1. See [MCP Run Python](https://github.com/pydantic/mcp-run-python) for more information.

## Loading MCP Servers from Configuration

Instead of creating MCP server instances individually in code, you can load multiple servers from a JSON configuration file using [`load_mcp_servers()`][pydantic_ai.mcp.load_mcp_servers].
Expand All @@ -168,8 +164,12 @@ The configuration file should be a JSON file with an `mcpServers` object contain
{
"mcpServers": {
"python-runner": {
"command": "uv",
"args": ["run", "mcp-run-python", "stdio"]
Collaborator: Here we don't need to change it, do we?

Collaborator (Author): Re-added that MCP server; left in the one I added because it's the one used in the snippets.
"command": "uv",
"args": ["run", "mcp-run-python", "stdio"]
},
"weather": {
"command": "python",
"args": ["mcp_server.py"]
},
"weather-api": {
"url": "http://localhost:3001/sse"
Expand Down
8 changes: 6 additions & 2 deletions docs/mcp/fastmcp-client.md
Original file line number Diff line number Diff line change
Expand Up @@ -20,12 +20,12 @@ A `FastMCPToolset` can then be created from:

- A FastMCP Server: `#!python FastMCPToolset(fastmcp.FastMCP('my_server'))`
- A FastMCP Client: `#!python FastMCPToolset(fastmcp.Client(...))`
- A FastMCP Transport: `#!python FastMCPToolset(fastmcp.StdioTransport(command='uvx', args=['mcp-run-python', 'stdio']))`
- A FastMCP Transport: `#!python FastMCPToolset(fastmcp.StdioTransport(command='python', args=['mcp_server.py']))`
- A Streamable HTTP URL: `#!python FastMCPToolset('http://localhost:8000/mcp')`
- An HTTP SSE URL: `#!python FastMCPToolset('http://localhost:8000/sse')`
- A Python Script: `#!python FastMCPToolset('my_server.py')`
- A Node.js Script: `#!python FastMCPToolset('my_server.js')`
- A JSON MCP Configuration: `#!python FastMCPToolset({'mcpServers': {'my_server': {'command': 'uvx', 'args': ['mcp-run-python', 'stdio']}}})`
- A JSON MCP Configuration: `#!python FastMCPToolset({'mcpServers': {'my_server': {'command': 'python', 'args': ['mcp_server.py']}}})`

If you already have a [FastMCP Server](https://gofastmcp.com/servers) in the same codebase as your Pydantic AI agent, you can create a `FastMCPToolset` directly from it and save the agent a network round trip:

Expand Down Expand Up @@ -76,6 +76,10 @@ mcp_config = {
'time_mcp_server': {
'command': 'uvx',
'args': ['mcp-run-python', 'stdio']
},
'weather_server': {
'command': 'python',
'args': ['mcp_server.py']
}
}
}
Expand Down
2 changes: 1 addition & 1 deletion pydantic_ai_slim/pydantic_ai/models/cerebras.py
Original file line number Diff line number Diff line change
Expand Up @@ -16,7 +16,7 @@
from openai import AsyncOpenAI

from .openai import OpenAIChatModel, OpenAIChatModelSettings
except ImportError as _import_error: # pragma: no cover
except ImportError as _import_error:
raise ImportError(
'Please install the `openai` package to use the Cerebras model, '
'you can use the `cerebras` optional group — `pip install "pydantic-ai-slim[cerebras]"'
Expand Down
6 changes: 6 additions & 0 deletions tests/example_modules/mcp_server.py
Original file line number Diff line number Diff line change
Expand Up @@ -7,6 +7,12 @@
mcp = FastMCP('Pydantic AI MCP Server')


@mcp.tool()
async def get_weather_forecast(location: str) -> str:
"""Get the weather forecast for a location."""
return f'The weather in {location} is sunny and 26 degrees Celsius.'


@mcp.tool()
async def echo_deps(ctx: Context[ServerSessionT, LifespanContextT, RequestT]) -> dict[str, Any]:
"""Echo the run context.
Expand Down
2 changes: 1 addition & 1 deletion tests/models/anthropic/conftest.py
Original file line number Diff line number Diff line change
Expand Up @@ -18,7 +18,7 @@
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

AnthropicModelFactory = Callable[..., AnthropicModel]
AnthropicModelFactory = Callable[..., AnthropicModel]


# Model factory fixture for live API tests
Expand Down
10 changes: 6 additions & 4 deletions tests/models/anthropic/test_output.py
Original file line number Diff line number Diff line change
Expand Up @@ -12,7 +12,7 @@
from __future__ import annotations as _annotations

from collections.abc import Callable
from typing import Annotated
from typing import TYPE_CHECKING, Annotated

import httpx
import pytest
Expand All @@ -33,6 +33,11 @@
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.providers.anthropic import AnthropicProvider

if TYPE_CHECKING:
from pydantic_ai.models.anthropic import AnthropicModel

ANTHROPIC_MODEL_FIXTURE = Callable[..., AnthropicModel]

Comment on lines +36 to +40

Collaborator (Author): You're going to wonder why I had to make these changes; I also have no idea, but perhaps the mcp-server example was installing a bunch of stuff that is now not being installed in CI for pydantic-ai-slim.
from ..test_anthropic import completion_message

pytestmark = [
Expand Down Expand Up @@ -231,9 +236,6 @@ async def verify_headers(request: httpx.Request):
return verify_headers


ANTHROPIC_MODEL_FIXTURE = Callable[..., AnthropicModel]


# =============================================================================
# Supported Model Tests (claude-sonnet-4-5)
# =============================================================================
Expand Down
5 changes: 3 additions & 2 deletions tests/models/test_bedrock.py
Original file line number Diff line number Diff line change
Expand Up @@ -5,9 +5,7 @@
from typing import Any

import pytest
from botocore.exceptions import ClientError
from inline_snapshot import snapshot
from mypy_boto3_bedrock_runtime.type_defs import MessageUnionTypeDef, SystemContentBlockTypeDef, ToolTypeDef
from typing_extensions import TypedDict

from pydantic_ai import (
Expand Down Expand Up @@ -49,6 +47,9 @@
from ..conftest import IsDatetime, IsInstance, IsStr, try_import

with try_import() as imports_successful:
from botocore.exceptions import ClientError
from mypy_boto3_bedrock_runtime.type_defs import MessageUnionTypeDef, SystemContentBlockTypeDef, ToolTypeDef

from pydantic_ai.models.bedrock import BedrockConverseModel, BedrockModelName, BedrockModelSettings
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings
from pydantic_ai.providers.bedrock import BedrockProvider
Expand Down
8 changes: 7 additions & 1 deletion tests/models/test_google.py
Original file line number Diff line number Diff line change
Expand Up @@ -94,6 +94,12 @@
from pydantic_ai.providers.google import GoogleProvider
from pydantic_ai.providers.openai import OpenAIProvider

if not imports_successful():
# Define placeholder errors module so parametrize decorators can be parsed
from types import SimpleNamespace

errors = SimpleNamespace(ServerError=Exception, ClientError=Exception, APIError=Exception)

pytestmark = [
pytest.mark.skipif(not imports_successful(), reason='google-genai not installed'),
pytest.mark.anyio,
Expand Down Expand Up @@ -4660,7 +4666,7 @@ async def test_google_api_errors_are_handled(
allow_model_requests: None,
google_provider: GoogleProvider,
mocker: MockerFixture,
error_class: type[errors.APIError],
error_class: Any,
error_response: dict[str, Any],
expected_status: int,
):
Expand Down
46 changes: 23 additions & 23 deletions tests/models/test_huggingface.py
Original file line number Diff line number Diff line change
Expand Up @@ -8,25 +8,7 @@
from typing import Any, Literal, cast
from unittest.mock import Mock

import aiohttp
import pytest
from huggingface_hub import (
AsyncInferenceClient,
ChatCompletionInputMessage,
ChatCompletionOutput,
ChatCompletionOutputComplete,
ChatCompletionOutputFunctionDefinition,
ChatCompletionOutputMessage,
ChatCompletionOutputToolCall,
ChatCompletionOutputUsage,
ChatCompletionStreamOutput,
ChatCompletionStreamOutputChoice,
ChatCompletionStreamOutputDelta,
ChatCompletionStreamOutputDeltaToolCall,
ChatCompletionStreamOutputFunction,
ChatCompletionStreamOutputUsage,
)
from huggingface_hub.errors import HfHubHTTPError
from inline_snapshot import snapshot
from typing_extensions import TypedDict

Expand All @@ -50,8 +32,6 @@
VideoUrl,
)
from pydantic_ai.exceptions import ModelHTTPError
from pydantic_ai.models.huggingface import HuggingFaceModel
from pydantic_ai.providers.huggingface import HuggingFaceProvider
from pydantic_ai.result import RunUsage
from pydantic_ai.run import AgentRunResult, AgentRunResultEvent
from pydantic_ai.settings import ModelSettings
Expand All @@ -62,10 +42,30 @@
from .mock_async_stream import MockAsyncStream

with try_import() as imports_successful:
pass
import aiohttp
from huggingface_hub import (
AsyncInferenceClient,
ChatCompletionInputMessage,
ChatCompletionOutput,
ChatCompletionOutputComplete,
ChatCompletionOutputFunctionDefinition,
ChatCompletionOutputMessage,
ChatCompletionOutputToolCall,
ChatCompletionOutputUsage,
ChatCompletionStreamOutput,
ChatCompletionStreamOutputChoice,
ChatCompletionStreamOutputDelta,
ChatCompletionStreamOutputDeltaToolCall,
ChatCompletionStreamOutputFunction,
ChatCompletionStreamOutputUsage,
)
from huggingface_hub.errors import HfHubHTTPError

from pydantic_ai.models.huggingface import HuggingFaceModel
from pydantic_ai.providers.huggingface import HuggingFaceProvider

MockChatCompletion = ChatCompletionOutput | Exception
MockStreamEvent = ChatCompletionStreamOutput | Exception
MockChatCompletion = ChatCompletionOutput | Exception
MockStreamEvent = ChatCompletionStreamOutput | Exception

pytestmark = [
pytest.mark.skipif(not imports_successful(), reason='huggingface_hub not installed'),
Expand Down
6 changes: 6 additions & 0 deletions tests/models/test_model_names.py
Original file line number Diff line number Diff line change
Expand Up @@ -25,6 +25,12 @@
from pydantic_ai.providers.grok import GrokModelName
from pydantic_ai.providers.moonshotai import MoonshotAIModelName

if not imports_successful():
# Define placeholders so the module can be loaded for test collection
AnthropicModelName = BedrockModelName = CohereModelName = GoogleModelName = None
GroqModelName = HuggingFaceModelName = MistralModelName = OpenAIModelName = None
DeepSeekModelName = GrokModelName = MoonshotAIModelName = None

pytestmark = [
pytest.mark.skipif(not imports_successful(), reason='some model package was not installed'),
pytest.mark.vcr,
Expand Down
10 changes: 5 additions & 5 deletions tests/models/test_openai_responses.py
Original file line number Diff line number Diff line change
Expand Up @@ -45,10 +45,6 @@
BuiltinToolResultEvent, # pyright: ignore[reportDeprecated]
)
from pydantic_ai.models import ModelRequestParameters
from pydantic_ai.models.openai import (
OpenAIResponsesModelSettings,
_resolve_openai_image_generation_size, # pyright: ignore[reportPrivateUsage]
)
from pydantic_ai.output import NativeOutput, PromptedOutput, TextOutput, ToolOutput
from pydantic_ai.profiles.openai import openai_model_profile
from pydantic_ai.tools import ToolDefinition
Expand All @@ -68,7 +64,11 @@
from openai.types.responses.response_usage import ResponseUsage

from pydantic_ai.models.anthropic import AnthropicModel, AnthropicModelSettings
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings
from pydantic_ai.models.openai import (
OpenAIResponsesModel,
OpenAIResponsesModelSettings,
_resolve_openai_image_generation_size, # pyright: ignore[reportPrivateUsage]
)
from pydantic_ai.providers.anthropic import AnthropicProvider
from pydantic_ai.providers.openai import OpenAIProvider

Expand Down
3 changes: 1 addition & 2 deletions tests/profiles/test_anthropic.py
Original file line number Diff line number Diff line change
Expand Up @@ -22,12 +22,11 @@
from inline_snapshot import snapshot
from pydantic import BaseModel, Field

from pydantic_ai.providers.anthropic import AnthropicJsonSchemaTransformer

from ..conftest import try_import

with try_import() as imports_successful:
from pydantic_ai.profiles.anthropic import anthropic_model_profile
from pydantic_ai.providers.anthropic import AnthropicJsonSchemaTransformer

pytestmark = [
pytest.mark.skipif(not imports_successful(), reason='anthropic not installed'),
Expand Down
2 changes: 2 additions & 0 deletions tests/providers/test_bedrock.py
Original file line number Diff line number Diff line change
Expand Up @@ -19,6 +19,8 @@
from pydantic_ai.models.bedrock import LatestBedrockModelNames
from pydantic_ai.providers.bedrock import BEDROCK_GEO_PREFIXES, BedrockModelProfile, BedrockProvider

if not imports_successful():
BEDROCK_GEO_PREFIXES: tuple[str, ...] = () # type: ignore[no-redef]

pytestmark = pytest.mark.skipif(not imports_successful(), reason='bedrock not installed')

Expand Down