feat(mcp): mcp python context propagation #1524
Conversation
CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅

I have read the CLA Document and I hereby sign the CLA
self.traces.clear()

class OTLPServer(HTTPServer):
We need to start up an HTTP collector mostly since the stdio transport starts a subprocess. I considered just returning the parent span context in the tool's response, but figured it's worth setting this up now so it can be extended to actually generating telemetry in the future without new test setup
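The shape of such a test collector could look roughly like this. This is a minimal sketch, not the PR's actual test code: the handler class, the `payloads` attribute, and the `start_otlp_server` helper are illustrative names, but the idea matches the comment above, an in-process HTTP server that records whatever the OTLP exporter in the subprocess POSTs to it.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class _OTLPHandler(BaseHTTPRequestHandler):
    """Accept OTLP/HTTP export POSTs and stash the raw request bodies."""

    def do_POST(self) -> None:
        length = int(self.headers.get("Content-Length", 0))
        self.server.payloads.append(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args) -> None:
        pass  # keep test output quiet


class OTLPServer(HTTPServer):
    """In-process collector; tests assert on `payloads` after exporting."""

    def __init__(self, address=("127.0.0.1", 0)) -> None:
        super().__init__(address, _OTLPHandler)
        self.payloads: list[bytes] = []


def start_otlp_server() -> OTLPServer:
    """Start the collector on an ephemeral port in a daemon thread."""
    server = OTLPServer()
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Binding to port 0 lets the OS pick a free port, so parallel test runs don't collide; the chosen port can then be passed to the subprocess via `OTEL_EXPORTER_OTLP_ENDPOINT`.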
Agreed, this is the neatest way out I can think of. The general MCP discussion also has some hints about OTLP proxying on the agent (MCP-client) side, so this doubles as a hint of a potential future direction.
await client.initialize()
yield client
case "sse":
    proc = await asyncio.create_subprocess_exec(
While the SSE version could start the server in the same process, it seemed simplest to keep it consistent with stdio for these tests
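The launch-and-wait pattern both transports share can be sketched as below. The helper name and the stand-in command are hypothetical; the real tests launch an MCP SSE test server and wait for it to report that it is listening.

```python
import asyncio
import sys


async def launch_server_process(args: list[str]) -> str:
    """Start a server subprocess and return its first line of output.

    The subprocess command is a stand-in: in a real test suite this would
    be the SSE server script, and the first line signals readiness.
    """
    proc = await asyncio.create_subprocess_exec(
        sys.executable, *args,
        stdout=asyncio.subprocess.PIPE,
    )
    ready = (await proc.stdout.readline()).decode().strip()
    try:
        proc.terminate()
    except ProcessLookupError:
        pass  # short-lived process already exited
    await proc.wait()
    return ready
```

Waiting on a readiness line (rather than sleeping) keeps the tests fast and avoids flakiness when the server is slow to bind its port.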
Makes sense
Probably after folks buy in in general, the next step could be to find where the Arize examples live and make something like I did, except in TypeScript, for the https://github.com/Arize-ai/phoenix/tree/main/js/packages/phoenix-mcp server. That would be pretty cool.
Good idea. Just one extra requirements dependency fixed the trace: elastic/observability-examples#62
Makes sense, will work on TypeScript next.
mikeldking left a comment:
Thanks so much for the PR @anuraaga and the intro @codefromthecrypt ! This seems like a great start to me but will get some folks on my team to take a look just for context building.
cc @axiomofjoy
@@ -0,0 +1 @@
__version__ = "1.0.0"
This is a solid foundation to build on. Small comments and questions, but it's ready to merge after those are addressed.
High-level question: My understanding is that MCP also allows servers to invoke clients. Will we need to handle context propagation in the opposite direction?
beeai: uv pip install --reinstall {toxinidir}/instrumentation/openinference-instrumentation-beeai[test]
mcp: uv pip uninstall -r test-requirements.txt
mcp: uv pip install --reinstall-package openinference-instrumentation-mcp .
mcp: python -c 'import openinference.instrumentation.mcp'
What is this import needed for?
python/instrumentation/openinference-instrumentation-mcp/test-requirements.txt
mikeldking left a comment:
Excited to try this out! Will merge and publish; just need to get release-please in place.
It's out: https://pypi.org/project/openinference-instrumentation-mcp/. Keep cooking!
Thanks for the help! Especially 82b3972. Sorry, I had forgotten to fix that minimum version, but you found it.
This is a good point. I think the standard tools / resources wrappers on top of the protocol (what the MCP SDK calls …
Awesome, excited to see what you find |
Co-authored-by: Alexander Song <[email protected]>
I am working with @codefromthecrypt on MCP instrumentation for context propagation, and he suggested I send a PR here for it. This adds instrumentation for the MCP Python SDK as an implementation of the discussion in modelcontextprotocol/modelcontextprotocol#246.

The key point is picking `request.params._meta` as the carrier for context. If a different field were defined in the protocol in the future, it would be relatively simple to migrate to it, as the instrumentation otherwise doesn't change at all.

I have not included any examples / quickstart yet since it seemed a bit awkward to me: the instrumentation doesn't generate telemetry itself, so examples would either need to include other instrumentation or manually create spans, both of which seemed out of scope for an example. In the future, if MCP telemetry is generated directly, this problem would be solved. But let me know if you have any advice on this point.