LLM Langchain wrap call in chain to display it in sentry AI tab #13905
Conversation
@Luke31 is attempting to deploy a commit to the Sentry Team on Vercel. A member of the Team first needs to authorize it.

Commits:
- paris is already the capital
- convert to chain
- indentation
- Add hint to enable tracing to view data in LLM tab
- indentations
Looks good, although I have a small question
```diff
 import sentry_sdk

 sentry_sdk.init(...)  # same as above

 llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0, api_key="bad API key")
 with sentry_sdk.start_transaction(op="ai-inference", name="The result of the AI inference"):
-    response = llm.invoke([("system", "What is the capital of paris?")])
+    chain = (llm | StrOutputParser()).with_config({"run_name": "test-run"})
```
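For readers unfamiliar with LangChain's `|` operator, here is a rough pure-Python sketch of what the composition does conceptually. The classes below (`FakeChatModel`, `FakeStrOutputParser`) are hypothetical stand-ins, not the real LangChain API; real Runnables carry far more machinery:

```python
# Minimal sketch of LangChain-style "pipe" composition with stand-in classes.

class Runnable:
    def __or__(self, other):
        left = self

        class _Seq(Runnable):
            # Invoking the composed chain feeds the left output into the right.
            def invoke(self, value):
                return other.invoke(left.invoke(value))

        return _Seq()

class FakeChatModel(Runnable):
    def invoke(self, messages):
        # A real ChatOpenAI returns an AIMessage object, not a plain string.
        return {"content": "Paris is the capital of France."}

class FakeStrOutputParser(Runnable):
    def invoke(self, message):
        # StrOutputParser extracts the text content from the message object.
        return message["content"]

chain = FakeChatModel() | FakeStrOutputParser()
print(chain.invoke([("system", "What is the capital of France?")]))
# -> Paris is the capital of France.
```

In other words, `StrOutputParser()` turns the model's message object into a plain string, and composing with `|` is what makes the whole thing a chain (a pipeline) rather than a single model call.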
I'm admittedly not super familiar with Langchain, could you explain what this change does differently than what we had before? I get that the `run_name` allows it to show up in Sentry, but what is the `StrOutputParser()` for?
```diff
@@ -36,6 +36,8 @@ An additional dependency, `tiktoken`, is required to be installed if you want to

 In addition to capturing errors, you can monitor interactions between multiple services or applications by [enabling tracing](/concepts/key-terms/tracing/). You can also collect and analyze performance profiles from real users with [profiling](/product/explore/profiling/).

+Tracing is required to see AI pipelines in Sentry's [AI tab](https://sentry.io/insights/ai/llm-monitoring/).
```
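Since the hint above hinges on tracing being enabled, a minimal init sketch might look like the following. This is an illustrative sketch, not the exact snippet from the docs; the DSN is a placeholder, and the comments reflect my understanding of the SDK options:

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # tracing must be on, or no pipeline data reaches the AI tab
    send_default_pii=True,   # opt in to recording prompt/response content
)
```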
Thanks for adding this, definitely an oversight that we forgot to add this before!
DESCRIBE YOUR PR
Tackles part of issue #13167 and getsentry/sentry-python#3007 (comment).

The current documentation works great for traces and profiles. However, if Langchain is set up as described in the documentation, the AI LLM-Monitoring tab in Sentry remains empty.

This PR updates the Langchain example to introduce a chain with a custom run-name, which will be displayed as a pipeline in the AI dashboard. The pipeline will show up together with the error, including its message.

Also changed Paris to France, as Paris has no capital 🇫🇷
IS YOUR CHANGE URGENT?
Help us prioritize incoming PRs by letting us know when the change needs to go live.
SLA
Thanks in advance for your help!
PRE-MERGE CHECKLIST
Make sure you've checked the following before merging your changes:
LEGAL BOILERPLATE
Look, I get it. The entity doing business as "Sentry" was incorporated in the State of Delaware in 2015 as Functional Software, Inc. and is gonna need some rights from me in order to utilize my contributions in this here PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Sentry can use, modify, copy, and redistribute my contributions, under Sentry's choice of terms.
EXTRA RESOURCES