feat: Add token metrics to autogen-agentchat instrumentation with streaming support #2447
Closes #2258

Before
After

Note
Capture the LLM model name and detailed token usage in `BaseOpenAIChatCompletionClient.create` and `.create_stream`, with streaming usage injection and end-of-stream attribute setting (`_wrappers.py`):

- Record `CreateResult.usage` (prompt, completion, total) with details (prompt: cache_read/audio/cache_input; completion: reasoning/audio), set via `get_llm_token_count_attributes`, plus explicit reasoning/audio completion detail attributes.
- Add `get_llm_model_name_attributes` to LLM spans.
- Enable `include_usage` (via `extra_create_args.stream_options` or an `include_usage` param); during streaming, accumulate output/tool-call/token attributes and set them after stream completion.

Written by Cursor Bugbot for commit ab0a81a.