
Releases: DataDog/dd-trace-py

3.11.2

31 Jul 16:55
4ddef1a

Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Bug Fixes

  • CI Visibility: This fix resolves an issue where using the pytest skipif marker with the condition passed as a keyword argument (or not provided at all) would cause the test to be reported as failed, in particular when the flaky or pytest-rerunfailures plugins were also used.

3.11.1

29 Jul 14:24
dc72fd3

Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Bug Fixes

  • ddtrace_api: Fixes a bug in the ddtrace_api integration in which calling patch() with no arguments (and therefore patch_all()) broke the integration.

3.11.0

28 Jul 13:45
e581279

Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Upgrade Notes

  • CI Visibility: Code coverage collection for Test Impact Analysis with pytest does not require coverage.py as a dependency anymore.

Deprecation Notes

  • CI Visibility: The freezegun integration is deprecated and will be removed in 4.0.0. The freezegun integration is not necessary anymore for the correct reporting of test durations and timestamps.

New Features

  • AAP: This introduces endpoint discovery for Django applications. It allows the collection of API endpoints of a Django application at startup.
  • aws: Sets peer.service explicitly to improve the accuracy of serverless service representation. base_service previously defaulted to the unhelpful value "runtime" in serverless spans; it is now removed to prevent unwanted service overrides in Lambda spans.
  • LLM Observability
    • Added support to submit_evaluation_for() for submitting boolean metrics in LLMObs evaluation metrics, using metric_type="boolean". This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow (see the sketch after this list).
    • This introduces tagging agent-specific metadata on agent spans when using CrewAI, OpenAI Agents, or PydanticAI.
    • Bedrock Converse toolResult content blocks are formatted as tool messages on LLM Observability llm spans' inputs.
    • This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
    • This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
    • Adds support to automatically submit Google GenAI calls to LLM Observability.
    • Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
    • Adds support to automatically submit PydanticAI request spans to LLM Observability.
    • mcp: Adds tracing support for mcp.client.session.ClientSession.call_tool and mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool methods in the MCP SDK.
  • otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable DD_METRICS_OTEL_ENABLED must be set to true and the application must include its own OTLP metrics exporter.
  • asgi: Obfuscate resource names on 404 spans when DD_ASGI_OBFUSCATE_404_RESOURCE is enabled (disabled by default).
  • code origin: added support for in-product enablement.
  • logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either ddtrace-run or import ddtrace.auto. To disable this feature, set the environment variable DD_LOGS_INJECTION to False (see the formatter sketch after this list).
  • google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's embed_content methods.
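
As a hedged illustration of the boolean evaluation metric above, a minimal sketch; the label, values, and span identifiers are illustrative placeholders, not prescribed names:

    from ddtrace.llmobs import LLMObs

    # Sketch: submit a boolean evaluation for a traced LLM span.
    # "is_toxic" and the span reference below are placeholders.
    LLMObs.submit_evaluation_for(
        span={"span_id": "12345", "trace_id": "67890"},
        label="is_toxic",
        metric_type="boolean",  # new in this release
        value=False,
    )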

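And for the new default log injection, a sketch of referencing the injected dd.* attributes from a non-structured formatter (run under ddtrace-run or with import ddtrace.auto; the format string itself is illustrative):

    import logging

    # The injected record attributes (dd.trace_id, dd.span_id, dd.service, ...)
    # can be referenced directly in a stdlib format string.
    logging.basicConfig(
        format="%(asctime)s %(levelname)s "
               "[dd.service=%(dd.service)s dd.trace_id=%(dd.trace_id)s "
               "dd.span_id=%(dd.span_id)s] %(message)s"
    )
    logging.getLogger(__name__).info("correlated log line")
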
Bug Fixes

  • CI Visibility

    • This fix resolves an issue where freezegun would not work with tests defined in unittest classes.
    • This fix resolves an issue where using Test Optimization together with external retry plugins such as flaky or pytest-rerunfailures would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
    • This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by pytest, leading to I/O operation on closed file errors at the end of the test session.
    • This fix resolves an issue where test retry numbers were not reported correctly when tests were run with pytest-xdist.
  • AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS environment variable.

  • litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.

  • lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.

  • LLM Observability

    • Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • openai
      • This fix resolves an issue where openai tracing caused an AttributeError while parsing NoneType streamed chunk deltas.
      • Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an AttributeError.
    • Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • Fixes an issue where input messages for tool messages were not being captured properly.
    • This fix resolves an issue where incomplete streamed responses returned from OpenAI responses API caused an index error with LLM Observability tracing.
    • Fixes an issue where LangGraph span links for execution flows were broken for langgraph>=0.3.22.
    • This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
    • Fixed an issue where grabbing token values for some providers through langchain libraries raised a ValueError.
    • This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
  • dynamic instrumentation: Improves support for function probes with frameworks and applications that interact with the Python garbage collector (e.g., Synapse).

  • logging: Fix an issue where dd.* properties were not injected onto logging records unless the DD_LOGS_INJECTION=true env var was set (the default value is structured). This issue caused problems for non-structured loggers, which set their own format string instead of having ddtrace set the logging format string.

  • azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.

  • profiling

    • Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
    • Fix a bug where profile frames from the package specified by DD_MAIN_PACKAGE were marked as "library" code in the profiler UI.
  • tracing

    • This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
    • This fix resolves an issue where the @tracer.wrap() decorator failed to preserve the decorated function's return type, returning Any instead of the original return type.
    • This fix resolves an issue where spans would have incorrect timestamps and durations when freezegun was in use. With this change, the freezegun integration is not necessary anymore.
    • Fixes an issue in which span durations or start timestamps exceeding the platform's LONG_MAX caused traces to fail to send.
    • sampling: Trace sampling rules now require all specified tags to be present for a match, instead of ignoring missing tags. Additionally, glob patterns that do not contain digits (e.g., *, ?, [ ]) now work with numeric tags, including decimals (see the configuration sketch after this list).
  • Code Security (IAST)

    • Improved compatibility with eval() when used with custom globals and locals. When instrumenting eval(), Python behaves differently depending on whether locals is passed. If both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via eval(code, globals, locals)) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior (see the illustration after this list).
    • Fixes cases where AST analysis could fail or behave unexpectedly when code overrides Python built-ins or globals at runtime (for example, mysqlsh (MySQL Shell) reassigns globals with a custom object), which could interfere with analysis or instrumentation logic.
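
To make the eval() scoping behavior behind the Code Security fix concrete, a plain-Python illustration (no ddtrace involved):

    # With both mappings supplied, a function defined by the evaluated code
    # lands in the locals mapping, not in globals.
    glb, loc = {}, {}
    eval(compile("def probe():\n    return 42", "<sketch>", "exec"), glb, loc)
    assert "probe" in loc and "probe" not in glb
    # The fix copies such definitions from locals to globals when necessary,
    # so later lookups (e.g., by libraries like babel) keep working under IAST.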

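For the sampling-rule change, a hedged configuration sketch; the service name and tag values are illustrative, and DD_TRACE_SAMPLING_RULES must be set before the tracer starts:

    import json
    import os

    # A rule now matches only if every listed tag is present on the span, and
    # digit-free globs such as "???" also match numeric values like 200.
    rules = [{"sample_rate": 0.1, "service": "checkout",
              "tags": {"http.status_code": "???"}}]
    os.environ["DD_TRACE_SAMPLING_RULES"] = json.dumps(rules)
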
Other Changes

  • openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. openai.request.client has been retained and renamed to openai.request.provider.
  • anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LL...

3.11.0rc3

25 Jul 13:46
e581279
Pre-release

Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Upgrade Notes

  • CI Visibility: Code coverage collection for Test Impact Analysis with pytest does not require coverage.py as a dependency anymore.

Deprecation Notes

  • CI Visibility: The freezegun integration is deprecated and will be removed in 4.0.0. The freezegun integration is not necessary anymore for the correct reporting of test durations and timestamps.

New Features

  • AAP: This introduces endpoint discovery for Django applications. It allows the collection of API endpoints of a Django application at startup.
  • aws: Sets peer.service explicitly to improve the accuracy of serverless service representation. base_service previously defaulted to the unhelpful value "runtime" in serverless spans; it is now removed to prevent unwanted service overrides in Lambda spans.
  • LLM Observability
    • Added support to submit_evaluation_for() for submitting boolean metrics in LLMObs evaluation metrics, using metric_type="boolean". This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
    • This introduces tagging agent-specific metadata on agent spans when using CrewAI, OpenAI Agents, or PydanticAI.
    • Bedrock Converse toolResult content blocks are formatted as tool messages on LLM Observability llm spans' inputs.
    • This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
    • This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
    • Adds support to automatically submit Google GenAI calls to LLM Observability.
    • Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
    • Adds support to automatically submit PydanticAI request spans to LLM Observability.
    • mcp: Adds tracing support for mcp.client.session.ClientSession.call_tool and mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool methods in the MCP SDK.
  • otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable DD_METRICS_OTEL_ENABLED must be set to true and the application must include its own OTLP metrics exporter.
  • asgi: Obfuscate resource names on 404 spans when DD_ASGI_OBFUSCATE_404_RESOURCE is enabled (disabled by default).
  • code origin: added support for in-product enablement.
  • logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either ddtrace-run or import ddtrace.auto. To disable this feature, set the environment variable DD_LOGS_INJECTION to False.
  • google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's embed_content methods.

Bug Fixes

  • CI Visibility

    • This fix resolves an issue where freezegun would not work with tests defined in unittest classes.
    • This fix resolves an issue where using Test Optimization together with external retry plugins such as flaky or pytest-rerunfailures would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
    • This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by pytest, leading to I/O operation on closed file errors at the end of the test session.
    • This fix resolves an issue where test retry numbers were not reported correctly when tests were run with pytest-xdist.
  • AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS environment variable.

  • litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.

  • lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.

  • LLM Observability

    • Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • openai
      • This fix resolves an issue where openai tracing caused an AttributeError while parsing NoneType streamed chunk deltas.
      • Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an AttributeError.
    • Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • Fixes an issue where input messages for tool messages were not being captured properly.
    • This fix resolves an issue where incomplete streamed responses returned from OpenAI responses API caused an index error with LLM Observability tracing.
    • Fixes an issue where LangGraph span links for execution flows were broken for langgraph>=0.3.22.
    • This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
    • Fixed an issue where grabbing token values for some providers through langchain libraries raised a ValueError.
    • This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
  • dynamic instrumentation: Improves support for function probes with frameworks and applications that interact with the Python garbage collector (e.g., Synapse).

  • logging: Fix an issue where dd.* properties were not injected onto logging records unless the DD_LOGS_INJECTION=true env var was set (the default value is structured). This issue caused problems for non-structured loggers, which set their own format string instead of having ddtrace set the logging format string.

  • azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.

  • profiling

    • Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
    • Fix a bug where profile frames from the package specified by DD_MAIN_PACKAGE were marked as "library" code in the profiler UI.
  • tracing

    • This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
    • This fix resolves an issue where the @tracer.wrap() decorator failed to preserve the decorated function's return type, returning Any instead of the original return type.
    • This fix resolves an issue where spans would have incorrect timestamps and durations when freezegun was in use. With this change, the freezegun integration is not necessary anymore.
    • Fixes an issue in which span durations or start timestamps exceeding the platform's LONG_MAX caused traces to fail to send.
    • sampling: Trace sampling rules now require all specified tags to be present for a match, instead of ignoring missing tags. Additionally, glob patterns that do not contain digits (e.g., *, ?, [ ]) now work with numeric tags, including decimals.
  • Code Security (IAST)

    • Improved compatibility with eval() when used with custom globals and locals. When instrumenting eval(), Python behaves differently depending on whether locals is passed. If both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via eval(code, globals, locals)) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
    • Fixes cases where AST analysis could fail or behave unexpectedly when code overrides Python built-ins or globals at runtime (for example, mysqlsh (MySQL Shell) reassigns globals with a custom object), which could interfere with analysis or instrumentation logic.

Other Changes

  • openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. openai.request.client has been retained and renamed to openai.request.provider.
  • anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LL...

3.10.3

25 Jul 15:48
8d3e0d3

Bug Fixes

  • dynamic instrumentation: Improves support for function probes with frameworks and applications that interact with the Python garbage collector (e.g., Synapse).
  • tracing
    • This fix resolves an issue where the @tracer.wrap() decorator failed to preserve the decorated function's return type, returning Any instead of the original return type (see the sketch after this list).
    • This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
  • Code Security: Fixes cases where AST analysis could fail or behave unexpectedly when code overrides Python built-ins or globals at runtime (for example, mysqlsh (MySQL Shell) reassigns globals with a custom object), which could interfere with analysis or instrumentation logic.
  • litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
  • django: Fixes an incorrect component tag being set for Django ORM spans.
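
A short sketch of the @tracer.wrap() typing fix above (assuming the ddtrace.trace import path used in the 3.x series; the function itself is illustrative):

    from ddtrace.trace import tracer

    @tracer.wrap()
    def add(x: int, y: int) -> int:
        return x + y

    total: int = add(2, 2)  # previously type checkers inferred Any here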

3.11.0rc2

23 Jul 20:47
f4a14fd
Pre-release

Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Upgrade Notes

  • CI Visibility: Code coverage collection for Test Impact Analysis with pytest does not require coverage.py as a dependency anymore.

Deprecation Notes

  • CI Visibility: The freezegun integration is deprecated and will be removed in 4.0.0. The freezegun integration is not necessary anymore for the correct reporting of test durations and timestamps.

New Features

  • LLM Observability
    • Added support to submit_evaluation_for() for submitting boolean metrics in LLMObs evaluation metrics, using metric_type="boolean". This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
    • Bedrock Converse toolResult content blocks are formatted as tool messages on LLM Observability llm spans' inputs.
    • This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
    • This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
    • Adds support to automatically submit Google GenAI calls to LLM Observability.
    • Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
    • Adds support to automatically submit PydanticAI request spans to LLM Observability.
    • mcp: Adds tracing support for mcp.client.session.ClientSession.call_tool and mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool methods in the MCP SDK.
  • otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable DD_METRICS_OTEL_ENABLED must be set to true and the application must include its own OTLP metrics exporter.
  • asgi: Obfuscate resource names on 404 spans when DD_ASGI_OBFUSCATE_404_RESOURCE is enabled (disabled by default).
  • code origin: added support for in-product enablement.
  • logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either ddtrace-run or import ddtrace.auto. To disable this feature, set the environment variable DD_LOGS_INJECTION to False.
  • google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's embed_content methods.

Bug Fixes

  • CI Visibility
    • This fix resolves an issue where freezegun would not work with tests defined in unittest classes.
    • This fix resolves an issue where using Test Optimization together with external retry plugins such as flaky or pytest-rerunfailures would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
    • This fix resolves an issue where setting custom loggers during a test session could cause the tracer to emit logs to a logging stream handler that was already closed by pytest, leading to I/O operation on closed file errors at the end of the test session.
    • This fix resolves an issue where test retry numbers were not reported correctly when tests were run with pytest-xdist.
  • AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS environment variable.
  • litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
  • lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
  • LLM Observability
    • Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • openai
      • This fix resolves an issue where openai tracing caused an AttributeError while parsing NoneType streamed chunk deltas.
      • Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an AttributeError.
    • Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • Fixes an issue where input messages for tool messages were not being captured properly.
    • This fix resolves an issue where incomplete streamed responses returned from OpenAI responses API caused an index error with LLM Observability tracing.
    • Fixes an issue where LangGraph span links for execution flows were broken for langgraph>=0.3.22.
    • This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
    • Fixed an issue where grabbing token values for some providers through langchain libraries raised a ValueError.
    • This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
  • dynamic instrumentation: Improves support for function probes with frameworks and applications that interact with the Python garbage collector (e.g., Synapse).
  • logging: Fix an issue where dd.* properties were not injected onto logging records unless the DD_LOGS_INJECTION=true env var was set (the default value is structured). This issue caused problems for non-structured loggers, which set their own format string instead of having ddtrace set the logging format string.
  • azure_functions: This fix resolves an issue where a function that consumes a list of service bus messages throws an exception when instrumented.
  • profiling
    • Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
    • Fix a bug where profile frames from the package specified by DD_MAIN_PACKAGE were marked as "library" code in the profiler UI.
  • tracing
    • This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
    • This fix resolves an issue where the @tracer.wrap() decorator failed to preserve the decorated function's return type, returning Any instead of the original return type.
    • This fix resolves an issue where spans would have incorrect timestamps and durations when freezegun was in use. With this change, the freezegun integration is not necessary anymore.
    • Fixes an issue in which span durations or start timestamps exceeding the platform's LONG_MAX caused traces to fail to send.
  • Code Security (IAST)
    • Improved compatibility with eval() when used with custom globals and locals. When instrumenting eval(), Python behaves differently depending on whether locals is passed. If both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via eval(code, globals, locals)) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
    • Fixes cases where AST analysis could fail or behave unexpectedly when code overrides Python built-ins or globals at runtime (for example, mysqlsh (MySQL Shell) reassigns globals with a custom object), which could interfere with analysis or instrumentation logic.

Other Changes

  • openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. openai.request.client has been retained and renamed to openai.request.provider.
  • anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
  • gemini: Removes the IO data from the APM spans for Gemini LLM requests and responses, which is duplicated in the LLM Observability span.
  • vertexai: Removes the IO data from the APM spans for VertexAI LLM requests and responses, which is duplicated in the LLM Observability span.
  • langchain: Removes I/O tags from APM spans for LangChain LLM requests and responses, which are duplicated in LLM Observability.
  • Sampling rules now only support glob matchers; regex and callable matchers are no longer supported. This simplifies the code and drops functionality that had already been removed from the public API in ddtrace v3.0.0.

2.21.11

28 Jul 13:43
a85f22c

Estimated end-of-life date: 10-2025
See the support level definitions for more information.

Bug Fixes

  • dynamic instrumentation
    • fixed an issue with the instrumentation of generators with Python 3.10.
    • fixed an issue with the instrumentation of the first line of an iteration block (e.g. for loops) that could have caused undefined behavior.
    • prevented an exception when trying to remove a probe that did not resolve to a valid source code location.

3.11.0rc1

22 Jul 15:12
356d591
Pre-release

Estimated end-of-life date, accurate to within three months: 08-2026
See the support level definitions for more information.

Upgrade Notes

  • CI Visibility: Code coverage collection for Test Impact Analysis with pytest does not require coverage.py as a dependency anymore.

Deprecation Notes

  • CI Visibility: The freezegun integration is deprecated and will be removed in 4.0.0. The freezegun integration is not necessary anymore for the correct reporting of test durations and timestamps.

New Features

  • LLM Observability
    • Added support to submit_evaluation_for() for submitting boolean metrics in LLMObs evaluation metrics, using metric_type="boolean". This enables tracking binary evaluation results such as toxicity detection and content appropriateness in your LLM application workflow.
    • Bedrock Converse toolResult content blocks are formatted as tool messages on LLM Observability llm spans' inputs.
    • This introduces capturing the number of input tokens read and written to the cache for Anthropic prompt caching use cases.
    • This introduces the ability to track the number of tokens read and written to the cache for Bedrock Converse prompt caching.
    • Adds support to automatically submit Google GenAI calls to LLM Observability.
    • Introduces tracking cached input token counts for OpenAI chats/responses prompt caching.
    • Adds support to automatically submit PydanticAI request spans to LLM Observability.
    • mcp: Adds tracing support for mcp.client.session.ClientSession.call_tool and mcp.server.fastmcp.tools.tool_manager.ToolManager.call_tool methods in the MCP SDK.
  • otel: Adds experimental support for exporting OTLP metrics via the OpenTelemetry Metrics API. To enable, the environment variable DD_METRICS_OTEL_ENABLED must be set to true and the application must include its own OTLP metrics exporter.
  • asgi: Obfuscate resource names on 404 spans when DD_ASGI_OBFUSCATE_404_RESOURCE is enabled (disabled by default).
  • code origin: added support for in-product enablement.
  • logging: Automatic injection of trace attributes into logs is now enabled for the standard logging library when using either ddtrace-run or import ddtrace.auto. To disable this feature, set the environment variable DD_LOGS_INJECTION to False.
  • google_genai: Adds support for APM/LLM Observability tracing for Google GenAI's embed_content methods.

Bug Fixes

  • CI Visibility
    • This fix resolves an issue where freezegun would not work with tests defined in unittest classes.
    • This fix resolves an issue where using Test Optimization together with external retry plugins such as flaky or pytest-rerunfailures would cause the test results not to be reported correctly to Datadog. With this change, those plugins can be used with ddtrace, and test results will be reported to Datadog, but Test Optimization advanced features such as Early Flake Detection and Auto Test Retries will not be available when such plugins are used.
    • This fix resolves an issue where test retry numbers were not reported correctly when tests were run with pytest-xdist.
  • AAP: This fix resolves an issue where the FastAPI body extraction was not functioning correctly in asynchronous contexts for large bodies, leading to missing security events. The timeout for reading request body chunks has been set to 0.1 seconds to ensure timely processing without blocking the event loop. This can be configured using the DD_FASTAPI_ASYNC_BODY_TIMEOUT_SECONDS environment variable.
  • lib-injection: Fix a bug preventing the Single Step Instrumentation (SSI) telemetry forwarder from completing when debug logging was enabled.
  • litellm: This fix resolves an issue where potentially sensitive parameters were being tagged as metadata on LLM Observability spans. Now, metadata tags are based on an allowlist instead of a denylist.
  • LLM Observability
    • Addresses an upstream issue in Anthropic prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • openai
      • This fix resolves an issue where openai tracing caused an AttributeError while parsing NoneType streamed chunk deltas.
      • Fixes an issue where parsing token metrics for streamed reasoning responses from the Responses API threw an AttributeError.
    • Addresses an upstream issue in Bedrock prompt caching, which reports input tokens as the number of non-cached tokens instead of the total tokens sent to the model. With this fix, LLM Observability correctly counts input tokens to include cached read/write prompt tokens.
    • Fixes an issue where input messages for tool messages were not being captured properly.
    • This fix resolves an issue where incomplete streamed responses returned from OpenAI responses API caused an index error with LLM Observability tracing.
    • Fixes an issue where LangGraph span links for execution flows were broken for langgraph>=0.3.22.
    • This fix resolves an issue where tool choice input messages for OpenAI Chat Completions were not being captured in LLM Observability tracing.
    • Fixed an issue where grabbing token values for some providers through langchain libraries raised a ValueError.
    • This fix resolves an issue where passing back tool call results to OpenAI Chat Completions caused an error with LLM Observability tracing enabled.
  • dynamic instrumentation: Improves support for function probes with frameworks and applications that interact with the Python garbage collector (e.g., Synapse).
  • logging: Fix an issue where dd.* properties were not injected onto logging records unless the DD_LOGS_INJECTION=true env var was set (the default value is structured). This issue caused problems for non-structured loggers, which set their own format string instead of having ddtrace set the logging format string.
  • profiling
    • Fix an issue with greenlet support that could cause greenlet spawning to fail in some rare cases.
    • Fix a bug where profile frames from the package specified by DD_MAIN_PACKAGE were marked as "library" code in the profiler UI.
  • tracing
    • This fix resolves an issue where programmatically set span service names would not get reported to Remote Configuration.
    • This fix resolves an issue where the @tracer.wrap() decorator failed to preserve the decorated function's return type, returning Any instead of the original return type.
    • This fix resolves an issue where spans would have incorrect timestamps and durations when freezegun was in use. With this change, the freezegun integration is not necessary anymore.
    • Fixes an issue in which span durations or start timestamps exceeding the platform's LONG_MAX caused traces to fail to send.
  • Code Security (IAST)
    • Improved compatibility with eval() when used with custom globals and locals. When instrumenting eval(), Python behaves differently depending on whether locals is passed. If both globals and locals are provided, new functions are stored in the locals dictionary. This fix ensures any dynamically defined functions (e.g., via eval(code, globals, locals)) are accessible by copying them from locals to globals when necessary. This resolves issues with third-party libraries (like babel) that rely on this behavior.
    • Fixes cases where AST analysis could fail or behave unexpectedly when code overrides Python built-ins or globals at runtime (for example, mysqlsh (MySQL Shell) reassigns globals with a custom object), which could interfere with analysis or instrumentation logic.

Other Changes

  • openai: Removes I/O and request/response attribute tags from the APM spans for OpenAI LLM traced completion/chat/response requests and responses, which are duplicated in LLM Observability. openai.request.client has been retained and renamed to openai.request.provider.
  • anthropic: Removes the IO data from the APM spans for Anthropic LLM requests and responses, which is duplicated in the LLM Observability span.
  • gemini: Removes the IO data from the APM spans for Gemini LLM requests and responses, which is duplicated in the LLM Observability span.
  • vertexai: Removes the IO data from the APM spans for VertexAI LLM requests and responses, which is duplicated in the LLM Observability span.

2.21.10

22 Jul 18:14
eeb528b

Estimated end-of-life date: 10-2025
See the support level definitions for more information.

Bug Fixes

  • tracing
    • This resolves a TypeError in encoding when truncating a large bytes object.
    • This fix resolves an issue where the library fails to decode a supported sampling mechanism, resulting in the log line: "failed to decode _dd.p.dm: ..."
    • Fixes an issue where span attributes were not truncated before encoding, leading to a runtime error and causing spans to be dropped. Spans with a resource name, tag key, or value larger than 25000 characters will be truncated to 2500 characters.
    • Fixes an issue where truncation of span attributes longer than 25000 characters would not consistently count the size of UTF-8 multibyte characters, leading to a unicode string is too large error.
  • dynamic instrumentation: fixed an incompatibility issue with code origin that caused line probes on the entry point functions to fail to instrument.

3.10.2

09 Jul 18:52
68d1e96

Bug Fixes

  • logging: Fix an issue where dd.* properties were not injected onto logging records unless the DD_LOGS_INJECTION=true env var was set (the default value is structured). This issue caused problems for non-structured loggers, which set their own format string instead of having ddtrace set the logging format string.