Estimated end-of-life date, accurate to within three months: 07-2027. See the support level definitions for more information.
Deprecations
- The Hooks class (config.<integration>.hooks) is deprecated and will be removed in v5.0. All hook methods (register(), on(), deregister(), emit()) are now no-ops and no longer affect span behavior. To interact with spans, use ddtrace.trace_utils.get_current_span() or ddtrace.trace_utils.get_current_root_span() instead.

New features
- MCP: traces initialize requests and their responses on modelcontextprotocol/python-sdk servers.
- LLM Observability: adds source:otel tagging for evaluations when OpenTelemetry (OTel) tracing is enabled via DD_TRACE_OTEL_ENABLED=true. This tag allows the backend to wait for OTel span conversion before processing evaluations.
- Profiling: adds asyncio.BoundedSemaphore lock type profiling to the Python Lock Profiler.
- Profiling: adds asyncio.Condition lock type profiling. The Lock profiler now provides visibility into asyncio.Condition usage, helping identify contention in async applications using condition variables.
- Profiling: adds asyncio.Semaphore lock type profiling to the Python Lock Profiler.

Fixes
- Tracing: uses the aws.httpapi span name for v2 APIs when the API Gateway sets the x-dd-proxy header to aws-httpapi. Additionally, the http.route tag and the resource name of the span now contain the API resource path instead of the path when propagated with the x-dd-proxy-resource-path header.
- Tracing: restores DD_TRACE_REMOVE_INTEGRATION_SERVICE_NAMES_ENABLED support, which was previously ignored.
- Tracing: fixed an issue where the https:// scheme prefix was included as part of the http.url tag; this caused the entire URL to be parsed as the HTTP path.
- Fixed an issue that caused isDefined to result in an evaluation error.
- … (DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true), ensuring remote configuration is received before process forking occurs.
- Ray: fixed an issue where None metadata in Ray job submission caused a crash.
- Fixed an AttributeError when calling tag_agent_manifest.
- Fixed a TypeError during profiling.
One example of this is neo4j's AsyncRLock, which inherits from asyncio.Lock: https://github.com/neo4j/neo4j-python-driver/blob/6.x/src/neo4j/_async_compat/concurrency.py#L45
- Fixed an issue with uvloop and forking.
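The newly profiled asyncio primitives require no profiler-specific code in the application itself; ordinary use of these types is picked up by the Lock profiler. A minimal sketch (the variable names and the returned value are illustrative only):

```python
import asyncio


async def main() -> int:
    # Each of these primitives is now visible to the Lock profiler.
    sem = asyncio.Semaphore(2)
    bounded = asyncio.BoundedSemaphore(1)
    cond = asyncio.Condition()

    async with sem:          # asyncio.Semaphore acquisition
        async with bounded:  # asyncio.BoundedSemaphore acquisition
            async with cond: # asyncio.Condition acquisition
                return 42


result = asyncio.run(main())
print(result)  # → 42
```

Running such a script under the profiler is what surfaces contention on these primitives; the script itself is plain asyncio.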
Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
Code Origin for Spans is enabled when DD_CODE_ORIGIN_FOR_SPANS_ENABLED=true is set.
⚠️ An issue was detected with memory profiling in this release. Please consider upgrading to v4.1.3 or newer.
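To opt in on a per-process basis, the flag from the note above can be exported before launching the application (a minimal sketch; `app.py` is a placeholder for your entry point):

```shell
# Enable Code Origin for Spans for this process only
export DD_CODE_ORIGIN_FOR_SPANS_ENABLED=true
ddtrace-run python app.py
```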
Deprecations
- … dataset_name, project_name, project_id, experiment_name.
- The ExperimentResult class' rows and summary_evaluations attributes are deprecated and will be removed in the next major release. These attributes will only store the results of the first run iteration for multi-run experiments. Use the ExperimentResult.runs attribute instead to access experiment results and summary evaluations.

New features
- Profiling: adds threading.BoundedSemaphore locking type profiling. The implementation follows the same approach as threading.Semaphore, properly handling internal lock detection to prevent double-counting of the underlying threading.Lock object.
- Profiling: adds threading.Semaphore locking type profiling. The Lock profiler now detects and marks "internal" Lock objects, i.e. those that are part of the implementation of higher-level locking types. One example of such a higher-level primitive is threading.Semaphore, which is implemented with threading.Condition, which itself uses threading.Lock internally. Marking an internal lock as "internal" prevents it from being sampled, ensuring that the high-level (e.g. Semaphore) sample is processed.
- Profiling: adds a process_id tag to profiles. The value of this tag is the current process ID (PID).
- … asyncio.wait.
- Profiling: uses codeobject.co_qualname in memory profiler and lock profiler flamegraphs for Python 3.11+. The stack profiler has already been using this; this aligns the user experience across different profile types.
- Profiling: supports the asyncio.as_completed util in the Profiler.
- Profiling: supports asyncio.wait in the Profiler. This makes it possible to track dependencies between Tasks/Coroutines that await/are awaited through asyncio.wait.
- LLM Observability: traces Anthropic beta messages calls (client.beta.messages.create() and client.beta.messages.stream()). This feature requires Anthropic client version 0.37.0 or higher.
- Adds support for aiokafka>=0.9.0. See the aiokafka documentation (https://ddtrace.readthedocs.io/en/stable/integrations.html#aiokafka) for more information.
- gevent: thread=False is no longer required when performing monkey-patching with gevent via gevent.monkey.patch_all.
- OpenAI: supports the responses endpoint (available in OpenAI SDK >= 1.87.0).
- LLM Observability: experiments now accept a runs argument, to assess the true performance of an experiment in the face of the non-determinism of LLMs. Use the new ExperimentResult class' runs attribute to access the results and summary evaluations by run iteration.
- … RunnableLambda instances.

Fixes
- Profiling: … when the _Py_DumpTracebackThreads function is not available.
- Tracing: fixed an IndexError in partial flush when the finished span counter was out of sync with actual finished spans.
- Tracing: DD_TRACE_PARTIAL_FLUSH_MIN_SPANS values less than 1 now default to 1 with a warning.
- Ray: applications calling ray.init() at the top of their scripts were not properly instrumented, resulting in incomplete traces. To ensure full tracing capabilities, use ddtrace-run when starting your Ray cluster: DD_PATCH_MODULES="ray:true,aiohttp:false,grpc:false,requests:false" ddtrace-run ray start --head.
- Lib-injection: do not inject into the gsutil tool.
- LLM Observability: fixed an issue where LLMObs.export_span() would raise when LLMObs is disabled.
- LLM Observability: fixed an issue where self was being annotated as an input parameter when using LLM Observability function decorators.
- LLM Observability: fixed an issue where LLMObs.annotation_context() properties (tags, prompt, and name) were not applied to subsequent LLM operations within the same context block. This occurred when multiple sequential operations (such as Langchain batch calls with structured outputs) were performed, causing only the first operation to receive the annotations.
- LLM Observability: fixed an AttributeError when trying to access the name or description attributes of a tool.
- Tracing: … opentelemetry.trace.get_current_span() or NonRecordingSpan. Spans are now kept and appear in the UI unless explicitly dropped by the Agent or sampling rules.
- Profiling: fixed an unhandled exception when accessing frame.f_locals while trying to retrieve the class name of a PyFrameObject.
- … asyncio.gather).
Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
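The "internal lock" distinction described above can be observed in CPython's own implementation: threading.Semaphore is built on a threading.Condition, which in turn wraps a primitive lock. The `_cond` and `_lock` attributes below are CPython implementation details, not public API, so this is an inspection sketch rather than supported usage:

```python
import threading

# threading.Semaphore delegates to a Condition, which wraps a plain lock.
sem = threading.Semaphore(1)
cond = sem._cond          # CPython implementation detail
inner_lock = cond._lock   # CPython implementation detail

is_condition = isinstance(cond, threading.Condition)
has_lock_api = hasattr(inner_lock, "acquire") and hasattr(inner_lock, "release")
print(is_condition, has_lock_api)  # → True True
```

Sampling both the Semaphore and its inner lock would count one logical acquisition twice, which is why the profiler marks the inner lock as "internal" and skips it.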
… the ddtrace.contrib.tornado module. Configure tracing using environment variables and import ddtrace.auto instead.
AAP: resolves an issue where the appsec layer was no longer compatible with the lambda/serverless version of the tracer.
Code Security: Fixes a critical memory safety issue in IAST when used with forked worker processes (MCP servers with Gunicorn and Uvicorn). Workers previously crashed with segmentation faults due to stale PyObject pointers in native taint maps after fork.
Dynamic instrumentation: fixes an issue where line probes matched the wrong source file when multiple source files from different Python path entries share the same name.
Exception replay: ensures exception information is captured when exceptions are raised by the GraphQL client library.
Lib-injection: do not inject into the gsutil tool.
LLM Observability: Fixes an issue where the Google ADK integration would throw an AttributeError when trying to access the name or description attributes of a tool.
Profiling: fixes an unhandled exception raised when accessing frame.f_locals while trying to retrieve the class name of a PyFrameObject.
Tracing: fixes an IndexError in partial flush when the finished span counter was out of sync with actual finished spans. DD_TRACE_PARTIAL_FLUSH_MIN_SPANS values less than 1 now default to 1 with a warning.
Estimated end-of-life date, accurate to within three months: 08-2026. See the support level definitions for more information.
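Since values below 1 are now clamped, an explicit partial-flush configuration only needs a positive integer; the threshold below is illustrative, not a recommendation:

```shell
# Flush partial traces once at least 300 spans in a trace have finished
export DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=300
```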
Deprecations
- The Span.finished setter is deprecated; use the Span.finish() method instead.
- Span.finish_with_ancestors() is deprecated with no alternative.
- The ExperimentResult class' rows and summary_evaluations attributes are deprecated and will be removed in the next major release. These attributes will only store the results of the first run iteration for multi-run experiments. Use the ExperimentResult.runs attribute instead to access experiment results and summary evaluations.

New features
- LLM Observability: experiments now accept a runs argument, to assess the true performance of an experiment in the face of the non-determinism of LLMs. Use the new ExperimentResult class' runs attribute to access the results and summary evaluations by run iteration.
- … (Lock, RLock, Event).

Fixes
- … ResourceWarning in multiprocess scenarios.
- Profiling: removed the wrapt library dependency from the Lock Profiler implementation, improving performance and reducing overhead during lock instrumentation.
Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
This is a major-version release that contains many backwards-incompatible changes to public APIs. To find which of these your code relies on, follow the "deprecation warnings" instructions here.
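One generic way to act on those deprecation warnings before upgrading is to escalate them to errors in a test run. This is plain Python warning machinery, not a ddtrace-specific API:

```python
import warnings

# Turn every DeprecationWarning into an exception so deprecated
# call sites fail loudly during tests.
warnings.filterwarnings("error", category=DeprecationWarning)

try:
    warnings.warn("this API is deprecated", DeprecationWarning)
    escalated = False
except DeprecationWarning:
    escalated = True

print(escalated)  # → True
```

The same effect is available from the command line via `python -W error::DeprecationWarning`.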
dd-trace-py now includes an OpenFeature provider implementation, enabling feature flag evaluation through the OpenFeature API.
This integration is under active design and development. Functionality and APIs are experimental and may change without notice. For more information, see the Datadog documentation at https://docs.datadoghq.com/feature_flags/#overview
- Removed support for configuring a ddtrace.Pin object with mongoengine. With this change, the ddtrace library no longer directly supports mongoengine; mongoengine will be supported through the pymongo integration.
- Removed the pytest_benchmark and pytest_bdd integrations. These plugins are now supported by the regular pytest integration.
- Removed the DD_DYNAMIC_INSTRUMENTATION_UPLOAD_FLUSH_INTERVAL variable.
- Removed the DD_EXCEPTION_DEBUGGING_ENABLED variable.
- Span.set_tag_str has been removed; use Span.set_tag instead.
- Span.set_struct_tag has been removed.
- Span.get_struct_tag has been removed.
- Span._pprint has been removed.
- The Span.finished setter was removed; use the Span.finish() method instead.
- The Tracer.on_start_span method has been removed.
- The Tracer.deregister_on_start_span method has been removed.
- ddtrace.trace.Pin has been removed.
- Span.finish_with_ancestors was removed with no replacement.
- Span.set_tag typing is now set_tag(key: str, value: Optional[str] = None) -> None
- Span.get_tag typing is now get_tag(key: str) -> Optional[str]
- Span.set_tags typing is now set_tags(tags: dict[str, str]) -> None
- Span.get_tags typing is now get_tags() -> dict[str, str]
- Span.set_metric typing is now set_metric(key: str, value: int | float) -> None
- Span.get_metric typing is now get_metric(key: str) -> Optional[int | float]
- Span.set_metrics typing is now set_metrics(metrics: Dict[str, int | float]) -> None
- Span.get_metrics typing is now get_metrics() -> dict[str, int | float]
- Span.record_exception's timestamp and escaped parameters are removed.
- LLMObs.annotate(), LLMObs.export_span(), LLMObs.submit_evaluation(), LLMObs.inject_distributed_headers(), and LLMObs.activate_distributed_headers() now raise exceptions instead of logging. LLM Observability auto-instrumentation is not affected.
- LLMObs.submit_evaluation_for() has been removed. Please use LLMObs.submit_evaluation() instead for submitting evaluations. To migrate:
  - Rename LLMObs.submit_evaluation_for(...) to LLMObs.submit_evaluation(...).
  - Rename the span_context argument to span, i.e. change LLMObs.submit_evaluation(span_context={"span_id": ..., "trace_id": ...}, ...) to LLMObs.submit_evaluation(span={"span_id": ..., "trace_id": ...}, ...).
- DD_PROFILING_STACK_V2_ENABLED is now removed.
- The freezegun integration is now removed.
- Removed the opentracer package.
- The google_generativeai integration has been removed as the google_generativeai library has reached end-of-life.
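The two renames in the migration above can be sketched side by side. The calls themselves are shown as comments because they require a running LLM Observability setup; the span IDs and label are placeholders:

```python
# Reference to the span being evaluated (placeholder IDs).
span_ref = {"span_id": "1234567890", "trace_id": "0987654321"}

# v3 (removed):
#   LLMObs.submit_evaluation_for(span_context=span_ref, label="correctness", ...)
#
# v4 equivalent: same call renamed, with span_context renamed to span:
#   LLMObs.submit_evaluation(span=span_ref, label="correctness", ...)

# The span reference dict itself is unchanged between the two versions.
print(sorted(span_ref))  # → ['span_id', 'trace_id']
```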
- As an alternative, you can use the recommended google_genai library and corresponding integration instead.
- OpenAI: token counting no longer uses the tiktoken library; streamed responses will default to having their token counts estimated if not explicitly provided in the OpenAI response object. To guarantee accurate streamed token metrics, set stream_options={"include_usage": True} in the OpenAI request.
- Django: minimal tracing is now the default (DD_DJANGO_TRACING_MINIMAL now defaults to true). Django ORM, cache, and template instrumentation are disabled by default to eliminate duplicate span creation, since library integrations for database drivers (psycopg, MySQLdb, sqlite3), cache clients (redis, memcached), template renderers (Jinja2), and other supported libraries continue to be traced. This reduces performance overhead by removing redundant Django-layer instrumentation. To restore all Django instrumentation, set DD_DJANGO_TRACING_MINIMAL=false, or enable individual features using DD_DJANGO_INSTRUMENT_DATABASES=true, DD_DJANGO_INSTRUMENT_CACHES=true, and DD_DJANGO_INSTRUMENT_TEMPLATES=true.
- Django: with DD_DJANGO_INSTRUMENT_DATABASES=true (default false), database instrumentation now merges Django-specific tags into database driver spans created by supported integrations (psycopg, sqlite3, MySQLdb, etc.) instead of creating duplicate Django database spans. If the database cursor is not already wrapped by a supported integration, Django wraps it and creates a span. This change reduces overhead and duplicate spans while preserving visibility into database operations.
- Removed the ddtrace.settings package. Environment variables should be used to adjust settings.
- … HttpPropagator.inject
- … DEFAULT_RUNTIME_METRICS_INTERVAL.
- Removed the ddtrace.contrib.tornado module. Configure tracing using environment variables and import ddtrace.auto instead.
- … urllib3 and requests. It no longer requires enabling APM instrumentation for urllib3.
- Profiling: adds threading.RLock (reentrant lock) profiling. The Lock profiler now tracks both threading.Lock and threading.RLock usage, providing comprehensive lock contention visibility for Python applications.
- LLM Observability: adds a version argument to LLMObs.pull_dataset.
- … version and latest_version, to provide information on the version of the dataset that is being worked with and the latest global version of the dataset, respectively.
- … (Lock, RLock, Event).
- … HTTPS_PROXY.
- LLM Observability: adds an assessment argument in submit_evaluation(). assessment now refers to whether the evaluation itself passes or fails according to your application, rather than the validity of the evaluation result.
- Fixed an issue where the langchain integration would incorrectly mark Azure OpenAI calls as duplicate llm operations even if the openai integration was enabled. The langchain integration will trace Azure OpenAI spans as workflow spans if there is an equivalent llm span from the openai integration.
- Profiling: DD_PROFILING_API_TIMEOUT no longer has any effect and is marked to be removed in the upcoming 4.0 release. A new environment variable, DD_PROFILING_API_TIMEOUT_MS, is introduced to configure the timeout for uploading profiles to the backend. The default value is 10000 ms (10 seconds).
- … in the _acquire method of the Lock profiler (note: this only occurs when assertions are enabled).
- … version field. The version field is now omitted unless explicitly set by the user.
- … IndexError.
- Fixed an issue where async iteration over model listings (async for model in client.models.list()) caused a TypeError: 'async for' requires an object with __aiter__ method, got coroutine. See issue #14574.
- Fixed KeyError exceptions in test runs when gevent is detected within the environment.
- … ray.init().
- Ray: adds ray.data._internal to the module denylist.
- Fixed a ValueError: Formatting field not found in record: 'dd.service'.
- Fixed a ResourceWarning in multiprocess scenarios.
- Profiling: removed the wrapt library dependency from the Lock Profiler implementation, improving performance and reducing overhead during lock instrumentation.
Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
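For the streamed-token caveat above, the request-side change is a single extra argument. The sketch below only builds the request payload; the model name is a placeholder, and the actual API call is left commented out because it requires a configured OpenAI client:

```python
# Request payload for a streamed chat completion with exact usage reporting.
request_kwargs = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
    # Ask the API to append a final chunk carrying exact token usage,
    # so token counts do not have to be estimated.
    "stream_options": {"include_usage": True},
}

# With a configured client:
#   stream = client.chat.completions.create(**request_kwargs)

print(request_kwargs["stream_options"]["include_usage"])  # → True
```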
ddtrace.Pin object with mongoengine. With this change, the ddtrace library no longer directly supports mongoengine. Mongoengine will be supported through the pymongo integration.pytest_benchmark and pytest_bdd integrations. These plugins are now supported by the regular pytest integration.DD_DYNAMIC_INSTRUMENTATION_UPLOAD_FLUSH_INTERVAL variable.DD_EXCEPTION_DEBUGGING_ENABLED variable.Span.set_tag_str has been removed, use Span.set_tag instead.Span.set_struct_tag has been removed.Span.get_struct_tag has been removed.Span._pprint has been removedSpan.finished setter was removed, please use Span.finish() method instead.Tracer.on_start_span method has been removed.Tracer.deregister_on_start_span method has been removed.ddtrace.trace.Pin has been removed.Span.finish_with_ancestors was removed with no replacement.Span.set_tag typing is now set_tag(key: str, value: Optional[str] = None) -> NoneSpan.get_tag typing is now get_tag(key: str) -> Optional[str]Span.set_tags typing is now set_tags(tags: dict[str, str]) -> NoneSpan.get_tags typing is now get_tags() -> dict[str, str]Span.set_metric typing is now set_metric(key: str, value: int | float) -> NoneSpan.get_metric typing is now get_metric(key: str) -> Optional[int | float]Span.set_metrics typing is now set_metrics(metrics: Dict[str, int | float]) -> NoneSpan.get_metrics typing is now get_metrics() -> dict[str, int | float]Span.record_exception's timestamp and escaped parameters are removedLLMObs.annotate(), LLMObs.export_span(), LLMObs.submit_evaluation(), LLMObs.inject_distributed_headers(), and LLMObs.activate_distributed_headers() now raise exceptions instead of logging. LLM Observability auto-instrumentation is not affected.LLMObs.submit_evaluation_for() has been removed. Please use LLMObs.submit_evaluation() instead for submitting evaluations. To migrate:
- LLMObs.submit_evaluation_for(...) users: rename to LLMObs.submit_evaluation(...).
- LLMObs.submit_evaluation_for(...) users: rename the span_context argument to span, i.e. change LLMObs.submit_evaluation(span_context={"span_id": ..., "trace_id": ...}, ...) to LLMObs.submit_evaluation(span={"span_id": ..., "trace_id": ...}, ...).
- DD_PROFILING_STACK_V2_ENABLED is now removed.
- The freezegun integration is now removed.
- The opentracer package has been removed.
- The google_generativeai integration has been removed, as the google_generativeai library has reached end-of-life. As an alternative, you can use the recommended google_genai library and its corresponding integration instead.
- OpenAI streamed token counts are no longer computed with the tiktoken library, and will instead default to being estimated if not explicitly provided in the OpenAI response object. To guarantee accurate streamed token metrics, set stream_options={"include_usage": True} in the OpenAI request.
- DD_DJANGO_TRACING_MINIMAL now defaults to true. Django ORM, cache, and template instrumentation are disabled by default to eliminate duplicate span creation, since library integrations for database drivers (psycopg, MySQLdb, sqlite3), cache clients (redis, memcached), template renderers (Jinja2), and other supported libraries continue to be traced. This reduces performance overhead by removing redundant Django-layer instrumentation. To restore all Django instrumentation, set DD_DJANGO_TRACING_MINIMAL=false, or enable individual features using DD_DJANGO_INSTRUMENT_DATABASES=true, DD_DJANGO_INSTRUMENT_CACHES=true, and DD_DJANGO_INSTRUMENT_TEMPLATES=true.
- With DD_DJANGO_INSTRUMENT_DATABASES=true (default false), database instrumentation now merges Django-specific tags into database driver spans created by supported integrations (psycopg, sqlite3, MySQLdb, etc.) instead of creating duplicate Django database spans. If the database cursor is not already wrapped by a supported integration, Django wraps it and creates a span. This change reduces overhead and duplicate spans while preserving visibility into database operations.
- The ddtrace.settings package has been removed. Environment variables should be used to adjust settings.
- HttpPropagator.inject
- DEFAULT_RUNTIME_METRICS_INTERVAL
- The ddtrace.contrib.tornado module has been removed. Configure tracing using environment variables and import ddtrace.auto instead.
- urllib3 and requests: enabling APM instrumentation for urllib3 is no longer required.
- Added threading.RLock (reentrant lock) profiling. The Lock profiler now tracks both threading.Lock and threading.RLock usage, providing comprehensive lock-contention visibility for Python applications.
- Added a version argument to LLMObs.pull_dataset.
- Added version and latest_version to provide information on the version of the dataset being worked with and the latest global version of the dataset, respectively.
- Lock, RLock, Event
- Fixed support for HTTPS_PROXY.
- The assessment argument in submit_evaluation(): assessment now refers to whether the evaluation itself passes or fails according to your application, rather than the validity of the evaluation result.
- Fixed an issue where the langchain integration would incorrectly mark Azure OpenAI calls as duplicate llm operations even if the openai integration was enabled. The langchain integration will now trace Azure OpenAI spans as workflow spans if there is an equivalent llm span from the openai integration.
- DD_PROFILING_API_TIMEOUT no longer has any effect and is marked for removal in the upcoming 4.0 release. The new environment variable DD_PROFILING_API_TIMEOUT_MS configures the timeout for uploading profiles to the backend; the default value is 10000 ms (10 seconds).
- Fixed an issue in the _acquire method of the Lock profiler (note: this only occurs when assertions are enabled).
- The version field is now omitted unless explicitly set by the user.
- Fixed an IndexError.
- Fixed an issue where asynchronous iteration (async for model in client.models.list()) caused a TypeError: 'async for' requires an object with __aiter__ method, got coroutine. See issue #14574.
- Fixed KeyError exceptions in test runs when gevent is detected within the environment.
- ray.init()
- Added ray.data._internal to the module denylist.
- Fixed ValueError: Formatting field not found in record: 'dd.service'.
- Fixed a ResourceWarning in multiprocess scenarios.
- Removed the wrapt library dependency from the Lock Profiler implementation, improving performance and reducing overhead during lock instrumentation.

Estimated end-of-life date, accurate to within three months: 08-2026. See the support level definitions for more information.
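The Django instrumentation toggles described in the notes above are ordinary environment variables. A minimal sketch of the non-default "restore everything" configuration, expressed via os.environ for illustration (in practice these would normally be set in the shell or service manifest before the tracer starts):

```python
# Illustrative only: restoring full Django-layer instrumentation now that
# minimal tracing (DD_DJANGO_TRACING_MINIMAL=true) is the default.
import os

# Option 1: turn minimal tracing off entirely.
os.environ["DD_DJANGO_TRACING_MINIMAL"] = "false"

# Option 2: keep minimal tracing but re-enable individual features
# (each of these defaults to false).
os.environ["DD_DJANGO_INSTRUMENT_DATABASES"] = "true"
os.environ["DD_DJANGO_INSTRUMENT_CACHES"] = "true"
os.environ["DD_DJANGO_INSTRUMENT_TEMPLATES"] = "true"
```

Option 1 and option 2 are alternatives; setting only the individual flags keeps the reduced-overhead default while restoring the specific spans you need.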
- The Span.finished setter is deprecated; use the Span.finish() method instead.
- Span.finish_with_ancestors() is deprecated with no alternative.
- The ExperimentResult class's rows and summary_evaluations attributes are deprecated and will be removed in the next major release. ExperimentResult.rows/summary_evaluations will only store the results of the first run iteration for multi-run experiments. Use the ExperimentResult.runs attribute instead to access experiment results and summary evaluations.
- Experiments now accept a runs argument, to assess the true performance of an experiment in the face of the non-determinism of LLMs. Use the new ExperimentResult class's runs attribute to access the results and summary evaluations by run iteration.
- Lock, RLock, Event
- Fixed a ResourceWarning in multiprocess scenarios.
- Removed the wrapt library dependency from the Lock Profiler implementation, improving performance and reducing overhead during lock instrumentation.

Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
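The rows-versus-runs distinction above can be sketched with plain dataclasses. This is a hypothetical model of the shape change, not the real ddtrace classes: the deprecated attributes surface only the first run iteration, while runs exposes every iteration.

```python
# Hypothetical sketch (not ddtrace's implementation) of why .rows and
# .summary_evaluations are deprecated in favor of .runs for multi-run
# experiments.
from dataclasses import dataclass, field

@dataclass
class RunResult:                 # one experiment iteration
    rows: list
    summary_evaluations: dict

@dataclass
class ExperimentResultSketch:
    runs: list = field(default_factory=list)   # preferred: all iterations

    @property
    def rows(self):              # deprecated view: first iteration only
        return self.runs[0].rows if self.runs else []

    @property
    def summary_evaluations(self):   # deprecated view: first iteration only
        return self.runs[0].summary_evaluations if self.runs else {}

result = ExperimentResultSketch(runs=[
    RunResult(rows=[{"score": 0.9}], summary_evaluations={"pass_rate": 0.9}),
    RunResult(rows=[{"score": 0.7}], summary_evaluations={"pass_rate": 0.7}),
])
# result.rows sees only run 0; iterate result.runs to see every iteration.
```

Iterating runs is the forward-compatible access pattern; the deprecated attributes silently drop everything after the first iteration.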
This is a major-version release that contains many backwards-incompatible changes to public APIs. To find which of these your code relies on, follow the "deprecation warnings" instructions here.
dd-trace-py now includes an OpenFeature provider implementation, enabling feature flag evaluation through the OpenFeature API.
This integration is under active design and development. Functionality and APIs are experimental and may change without notice. For more information, see the Datadog documentation at https://docs.datadoghq.com/feature_flags/#overview
- Removed support for configuring the ddtrace.Pin object with mongoengine. With this change, the ddtrace library no longer directly supports mongoengine; mongoengine will be supported through the pymongo integration.
- Removed the pytest_benchmark and pytest_bdd integrations. These plugins are now supported by the regular pytest integration.
- Removed the DD_DYNAMIC_INSTRUMENTATION_UPLOAD_FLUSH_INTERVAL variable.
- Removed the DD_EXCEPTION_DEBUGGING_ENABLED variable.
- Span.set_tag_str has been removed; use Span.set_tag instead.
- Span.set_struct_tag has been removed.
- Span.get_struct_tag has been removed.
- Span._pprint has been removed.
- The Span.finished setter was removed; please use the Span.finish() method instead.
- The Tracer.on_start_span method has been removed.
- The Tracer.deregister_on_start_span method has been removed.
- ddtrace.trace.Pin has been removed.
- Span.finish_with_ancestors was removed with no replacement.
- Span.set_tag typing is now set_tag(key: str, value: Optional[str] = None) -> None
- Span.get_tag typing is now get_tag(key: str) -> Optional[str]
- Span.set_tags typing is now set_tags(tags: dict[str, str]) -> None
- Span.get_tags typing is now get_tags() -> dict[str, str]
- Span.set_metric typing is now set_metric(key: str, value: int | float) -> None
- Span.get_metric typing is now get_metric(key: str) -> Optional[int | float]
- Span.set_metrics typing is now set_metrics(metrics: dict[str, int | float]) -> None
- Span.get_metrics typing is now get_metrics() -> dict[str, int | float]
- Span.record_exception's timestamp and escaped parameters are removed.
- LLMObs.annotate(), LLMObs.export_span(), LLMObs.submit_evaluation(), LLMObs.inject_distributed_headers(), and LLMObs.activate_distributed_headers() now raise exceptions instead of logging. LLM Observability auto-instrumentation is not affected.
- LLMObs.submit_evaluation_for() has been removed. Please use LLMObs.submit_evaluation() instead for submitting evaluations. To migrate:
- LLMObs.submit_evaluation_for(...) users: rename to LLMObs.submit_evaluation(...).
- LLMObs.submit_evaluation_for(...) users: rename the span_context argument to span, i.e. change LLMObs.submit_evaluation(span_context={"span_id": ..., "trace_id": ...}, ...) to LLMObs.submit_evaluation(span={"span_id": ..., "trace_id": ...}, ...).

Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
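The submit_evaluation_for to submit_evaluation migration described above is a method rename plus a keyword-argument rename. A pure-Python sketch (the helper migrate_evaluation_kwargs is hypothetical, not part of ddtrace; the real calls are shown only as comments):

```python
# Illustrative only: the kwarg change when moving from
# LLMObs.submit_evaluation_for(...) to LLMObs.submit_evaluation(...).

def migrate_evaluation_kwargs(old_kwargs: dict) -> dict:
    """Rename the old span_context argument to span; other kwargs pass through."""
    new_kwargs = dict(old_kwargs)
    if "span_context" in new_kwargs:
        new_kwargs["span"] = new_kwargs.pop("span_context")
    return new_kwargs

# Old call (removed in v4.0):
#   LLMObs.submit_evaluation_for(span_context={"span_id": sid, "trace_id": tid}, ...)
# New call:
#   LLMObs.submit_evaluation(span={"span_id": sid, "trace_id": tid}, ...)

old = {"span_context": {"span_id": "123", "trace_id": "abc"}, "label": "accuracy"}
new = migrate_evaluation_kwargs(old)
print(new["span"])
```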
- CI Visibility: fixed an issue where repo tags would be fetched while unshallowing to extract commit metadata, causing performance issues for repositories with a large number of tags.
- LLM Observability: fixed support for HTTPS_PROXY.
- Error Tracking: exception events now store the exception's id instead of the exception object itself, preventing TypeErrors with custom exception objects.

Estimated end-of-life date, accurate to within three months: 08-2026. See the support level definitions for more information.
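The Error Tracking change above can be illustrated in isolation: a custom exception class can disable hashing (or define it oddly), making the exception object unusable as a dict key, whereas id(exc) is always a plain int. The class name below is illustrative, not from ddtrace.

```python
# Sketch of the design choice: keying events by id(exc) instead of by the
# exception object avoids TypeErrors from custom exception classes.

class UnhashableError(Exception):
    __hash__ = None          # some custom exceptions disable hashing

seen = {}
exc = UnhashableError("boom")

# Storing the object itself fails for this class:
try:
    seen[exc] = "event"
except TypeError:
    pass                     # TypeError: unhashable type: 'UnhashableError'

# Storing the id always works:
seen[id(exc)] = "event"
print(id(exc) in seen)  # True
```

The trade-off is that an id only identifies the object while it is alive, which is sufficient for in-flight event bookkeeping.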
- OpenAI streamed token counts are no longer computed with the tiktoken library; to guarantee accurate streamed token metrics, set stream_options={"include_usage": True} in the OpenAI request.
- Span.set_struct_tag is deprecated and will be removed in v4.0.0 with no direct replacement.
- Span.get_struct_tag is deprecated and will be removed in v4.0.0 with no direct replacement.
- Span.set_tag_str is deprecated and will be removed in version 4.0.0. As an alternative to Span.set_tag_str, you can use Span.set_tag instead.
- DD_PROFILING_STACK_V2_ENABLED=false will no longer have an effect starting in 4.0.
- urllib3 and requests: enabling APM instrumentation for urllib3 is no longer required.
- Added threading.RLock (reentrant lock) profiling. The Lock profiler now tracks both threading.Lock and threading.RLock usage, providing comprehensive lock-contention visibility for Python applications.
- Added a version argument to LLMObs.pull_dataset.
- Added version and latest_version to provide information on the version of the dataset being worked with and the latest global version of the dataset, respectively.
- The version field is now omitted unless explicitly set by the user.
- Fixed an IndexError.
- The assessment argument in submit_evaluation(): assessment now refers to whether the evaluation itself passes or fails according to your application, rather than the validity of the evaluation result.
- Fixed an issue where the langchain integration would incorrectly mark Azure OpenAI calls as duplicate llm operations even if the openai integration was enabled. The langchain integration will now trace Azure OpenAI spans as workflow spans if there is an equivalent llm span from the openai integration.
- Fixed an issue where asynchronous iteration (async for model in client.models.list()) caused a TypeError: 'async for' requires an object with __aiter__ method, got coroutine. See issue #14574.
- Fixed KeyError exceptions in test runs when gevent is detected within the environment.
- ray.init()
- Added ray.data._internal to the module denylist.
- Fixed an issue in the _acquire method of the Lock profiler (note: this only occurs when assertions are enabled).
- DD_PROFILING_API_TIMEOUT no longer has any effect and is marked for removal in the upcoming 4.0 release. The new environment variable DD_PROFILING_API_TIMEOUT_MS configures the timeout for uploading profiles to the backend; the default value is 10000 ms (10 seconds).
- Fixed ValueError: Formatting field not found in record: 'dd.service'.
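The stream_options recommendation in the notes above is a request-level setting. A minimal sketch that only builds the request keyword arguments (no API call is made; the model name and message are placeholders):

```python
# Sketch: an OpenAI streaming chat request that asks the API to include
# token usage, so streamed token metrics are exact rather than estimated
# (tiktoken-based counting is no longer used).
request_kwargs = {
    "model": "gpt-4o-mini",   # placeholder model name
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
    # Appends a final stream chunk carrying token usage.
    "stream_options": {"include_usage": True},
}
# With an openai client this would be passed as:
#   client.chat.completions.create(**request_kwargs)
print(request_kwargs["stream_options"])
```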