Estimated end-of-life date (accurate to within three months): 05-2027. See the support level definitions for more information.
mlflow
When DD_API_KEY, DD_APP_KEY, and DD_MODEL_LAB_ENABLED are set, HTTP requests to the MLflow tracking server will include the DD-API-KEY and DD-APPLICATION-KEY headers. #16685

AI Guard
The block option, which previously defaulted to block=False, now defaults to block=True.
Supports strands-agents>=1.29.0; the HookProvider works with any version that exposes the hooks system.

azure_durable_functions
profiling
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

runtime metrics
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

remote configuration
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

dynamic instrumentation
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

crashtracking
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

data streams monitoring
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

database monitoring
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

Stats computation
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

LLM Observability
Experiment tasks can now optionally receive dataset record metadata as a third metadata parameter. Tasks with the existing (input_data, config) signature continue to work unchanged.
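As a sketch (the task logic and the "difficulty" metadata key are hypothetical, not from ddtrace), a task can opt into the new third parameter while two-argument tasks keep working:

```python
# Hypothetical experiment task using the new optional third `metadata`
# parameter, which receives the dataset record's metadata.
def my_task(input_data, config, metadata):
    # Record-level metadata (e.g. a "difficulty" tag) can steer the task.
    if metadata.get("difficulty") == "hard":
        return input_data["question"].upper()
    return input_data["question"]

# Tasks with the existing (input_data, config) signature still work:
def legacy_task(input_data, config):
    return input_data["question"]
```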
This introduces RemoteEvaluator, which allows users to reference LLM-as-Judge evaluations configured in the Datadog UI by name when running local experiments. For more information, see the documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide/#using-managed-evaluators
This adds cache creation breakdown metrics for the Anthropic integration. When making Anthropic calls with prompt caching, ephemeral_5m_input_tokens and ephemeral_1h_input_tokens metrics are now reported, distinguishing between 5-minute and 1-hour prompt caches.
Adds support for reasoning and extended thinking content in Anthropic, LiteLLM, and OpenAI-compatible integrations. Anthropic thinking blocks (type: "thinking") are now captured as role: "reasoning" messages in both streaming and non-streaming responses, as well as in input messages for tool use continuations. LiteLLM now extracts reasoning_output_tokens from completion_tokens_details and captures reasoning_content in output messages for OpenAI-compatible providers.
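For illustration only (simplified block and message shapes, not ddtrace internals), the mapping of Anthropic thinking blocks to role: "reasoning" messages looks roughly like:

```python
# Simplified sketch: map Anthropic content blocks to LLMObs-style messages.
# Blocks of type "thinking" become role: "reasoning" messages.
def to_messages(content_blocks):
    role_by_type = {"thinking": "reasoning", "text": "assistant"}
    messages = []
    for block in content_blocks:
        role = role_by_type.get(block["type"])
        if role is not None:
            content = block.get("thinking") or block.get("text", "")
            messages.append({"role": role, "content": content})
    return messages
```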
LLMJudge now forwards any extra client_options to the underlying provider client constructor. This allows passing provider-specific options such as base_url, timeout, organization, or max_retries directly through client_options.
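A minimal sketch of the forwarding pattern (FakeClient and build_client are stand-ins for the real provider client and LLMJudge internals; only the client_options names come from the note above):

```python
# Stand-in provider client; the real one would be e.g. an OpenAI client.
class FakeClient:
    def __init__(self, api_key, base_url="https://api.example.com", timeout=60, max_retries=2):
        self.base_url = base_url
        self.timeout = timeout
        self.max_retries = max_retries

def build_client(client_cls, api_key, client_options=None):
    # Extra client_options are forwarded verbatim to the constructor.
    return client_cls(api_key=api_key, **(client_options or {}))

client = build_client(FakeClient, "sk-test", {"base_url": "http://localhost:1234", "timeout": 5})
```

Options not present in client_options keep the constructor's defaults (max_retries stays 2 here).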
Dataset records' tags can now be operated on with three new Dataset methods: dataset.add_tags, dataset.remove_tags, and dataset.replace_tags. All three methods accept an int indicating the zero-based index of the record to operate on, and a list of strings in key:value format representing the tags. For example, if the tag "env:prod" exists on the first record of the dataset ds, calling ds.remove_tags(0, ["env:prod"]) will update the local state of the dataset record to have the "env:prod" tag removed.
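To illustrate the call shapes, here is a minimal in-memory stand-in that mimics the described semantics (FakeDataset is not the real ddtrace Dataset class):

```python
# Minimal stand-in mimicking the three tag methods on local record state.
class FakeDataset:
    def __init__(self, records):
        self._records = records  # each record: {"tags": ["key:value", ...]}

    def add_tags(self, index, tags):
        self._records[index]["tags"].extend(tags)

    def remove_tags(self, index, tags):
        self._records[index]["tags"] = [
            t for t in self._records[index]["tags"] if t not in tags
        ]

    def replace_tags(self, index, tags):
        self._records[index]["tags"] = list(tags)

ds = FakeDataset([{"tags": ["env:prod"]}])
ds.remove_tags(0, ["env:prod"])  # record 0 no longer carries env:prod
ds.add_tags(0, ["team:ml"])
```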
Change experiment execution to run evaluators immediately after each record's task completes instead of batching all tasks first. Experiment spans and evaluation metrics are now posted incrementally as records complete rather than waiting until the end. This improves progress visibility and preserves partial results if a run fails midway.
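An illustrative sketch of the new execution order (not ddtrace internals; the record and evaluator shapes are made up):

```python
# Evaluators run immediately after each record's task completes,
# so partial results survive a mid-run failure.
def run_experiment(records, task, evaluators):
    results = []
    for record in records:
        output = task(record)
        evals = {name: fn(record, output) for name, fn in evaluators.items()}
        results.append({"record": record, "output": output, "evals": evals})
        # In ddtrace, spans and evaluation metrics would be posted here,
        # incrementally, rather than after all tasks finish.
    return results

evaluators = {"exact_match": lambda record, output: output == record["expected"]}
out = run_experiment([{"q": "2+2", "expected": "4"}], lambda record: "4", evaluators)
```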
Adds support for Pydantic AI evaluations in LLM Observability Experiments by allowing users to pass a Pydantic AI evaluator (which inherits from Evaluator) to an LLM Obs experiment.
Example:

```python
from pydantic_evals.evaluators import EqualsExpected

from ddtrace.llmobs import LLMObs

dataset = LLMObs.create_dataset(
    dataset_name="<DATASET_NAME>",
    description="<DATASET_DESCRIPTION>",
    records=[RECORD_1, RECORD_2, RECORD_3, ...],
)

def my_task(input_data, config):
    return input_data["output"]

def my_summary_evaluator(inputs, outputs, expected_outputs, evaluators_results):
    return evaluators_results["Correctness"].count(True)

equals_expected = EqualsExpected()

experiment = LLMObs.experiment(
    name="<EXPERIMENT_NAME>",
    task=my_task,
    dataset=dataset,
    evaluators=[equals_expected],
    # optional, used to summarize the experiment results
    summary_evaluators=[my_summary_evaluator],
    description="<EXPERIMENT_DESCRIPTION>",
)

result = experiment.run()
```
tracer
Setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false disables process tag propagation.

pymongo
Adds DD_TRACE_MONGODB_OBFUSCATION to control whether the mongodb.query tag is obfuscated. Resource names always remain normalized regardless of the value. To preserve raw mongodb.query values, pair with DD_APM_OBFUSCATION_MONGODB_ENABLED=false on the Datadog Agent. See the Datadog trace obfuscation documentation.

google_cloud_pubsub
Adds tracing for the google-cloud-pubsub library. Instruments PublisherClient.publish() and SubscriberClient.subscribe() to generate spans for message publishing and consuming, with optional distributed trace context propagation via message attributes. Use DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_ENABLED to control context propagation (default: True) and DD_GOOGLE_CLOUD_PUBSUB_PROPAGATION_AS_SPAN_LINKS to attach propagated context as span links instead of re-parenting subscriber spans under the producer trace (default: False).

Other changes and fixes
Fixed an issue where not all values in multipart/form-data bodies were inspected, which could allow an attacker to bypass WAF inspection by hiding a malicious value among safe ones.
Fixed an issue where multiple UNVALIDATED_REDIRECT vulnerabilities could be reported for a single redirect() call.
Fixed an issue where GraphInterrupt exceptions were incorrectly marked as errors in APM traces. GraphInterrupt is a control-flow exception used in LangGraph's human-in-the-loop workflows and should not be treated as an error condition.
Fixed an anyio.ClosedResourceError raised during MCP server session teardown when the ddtrace MCP integration is enabled.
Fixed an issue where retries from pytest-rerunfailures and flaky were silently overridden by the ddtrace plugin. With this change, external rerun plugins will now drive retries as expected when the Auto Test Retries and Early Flake Detection features are both disabled; otherwise, our retry mechanism takes precedence and a warning is emitted.
Fixed a RuntimeError that occurred when the git binary was not available. Git metadata upload is now skipped gracefully with a warning instead of aborting pytest startup.
Fixed an issue where a RuntimeError could be raised when iterating over the context._meta dictionary while creating spans or generating distributed traces.
Fixed an issue where telemetry debug mode was controlled by DD_TRACE_DEBUG instead of its own dedicated environment variable DD_INTERNAL_TELEMETRY_DEBUG_ENABLED. Setting DD_TRACE_DEBUG=true no longer enables telemetry debug mode. To enable telemetry debug mode, set DD_INTERNAL_TELEMETRY_DEBUG_ENABLED=true.
Fixed an issue where cache_creation_input_tokens and cache_read_input_tokens were not captured when using the LiteLLM integration with providers that support prompt caching (e.g., Anthropic, OpenAI, Deepseek).
Fixed an issue where the @llm decorator raised an LLMObsAnnotateSpanError exception when a decorated function returned a value that could not be parsed as LLM messages. Note that manual annotation still overrides this automatic annotation.
Fixed an issue where the @llm decorator did not automatically annotate the return value as output in traces. The decorator now captures the return value and annotates it as output, consistent with the @workflow and @task decorators. Manual annotations via LLMObs.annotate() still take precedence.
Fixed an issue where Pydantic models were serialized as repr() strings instead of JSON. Pydantic v1 and v2 models are now properly serialized using .dict() and model_dump() respectively.
Fixed an issue where manually created LLMObs spans (e.g. via LLMObs.workflow()) and OTel-bridged spans (e.g. from Strands Agents with DD_TRACE_OTEL_ENABLED=1) produced separate LLMObs traces instead of a single unified trace.
LLM Observability payload and event size limits are now configurable via the DD_LLMOBS_PAYLOAD_SIZE_BYTES and DD_LLMOBS_EVENT_SIZE_BYTES environment variables, respectively. These default to 5242880 (5 MiB) and 5000000 (5 MB), matching the previous hardcoded values.
Fixed an issue related to the tool_search_tool_regex tool type.
Fixed an issue where streamed Anthropic responses reported input_tokens from the initial message_start chunk instead of the final message_delta chunk, which contains the accurate cumulative input token count.
Fixed an issue where workers were terminated with SIGTERM instead of honoring --graceful-timeout. #16424
Fixed an AttributeError crash that occurred when the lock profiler or stack profiler encountered _DummyThread instances. _DummyThread lacks the _native_id attribute, so accessing native_id raises AttributeError. The profiler now falls back to using the thread identifier when native_id is unavailable.
The lock profiler now only records an event when the acquire call was successful.
gevent.wait called with the objects keyword argument (e.g. gevent.wait(objects=[g1, g2])) now correctly links the greenlets to their parent task. Additionally, greenlets joined via gevent.joinall or gevent.wait from a user-level greenlet are now attributed to that greenlet instead of always being attributed to the Hub.