Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
mlflow
- When DD_API_KEY, DD_APP_KEY, and DD_MODEL_LAB_ENABLED are set, HTTP requests to the MLflow tracking server will include the DD-API-KEY and DD-APPLICATION-KEY headers. #16685

AI Guard
- The block parameter previously defaulted to block=False and now defaults to block=True.
- The integration no longer requires strands-agents>=1.29.0; the HookProvider works with any version that exposes the hooks system.

azure_durable_functions
profiling
- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

runtime metrics

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

remote configuration

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

dynamic instrumentation

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

crashtracking

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

data streams monitoring

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

database monitoring

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

Stats computation

- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.

LLM Observability
- Experiment task functions can now accept an optional metadata parameter. Tasks with the existing (input_data, config) signature continue to work unchanged.
- Adds RemoteEvaluator, which allows users to reference LLM-as-Judge evaluations configured in the Datadog UI by name when running local experiments. For more information, see the documentation: https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide/#using-managed-evaluators
- ephemeral_5m_input_tokens and ephemeral_1h_input_tokens metrics are now reported, distinguishing between 5-minute and 1-hour prompt caches.
- Thinking blocks (type: "thinking") are now captured as role: "reasoning" messages in both streaming and non-streaming responses, as well as in input messages for tool use continuations.
- LiteLLM now extracts reasoning_output_tokens from completion_tokens_details and captures reasoning_content in output messages for OpenAI-compatible providers.

tracer
- Process tag propagation can be disabled by setting DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false.
- Fixes an issue where retries from pytest-rerunfailures and flaky were silently overridden by the ddtrace plugin. External rerun plugins will now drive retries as expected when the Auto Test Retries and Early Flake Detection features are both disabled; otherwise ddtrace's retry mechanism takes precedence and a warning is emitted.
- Fixes an issue where a RuntimeError could be raised when iterating over the context._meta dictionary while creating spans or generating distributed traces.
- Telemetry debug mode is now controlled by its own dedicated environment variable rather than by DD_TRACE_DEBUG. Setting DD_TRACE_DEBUG=true no longer enables telemetry debug mode; to enable it, set DD_INTERNAL_TELEMETRY_DEBUG_ENABLED=true.
- Fixes an issue where cache_creation_input_tokens and cache_read_input_tokens were not captured when using the LiteLLM integration with providers that support prompt caching (e.g., Anthropic, OpenAI, DeepSeek).
- Fixes an issue where SIGTERM was sent instead of honoring --graceful-timeout. #16424
- Fixes an AttributeError crash that occurred when the lock profiler or stack profiler encountered _DummyThread instances. _DummyThread lacks the _native_id attribute, so accessing native_id raises AttributeError; the profiler now falls back to the thread identifier when native_id is unavailable.
- The lock profiler now records a lock acquisition only when the acquire call was successful.
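The process-tag bullets repeated across the sections above all refer to the same opt-out switch. A minimal sketch of disabling it, set in the environment before the instrumented process starts:

```shell
# Opt out of experimental process tag propagation for all of the
# products listed above (profiling, runtime metrics, DSM, DBM, etc.)
export DD_EXPERIMENTAL_PROPAGATE_PROCESS_TAGS_ENABLED=false
```

The variable must be visible to the process that launches the traced application (for example, the shell that runs ddtrace-run).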
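The optional metadata parameter for experiment tasks mentioned under LLM Observability can be pictured as follows. Only the (input_data, config) and (input_data, config, metadata) signatures come from the release note; the task bodies and field names here are invented for illustration:

```python
# Hypothetical experiment tasks; only the signatures mirror the release note.

def task(input_data, config, metadata=None):
    # New-style task: may receive optional per-run metadata alongside its config.
    run_id = (metadata or {}).get("run_id", "none")
    return {"answer": input_data["question"].upper(), "run_id": run_id}

def legacy_task(input_data, config):
    # Existing (input_data, config) tasks keep working unchanged.
    return {"answer": input_data["question"].upper()}
```

A task that ignores metadata simply omits the third parameter, as legacy_task does.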
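The context._meta fix above is an instance of the classic CPython hazard of mutating a dict while iterating it. A standalone reproduction with a plain dict (nothing ddtrace-specific) and the usual snapshot workaround:

```python
# Mutating a dict while iterating it raises RuntimeError in CPython.
meta = {"runtime-id": "abc", "language": "python"}
try:
    for key in meta:              # iterating...
        meta[key + ".copy"] = ""  # ...while inserting changes the dict's size
    raised = False
except RuntimeError:              # "dictionary changed size during iteration"
    raised = True

# The usual fix: iterate over a snapshot of the keys instead.
meta = {"runtime-id": "abc", "language": "python"}
for key in list(meta):
    meta[key + ".copy"] = ""
```

The same hazard applies when another thread mutates the dict concurrently, which is why distributed-trace generation over a shared mapping needs a snapshot or a lock.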
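The _DummyThread fix above amounts to a defensive attribute lookup: prefer the OS-level native_id (Python 3.8+) and fall back to the Python-level ident when it is missing. A sketch of that fallback (the helper name is invented, not ddtrace's code):

```python
import threading

def os_thread_id(thread):
    # Prefer the OS-assigned native_id, but fall back to the thread's
    # ident for threads that never set it, such as the _DummyThread
    # case described in the profiler fix above.
    native = getattr(thread, "native_id", None)
    return native if native is not None else thread.ident
```

For ordinary threads the two identifiers differ, but both are stable for the thread's lifetime, so either is usable as a profiling key.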