Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
The pin parameter in ddtrace.contrib.dbapi.TracedConnection, ddtrace.contrib.dbapi.TracedCursor, and ddtrace.contrib.dbapi_async.TracedAsyncConnection is deprecated and will be removed in version 5.0.0. To manage the configuration of DB tracing, please use integration configuration and environment variables.
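As a sketch of the replacement approach, a DB integration can be configured through environment variables instead of a Pin. The sqlite3 integration and the service name below are hypothetical examples, and the DD_&lt;INTEGRATION&gt;_SERVICE / DD_TRACE_&lt;INTEGRATION&gt;_ENABLED naming pattern is an assumption based on ddtrace's configuration conventions:

```python
import os

# Hypothetical example: configure the sqlite3 DB integration via environment
# variables (set before ddtrace is imported) instead of passing a Pin.
os.environ["DD_SQLITE3_SERVICE"] = "orders-db"    # service name on sqlite3 spans
os.environ["DD_TRACE_SQLITE3_ENABLED"] = "true"   # keep the integration enabled
```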
ASM
The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. Requires the LiteLLM proxy guardrails API v2, available since litellm>=1.46.1.
azure_cosmos
CI Visibility
Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.
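For illustration, the two modes described above might be selected like this (a sketch; the choice depends on whether the CI environment runs a Datadog Agent):

```python
import os

# Agentless CI setup: submit test logs directly to Datadog (sketch).
os.environ["DD_AGENTLESS_LOG_SUBMISSION_ENABLED"] = "true"

# Alternatively, when a Datadog Agent is available, rely on log injection:
# os.environ["DD_LOGS_INJECTION"] = "true"
```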
llama_index
Adds support for llama-index-core>=0.11.0. Traces LLM calls, query engines, retrievers, embeddings, and agents. See the llama_index documentation for more information.
tracing
Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.
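A minimal sketch of routing spans to an OTLP collector; the endpoint URL is a placeholder, and OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry environment variable for pointing at a collector:

```python
import os

# Send spans over OTLP instead of to the Datadog Agent (sketch).
os.environ["OTEL_TRACES_EXPORTER"] = "otlp"

# Placeholder collector endpoint; adjust to your OTLP receiver.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
```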
LLM Observability
Adds the decorator tag to LLM Observability spans that are traced by a function decorator.
pydantic_evals
A pydantic_evals ReportEvaluator can be used as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example, the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found [here](https://docs.datadoghq.com/llm_observability/guide/evaluation_developer_guide).

Example:
```python
from pydantic_evals.evaluators import EqualsExpected  # import added: used below
from pydantic_evals.evaluators import ReportEvaluator
from pydantic_evals.evaluators import ReportEvaluatorContext
from pydantic_evals.reporting.analyses import ScalarResult

from ddtrace.llmobs import LLMObs

dataset = LLMObs.create_dataset(
    dataset_name="<DATASET_NAME>",
    description="<DATASET_DESCRIPTION>",
    records=[RECORD_1, RECORD_2, RECORD_3, ...],
)


class TotalCasesEvaluator(ReportEvaluator):
    # The return annotation must be exactly ScalarResult for this evaluator
    # to be accepted as a summary evaluator.
    def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
        return ScalarResult(
            title="Total Cases",
            value=len(ctx.report.cases),
            unit="cases",
        )


def my_task(input_data, config):
    return input_data["output"]


equals_expected = EqualsExpected()
summary_evaluator = TotalCasesEvaluator()

experiment = LLMObs.experiment(
    name="<EXPERIMENT_NAME>",
    task=my_task,
    dataset=dataset,
    evaluators=[equals_expected],
    summary_evaluators=[summary_evaluator],
    description="<EXPERIMENT_DESCRIPTION>.",
)
result = experiment.run()
```
Fixed an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.
Fixed an issue where spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.
Fixed an issue where model_name and model_provider were reported on AWS Bedrock LLM spans as the full model_id identifier value (e.g., "amazon.nova-lite-v1:0") and "amazon_bedrock", respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.
Fixed an issue where deferred tool loading (defer_loading=True) in the Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.
Fixed an issue where abrupt process termination (e.g. os._exit, SIGKILL, segfault) caused buffered test events to be lost. To enable eager flushing, set DD_TRACE_PARTIAL_FLUSH_MIN_SPANS=1.
Fixed an issue where a failure of the /search_commits endpoint caused the git metadata upload to fall back to sending the full 30-day commit history instead of aborting. This fallback could trigger cascading write load on the backend. The upload now aborts when search_commits fails, matching the behavior when the /packfile upload itself fails.