Estimated end-of-life date, accurate to within three months: 05-2027. See the support level definitions for more information.
ASM
The ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail class can be registered as a custom guardrail in the LiteLLM proxy to evaluate requests and responses against AI Guard security policies. This requires the LiteLLM proxy guardrails API v2, available since litellm>=1.46.1.

azure_cosmos
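As a sketch, the AI Guard guardrail described under ASM would be referenced from the LiteLLM proxy's config.yaml. This is a hypothetical fragment: the guardrail_name and mode values are assumptions, and the exact way the guardrails v2 API references a custom guardrail class should be checked against the LiteLLM documentation.

```yaml
guardrails:
  - guardrail_name: "datadog-ai-guard"   # assumed name; any identifier works
    litellm_params:
      # fully qualified path to the ddtrace guardrail class
      guardrail: ddtrace.appsec.ai_guard.integrations.litellm.DatadogAIGuardGuardrail
      mode: "pre_call"                   # evaluate requests before the LLM call
```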
CI Visibility
Set DD_AGENTLESS_LOG_SUBMISSION_ENABLED=true for agentless setups, or DD_LOGS_INJECTION=true when using the Datadog Agent.

tracing

Set OTEL_TRACES_EXPORTER=otlp to send spans to an OTLP endpoint instead of the Datadog Agent.

LLM Observability
Adds the decorator tag to LLM Observability spans that are traced by a function decorator.

Adds support for using a pydantic_evals ReportEvaluator as a summary evaluator when its evaluate return annotation is exactly ScalarResult. The scalar value is recorded as the summary evaluation. Report evaluators that declare a broader analysis return type (for example, the full ReportAnalysis union) are not accepted as summary evaluators; use a class-based or function summary evaluator instead. Examples and further documentation can be found in our documentation.

Example:
from pydantic_evals.evaluators import EqualsExpected, ReportEvaluator, ReportEvaluatorContext
from pydantic_evals.reporting.analyses import ScalarResult

from ddtrace.llmobs import LLMObs

dataset = LLMObs.create_dataset(
    dataset_name="<DATASET_NAME>",
    description="<DATASET_DESCRIPTION>",
    records=[RECORD_1, RECORD_2, RECORD_3, ...]
)

class TotalCasesEvaluator(ReportEvaluator):
    def evaluate(self, ctx: ReportEvaluatorContext) -> ScalarResult:
        return ScalarResult(
            title='Total Cases',
            value=len(ctx.report.cases),
            unit='cases',
        )

def my_task(input_data, config):
    return input_data["output"]

equals_expected = EqualsExpected()
summary_evaluator = TotalCasesEvaluator()

experiment = LLMObs.experiment(
    name="<EXPERIMENT_NAME>",
    task=my_task,
    dataset=dataset,
    evaluators=[equals_expected],
    summary_evaluators=[summary_evaluator],
    description="<EXPERIMENT_DESCRIPTION>."
)
result = experiment.run()
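For report evaluators with a broader return type, the notes above suggest a function summary evaluator instead. A minimal sketch follows; the signature (lists of per-case inputs, outputs, and expected outputs, plus per-evaluator results) is our assumption about the experiments API, so verify it against the LLM Observability documentation:

```python
def failure_rate(inputs, outputs, expected_outputs, evaluators_results):
    # Hypothetical summary evaluator: fraction of cases whose task output
    # does not match the expected output.
    if not outputs:
        return 0.0
    misses = sum(1 for out, exp in zip(outputs, expected_outputs) if out != exp)
    return misses / len(outputs)
```

It would then be passed alongside the class-based evaluator, e.g. summary_evaluators=[summary_evaluator, failure_rate].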
Fixes an issue where a ModuleNotFoundError could be raised at startup in Python environments without the _ctypes extension module.

Fixes an issue where integration-generated spans (e.g. invoke_agent) were incorrectly appearing as siblings of their SDK parent span (e.g. call_agent) rather than being nested under it.

Fixes an issue where model_name and model_provider were reported on AWS Bedrock LLM spans as the full model_id identifier value (e.g., "amazon.nova-lite-v1:0") and "amazon_bedrock", respectively. Bedrock spans' model_name and model_provider now correctly match backend pricing data, which enables features including cost tracking.

Fixes an issue where deferred tool loading (defer_loading=True) in the Anthropic and OpenAI integrations caused LLMObs span payloads to include full tool descriptions and JSON schemas for every tool in a large catalog. Deferred tool definitions now have their description and schema stripped from span metadata, with only the tool name preserved.
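Regarding the _ctypes startup issue, a quick way to check whether a given interpreter build ships the _ctypes extension module (minimal or statically linked Python builds sometimes omit it):

```python
import importlib.util

# find_spec returns None when the extension module is absent from this build
has_ctypes = importlib.util.find_spec("_ctypes") is not None
print(has_ctypes)
```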