Released on: 2023-07-10
Refactor the SBOM collection parameters from:
conf.d/container_lifecycle.d/conf.yaml existence (A) # to schedule the container lifecycle long running check
conf.d/container_image.d/conf.yaml existence (B) # to schedule the container image metadata long running check
conf.d/sbom.d/conf.yaml existence (C) # to schedule the SBOM long running check
Inside datadog.yaml:
container_lifecycle:
enabled: (D) # Used to control the start of the container_lifecycle forwarder but has been decommissioned by #16084 (7.45.0-rc)
dd_url: # \
additional_endpoints: # |
use_compression: # |
compression_level: # > generic parameters for the generic EVP pipeline
… # |
use_v2_api: # /
container_image:
enabled: (E) # Used to control the start of the container_image forwarder but has been decommissioned by #16084 (7.45.0-rc)
dd_url: # \
additional_endpoints: # |
use_compression: # |
compression_level: # > generic parameters for the generic EVP pipeline
… # |
use_v2_api: # /
sbom:
enabled: (F) # controls host SBOM collection and does **not** control container-related SBOM since #16084 (7.45.0-rc)
dd_url: # \
additional_endpoints: # |
use_compression: # |
compression_level: # > generic parameters for the generic EVP pipeline
… # |
use_v2_api: # /
analyzers: (G) # trivy analyzers used for host SBOM collection
cache_directory: (H)
clear_cache_on_exit: (I)
use_custom_cache: (J)
custom_cache_max_disk_size: (K)
custom_cache_max_cache_entries: (L)
cache_clean_interval: (M)
container_image_collection:
metadata:
enabled: (N) # Controls the collection of the container image metadata in workload meta
sbom:
enabled: (O)
use_mount: (P)
scan_interval: (Q)
scan_timeout: (R)
analyzers: (S) # trivy analyzers used for containers SBOM collection
check_disk_usage: (T)
min_available_disk: (U)
to:
conf.d/{container_lifecycle,container_image,sbom}.d/conf.yaml no longer needs to be created. A default version is now always shipped with the Agent Docker image, with an underscore-prefixed ad_identifier that the Agent synthesizes at runtime based on the {container_lifecycle,container_image,sbom}.enabled parameters.
Inside datadog.yaml:
container_lifecycle:
enabled: (A) # Replaces the need for creating a conf.d/container_lifecycle.d/conf.yaml file
dd_url: # \
additional_endpoints: # |
use_compression: # |
compression_level: # > unchanged generic parameters for the generic EVP pipeline
… # |
use_v2_api: # /
container_image:
enabled: (B) # Replaces the need for creating a conf.d/container_image.d/conf.yaml file
dd_url: # \
additional_endpoints: # |
use_compression: # |
compression_level: # > unchanged generic parameters for the generic EVP pipeline
… # |
use_v2_api: # /
sbom:
enabled: (C) # Replaces the need for creating a conf.d/sbom.d/conf.yaml file
dd_url: # \
additional_endpoints: # |
use_compression: # |
compression_level: # > unchanged generic parameters for the generic EVP pipeline
… # |
use_v2_api: # /
cache_directory: (H)
clear_cache_on_exit: (I)
cache: # Factorize all settings related to the custom cache
enabled: (J)
max_disk_size: (K)
max_cache_entries: (L)
clean_interval: (M)
host: # host SBOM parameters that previously sat directly under `sbom`
enabled: (F) # sbom.host.enabled replaces sbom.enabled
analyzers: (G) # sbom.host.analyzers replaces sbom.analyzers
container_image: # sbom.container_image replaces container_image_collection.sbom
enabled: (O)
use_mount: (P)
scan_interval: (Q)
scan_timeout: (R)
analyzers: (S) # trivy analyzers used for containers SBOM collection
check_disk_usage: (T)
min_available_disk: (U)
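The before/after tables above amount to a key-renaming map. As a rough illustration, here is a small Python sketch of that mapping for a flat (dotted-key) view of datadog.yaml; the `OLD_TO_NEW` table and `migrate` helper are hypothetical, not part of the Agent, and only cover the renames explicitly stated above (letters refer to the parameter labels).

```python
# Illustrative old-key -> new-key map for the SBOM refactor described above.
OLD_TO_NEW = {
    "sbom.enabled":                            "sbom.host.enabled",             # (F)
    "sbom.analyzers":                          "sbom.host.analyzers",           # (G)
    "sbom.use_custom_cache":                   "sbom.cache.enabled",            # (J)
    "sbom.custom_cache_max_disk_size":         "sbom.cache.max_disk_size",      # (K)
    "sbom.custom_cache_max_cache_entries":     "sbom.cache.max_cache_entries",  # (L)
    "sbom.cache_clean_interval":               "sbom.cache.clean_interval",     # (M)
    # sbom.container_image replaces container_image_collection.sbom (O)-(U):
    "container_image_collection.sbom.enabled":       "sbom.container_image.enabled",
    "container_image_collection.sbom.use_mount":     "sbom.container_image.use_mount",
    "container_image_collection.sbom.scan_interval": "sbom.container_image.scan_interval",
    "container_image_collection.sbom.scan_timeout":  "sbom.container_image.scan_timeout",
    "container_image_collection.sbom.analyzers":     "sbom.container_image.analyzers",
}


def migrate(flat_config: dict) -> dict:
    """Remap old dotted keys to their new names; unknown keys pass through."""
    return {OLD_TO_NEW.get(key, key): value for key, value in flat_config.items()}
```

Keys not listed (e.g. `sbom.cache_directory` (H) and `sbom.clear_cache_on_exit` (I), or the generic EVP pipeline parameters) keep their names and pass through unchanged.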
This change adds support for ingesting information such as database settings and schemas as database "metadata".
Add the capability for the security-agent compliance module to export detailed Kubernetes node configurations.
Add `unsafe-disable-verification` flag to skip TUF/in-toto verification when downloading and installing wheels with the `integrations install` command.
Add `container.memory.working_set` metric on Linux (computed as Usage - InactiveFile) and Windows (mapped to Private Working Set).
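As a rough illustration of the Linux formula (Usage - InactiveFile), here is a minimal sketch; the helper name is hypothetical and the clamp at zero is an assumption for the edge case where inactive file pages exceed reported usage.

```python
def working_set_bytes(usage_bytes: int, inactive_file_bytes: int) -> int:
    """Working set as described above: total memory usage minus
    inactive file-backed page cache, clamped at zero."""
    return max(usage_bytes - inactive_file_bytes, 0)
```

On cgroup v1 the two inputs would come from `memory.usage_in_bytes` and the `total_inactive_file` field of `memory.stat`.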
Enabling dogstatsd_metrics_stats_enable will now enable dogstatsd_logging_enabled. When enabled, dogstatsd_logging_enabled generates dogstatsd log files at:
Windows: c:\programdata\datadog\logs\dogstatsd_info\dogstatsd-stats.log
Linux: /var/log/datadog/dogstatsd_info/dogstatsd-stats.log
MacOS: /opt/datadog-agent/logs/dogstatsd_info/dogstatsd-stats.log
These log files are also automatically attached to the flare.
You can adjust the dogstatsd-stats logging configuration by using:
dogstatsd_log_file_max_size (SizeInBytes, default: "10Mb")
dogstatsd_log_file_max_rolls (Int, default: 3)
The `network_config.enable_http_monitoring` configuration has changed to `service_monitoring_config.enable_http_monitoring`.
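Assuming the standard datadog.yaml layout, the dogstatsd-stats options above could be combined as follows (values shown are the defaults quoted in the note; this is a sketch, not a full configuration):

```yaml
# datadog.yaml (excerpt)
dogstatsd_metrics_stats_enable: true  # now also enables dogstatsd_logging_enabled
dogstatsd_log_file_max_size: "10Mb"   # SizeInBytes, default
dogstatsd_log_file_max_rolls: 3       # Int, default
```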
Add Oracle execution plans
Add Oracle query metrics
Add support for Oracle RDS multi-tenant
`agent status -v` now shows verbose diagnostic information. Added tailer-specific stats to the verbose status page with improved auto multi-line detection information.
The `health` command from the Agent and Cluster Agent now has a configurable timeout (60 seconds by default).
1.19.10
`flush_timestamp` to payload.
`peer.service`.
`span.kind`.
0.47.9, which has fixes to improve efficiency when fetching beans, fixes for process attachment in some JDK versions, and fixes a thread leak.
`auto_multi_line_detection`, `auto_multi_line_sample_size`, and `auto_multi_line_match_threshold` were not working when set through a pod annotation or container label.
`device_ip` to `exporter_ip`.
When running with `hostNetwork: true`, the leader election mechanism was using a node name instead of the pod name, which broke the "follower to leader" forwarding mechanism. This change introduces the `DD_POD_NAME` environment variable as a more reliable way to set the cluster-agent pod name; it is expected to be filled by the Kubernetes downward API.
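The `DD_POD_NAME` variable mentioned in the last item is typically populated via the Kubernetes downward API, for example in the cluster-agent container spec (an excerpt, not a full manifest):

```yaml
# cluster-agent container spec (excerpt)
env:
  - name: DD_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```

The downward API guarantees the value matches the pod's own name, which is what the "follower to leader" forwarding mechanism needs when `hostNetwork: true` makes the node name ambiguous.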