- Added an `owners` parameter to `define_asset_job`, mirroring the field on regular jobs (see the sketch after this list).
- `LOGS_CAPTURED` event messages no longer say "Started capturing logs", since compute log managers typically upload logs on completion rather than streaming in real time.
- Added `dagster-clickhouse`, `dagster-clickhouse-pandas`, and `dagster-clickhouse-polars` libraries with native ClickHouse resources, IO managers, and `dg` components.
- `EcsUserCodeLauncher` now accepts a `repository_credentials` config option, allowing ECR credentials to be configured at the agent or deployment level instead of only per code location.
- `dagster-postgres` now supports workload identity federation (WIF) via an `auth_provider` config option (`azure_wif`, `gcp_wif`, or `aws_wif`), with optional extras `dagster-postgres[azure]`, `dagster-postgres[gcp]`, and `dagster-postgres[aws]`. The Helm chart supports WIF via `global.postgresqlAuthWifEnabled`. (Thanks, @JohnMav!)
- `dg plus deploy configure`.
- Fixed `key:value` autocompletion in search inputs not advancing into value-suggestion context when accepting a key with the Enter key.
- Fixed an issue where the `--defs-state-info` argument was applied to the multipex server command but not to the gRPC server command in serverless deployments.
- Fixed an issue where `enable_duplicate_source_asset_keys` could emit duplicate dependency entries and ambiguous source metadata when multiple dbt sources resolved to the same asset key.
- Fixed an issue where the `NO_PROXY` or `no_proxy` environment variable did not affect the configuration of Kubernetes API calls.
- Replaced `validator` usage with `field_validator` to eliminate Pydantic deprecation warnings.
- Added a `dg` CLI configuration option for setting a custom virtual environment path.

The Dagster+ Terraform provider lets platform teams manage deployments, access controls, alerting, and more as code. Define entire environments declaratively, review changes through pull requests, and integrate Dagster+ into your existing infrastructure workflows.
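A minimal sketch of the new `owners` parameter on `define_asset_job`; the asset, job name, and owner values are illustrative placeholders.

```python
from dagster import Definitions, asset, define_asset_job

@asset
def my_asset():
    return 1

# owners on define_asset_job, mirroring the field on regular jobs;
# the specific owner values here are illustrative.
my_job = define_asset_job(
    name="my_job",
    selection=[my_asset],
    owners=["team:data-platform", "alice@example.com"],
)

defs = Definitions(assets=[my_asset], jobs=[my_job])
```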
- Added an `azuredevops` kind tag (see the sketch after this list).
- The `dg utils integrations` sub-command has been removed.
- `DatabricksClientResource` now accepts a `credentials_strategy` argument, enabling federated and custom authentication flows via the Databricks SDK's `CredentialsStrategy` protocol. (Thanks, @hbellur0526!)
- Fixed an issue where the `--api-token` flag was ignored when passed to `dg` CLI commands.
- Fixed an issue in `dg api asset-checks` where the command used a non-existent top-level GraphQL resolver path.
- Fixed a `TypeError` when `credentials_strategy` was `None`.

AI agents that only understand business definitions without knowing whether the underlying pipeline actually succeeded are confidently wrong; operational context from the orchestrator is the missing piece.
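A quick sketch of applying the new kind tag via the existing `kinds` parameter on `@asset`; the asset name is illustrative.

```python
from dagster import asset

# Tag an asset with the new azuredevops kind so it surfaces
# with the matching kind filter in the UI.
@asset(kinds={"azuredevops"})
def pipeline_metadata():
    ...
```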
Once your pipelines span multiple Databricks workspaces, you're no longer orchestrating a single system; you're coordinating a distributed one.
- Added `PipesCompositeMessageReader` (preview) to support multiple concurrent message streams in a single Pipes session.
- Added `sensor:`, `schedule:`, and `job:` attribute support to the asset selection syntax (e.g., `sensor:my_sensor`, `job:my_job`).
- Added `automation_type:` attribute support to the asset selection syntax, allowing queries like `automation_type:schedule` or `automation_type:sensor`. (Thanks, @bengotow!)
- State-backed components (`AirbyteWorkspaceComponent`, `FivetranWorkspaceComponent`) now default to `LOCAL_FILESYSTEM` state storage instead of `legacy_code_server_snapshots`.
- Added `dg api issue create` and `dg api issue update` commands.
- Added `dg api issue list`.
- `DagsterDbtTranslatorSettings.enable_source_metadata` now defaults to `True`, enabling upstream asset key remapping based on dbt source table names by default.
- Fixed an issue where `execute_in_process()` would sometimes fail to display the step timeline in the Dagster UI.
- `PickledObjectS3IOManager` now defaults the S3 key prefix to an empty string when none is provided. (Thanks, @aksestok!)
- `PipesDatabricksClient.run_multi_task` and `PipesDatabricksServerlessClient.run_multi_task` now give each submitted task its own message destination by default, fixing chunk-file collisions between concurrent tasks.
- `free_slots_after_run_end_seconds` concurrency configuration option.
- Added `dg api` commands for programmatic inspection of assets, runs, jobs, schedules, and more.
- Added an `is_virtual` parameter on `@asset` and `AssetSpec` for modeling assets like database views that automatically reflect upstream changes without explicit materialization.
- Removed `external_asset_from_spec` and `external_assets_from_specs`. Use `AssetSpec` inputs directly to `Definitions(...)` or `AssetsDefinition(specs=[...])` instead.
- Removed `AssetKey` `deps` argument support from asset dependencies. Use a sequence of `AssetDep` objects instead.
- Removed `get_all_asset_specs` from `Definitions`.
- Removed the `legacy_freshness_policy` parameter from `@observable_source_asset`.
- Removed the `auto_observe_interval_minutes` parameter from `@observable_source_asset`.
- Removed the `legacy_freshness_policies_by_output_name` parameter from `AssetsDefinition`.
- Removed `load_component_at_path` from `ComponentLoadContext`. Use `context.load_component` instead.
- Removed `build_defs_at_path` from `ComponentLoadContext`.
- Removed the `AirbyteState` enum (use `AirbyteJobStatusType` instead) and removed the deprecated `legacy_freshness_policy` and `auto_materialize_policy` parameters from `build_airbyte_assets()`.
- Removed `DagsterLookerResource.build_defs`, the `get_asset_key`, `get_dashboard_asset_key`, `get_explore_asset_key`, and `get_view_asset_key` methods, and `Type[DagsterLookerApiTranslator]` support from API helpers.
- Removed `PowerBIWorkspace.build_defs()`, translator key helpers (use `get_asset_spec()` instead), and `Type[DagsterPowerBITranslator]` support in `load_powerbi_asset_specs()` (pass an instance instead).
- Removed `SigmaOrganization.build_defs()`, `DagsterSigmaTranslator.get_asset_key()` (use `get_asset_spec(...).key` instead), and `Type[DagsterSigmaTranslator]` support in `load_sigma_asset_specs()` (pass an instance instead).
- The `@asset` decorator and `AssetSpec` now accept an `is_virtual` parameter for defining assets that represent views or derived tables that don't need to be materialized. Virtual assets are supported in staleness calculations, execution planning, and declarative automation (see the sketch after this list).
- Added an `enable_dbt_views_as_virtual_assets` setting to `DbtTranslatorSettings` for automatically treating dbt views as virtual assets.
- Fixed an issue where a job with `run_tags` and an `asset_selection` specified in the `RunRequest` would not apply the job's `run_tags` to the resulting run.
- `None` fields.
- Fixed `dg plus deploy configure` generating a GitHub Action that used Docker instead of the PEX build strategy.
- Updated the `ansi-to-react` library.
- `members.value eq` queries.
- Fixed an issue with the `DAGSTER_CLOUD_RAW_GIT_URL` and `DAGSTER_CLOUD_GIT_URL` environment variables when `onlyAllowUserDefinedK8sConfigFields` was set.

Introduces Dagster skills, partitioned asset checks, state-backed components, virtual assets, and stronger integrations.
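A minimal sketch of `is_virtual`, assuming a view-style asset that tracks an upstream table; the asset keys here are illustrative.

```python
from dagster import AssetSpec, Definitions

# A database view modeled as a virtual asset: it reflects upstream
# changes without explicit materialization. Keys are illustrative.
orders_view = AssetSpec(
    key="orders_view",
    deps=["orders"],
    is_virtual=True,
)

defs = Definitions(assets=[orders_view])
```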
How we configure Copybara for bi-directional syncing to enable a hub-and-spoke model for Git repositories
- `dg` projects can now configure `agent_queue` and `image` in `pyproject.toml` under `[tool.dg.project]`, which are included in the generated `dagster_cloud.yaml` when running `dg plus deploy`.
- Added `dg api job list` and `dg api job get` commands for querying job metadata including schedules, sensors, and tags.
- Added `dg api asset-check list`, `dg api asset-check get-executions`, and `dg api asset get-partition-status` commands.
- Removed the `flagAssetCatalogSidebar` feature flag. The asset catalog sidebar is now always enabled.
- The `EcsRunLauncher` and Dagster+ ECS agent now supply idempotency tokens when creating ECS services and tasks, improving retry behavior after transient failures.
- `DbtCloudComponent` now supports an option to include a polling sensor for monitoring dbt Cloud job runs.
- Fixed `snap_to_yaml` incorrectly removing empty dicts that represent valid config values for `Permissive`, `Map`, and `Noneable` config types.
- Fixed a `DagsterInvalidDefinitionError` during nested component `post_processing` resolution. (Thanks, @vidiyala99!)
- Fixed `PipesECSClient` incorrectly treating a task that failed to start as a successful execution.
- Fixed `dg plus deploy` not correctly pulling in environment variables when refreshing definitions state for state-backed components.
- Added a `dg plus integrations dbt download-manifest` command to download dbt manifests from Dagster Plus for local development.

AI has made contributing to open source easier, but reviewing contributions is still hard. At Dagster, we're improving the contributor experience with smarter review tooling, clearer guidelines, and a focus on contributions that are easier to evaluate, merge, and maintain.
- Removed the `setuptools<82` pin from the `dagster` package.
- Added a `partitions` attribute to the asset selection syntax to filter assets by partition definition type (e.g., `partitions:"static"`).
- Fixed an issue that caused `DbtProjectComponents` with custom translation methods to not work with utilities like `build_schedule_from_dbt_selection`.
- Added a `service_discovery_role_arn` configuration parameter.
- Execution contexts (`OpExecutionContext`, `AssetExecutionContext`, `AssetCheckExecutionContext`) now expose a `multi_partition_key` property that returns a `MultiPartitionKey` when the current run is a multi-partition run (see the sketch after this list).
- Added `Braze` and `Runpod` kind tags. (Thanks, @dragos-pop!)
- Added a `DbtCloudComponent` for loading dbt Cloud projects as Dagster assets using the Components API.
- The `dbt_cloud_assets` decorator now supports partitioned assets via the `partitions_def` parameter.
- `FivetranWorkspace` now supports a `retry_on_reschedule` option to automatically retry syncs rescheduled by Fivetran due to quota limits, as well as resync operations.
- Added `k8sApiCaBundlePath` to configure a custom CA certificate path for Kubernetes API communication.
- Added a `service_spec_config` field for arbitrary Kubernetes Service spec overrides (for example, `clusterIP: None` for headless services).
- Fixed `s3_pickle_io_manager` failing with dynamic outputs when step keys contain bracket characters in the generated S3 object path.
- Fixed an issue in `PipesEMRServerlessClient` where a custom CloudWatch log group name configured in `monitoringConfiguration.cloudWatchLoggingConfiguration.logGroupName` was ignored, causing log streaming to always use the default `/aws/emr-serverless` log group. (Thanks, @kchainani-figma!)

DataOps is about building a system that provides visibility into what's happening and control over how it behaves.
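A minimal sketch of the new `multi_partition_key` property; the partition dimensions and asset here are illustrative.

```python
from dagster import (
    AssetExecutionContext,
    DailyPartitionsDefinition,
    MultiPartitionsDefinition,
    StaticPartitionsDefinition,
    asset,
)

partitions_def = MultiPartitionsDefinition(
    {
        "date": DailyPartitionsDefinition(start_date="2024-01-01"),
        "region": StaticPartitionsDefinition(["us", "eu"]),
    }
)

@asset(partitions_def=partitions_def)
def sales(context: AssetExecutionContext):
    # The new multi_partition_key property returns a MultiPartitionKey
    # for multi-partition runs; keys_by_dimension splits it per dimension.
    mpk = context.multi_partition_key
    context.log.info(
        f"date={mpk.keys_by_dimension['date']}, "
        f"region={mpk.keys_by_dimension['region']}"
    )
```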
- Added database connection pool options (`--db-pool-recycle`, `--db-pool-pre-ping`, and others) to `dg dev` and `dagster dev`.
- Added a `dg plus config view` command for inspecting the current CLI configuration.
- Added components to `dagster-azure`, including `AzureBlobStorageResourceComponent` and `ADLS2ResourceComponent` for declarative YAML configuration of Azure resources.
- `DatabricksAssetBundleComponent` is now subsettable at the job level, enabling selective execution of individual Databricks tasks.
- `DatabricksAssetBundleComponent` now uses the Databricks CLI to resolve variable references to task and job names in bundle templates.
- `DatabricksWorkspaceComponent`.
- `dagster-dbt` now prefers `dbt-core` for manifest parsing when it is installed.
- Added `BigQueryResourceComponent`, `GCSResourceComponent`, `GCSFileManagerResourceComponent`, and `DataprocResourceComponent` for declarative YAML configuration of GCP resources.
- `BigQueryIOManager` now supports a configurable `write_mode` parameter (`truncate`, `replace`, or `append`); see the sketch after this list.
- `dg plus pull env` now merges pulled secrets into the existing `.env` file instead of replacing it, preserving any locally-set variables not present in Dagster Plus.
- Fixed a `KeyError` for `run_page_url` in `DatabricksWorkspaceComponent`.
- Updated `DatabricksAssetBundleComponent` and `DatabricksWorkspaceComponent` to use job and task key combinations, preventing conflicts when task keys are not unique across jobs.

Standardizing on Databricks is a smart strategic move, but consolidation alone does not create a working operating model across teams, tools, and downstream systems. By pairing Databricks and Unity Catalog with Dagster, enterprises can add the coordination layer needed for dependency visibility, end-to-end lineage, and faster, more confident delivery at scale.
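A minimal sketch of the configurable `write_mode`, assuming the pandas flavor of the BigQuery IO manager exposes it; the project and dataset names are placeholders.

```python
from dagster import Definitions
from dagster_gcp_pandas import BigQueryPandasIOManager

# write_mode controls how tables are written: "truncate", "replace",
# or "append" per the entry above. Project/dataset are placeholders.
defs = Definitions(
    resources={
        "io_manager": BigQueryPandasIOManager(
            project="my-gcp-project",
            dataset="analytics",
            write_mode="append",
        )
    }
)
```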
- `psycopg2-binary` has been removed as a dependency from `dagster-postgres`. If you were previously relying on this transitive dependency, you may need to explicitly add `psycopg2-binary` to your project.
- `DbtProject` and `DbtProjectComponent` now expose `prepare_project_cli_args` to allow customizing the CLI arguments used to generate the manifest.
- `dagster/table_name` metadata. Additionally, `dagster-tableau` and `dagster-looker` assets now populate `dagster/storage_kind` based on the upstream connection type.
- The `sling` package import is now deferred to improve import-time performance.
- Fixed an issue with `__enter__` on nested resource attributes annotated with `dagster.ResourceDependency` during parent resource setup; see the sketch after this list. (Thanks, @danielgafni!)

AI coding agents are changing how data engineers work. This Dagster University course shows how to build a production-ready ELT pipeline from prompts while learning practical patterns for reliable AI-assisted development.
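For context on the pattern in the fix above, a minimal sketch of a nested resource annotated with `ResourceDependency`; the resource names are illustrative.

```python
from dagster import ConfigurableResource, ResourceDependency

class Credentials(ConfigurableResource):
    token: str

class ApiClient(ConfigurableResource):
    # A nested resource attribute annotated with ResourceDependency,
    # the shape affected by the __enter__ fix noted above.
    credentials: ResourceDependency[Credentials]
    base_url: str
```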
- `EcsRunLauncher` will now retry a task launch when a `RunTask` API call fails due to a throttling error in an underlying EC2 API call.
- Added support for `workbook_selector` and `project_selector` in `TableauComponent`.
- The `DbtProject` constructor now correctly accepts strings for the `target_path` parameter (see the sketch after this list).
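A minimal sketch of the fix above; the project directory is a placeholder.

```python
from dagster_dbt import DbtProject

# target_path may now be passed as a plain string rather than a Path;
# the project_dir here is a placeholder.
project = DbtProject(
    project_dir="analytics_dbt",
    target_path="target",
)
```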