[experimental] AutomationCondition.eager() will now only launch runs for missing partitions which become missing after the condition has been added to the asset. This avoids situations in which the eager policy kicks off a large amount of work when added to an asset with many missing historical static/dynamic partitions.
[experimental] Added a new AutomationCondition.asset_matches() condition, which can apply a condition against an arbitrary asset in the graph.
[experimental] Added the ability to specify multiple kinds for an asset with the kinds parameter.
[dagster-github] Added create_pull_request method on GithubClient that enables creating a pull request.
[dagster-github] Added create_ref method on GithubClient that enables creating a new branch.
[dagster-embedded-elt] dlt assets now generate column metadata for child tables.
[dagster-embedded-elt] dlt assets can now fetch row count metadata with dlt.run(...).fetch_row_count() for both partitioned and non-partitioned assets. Thanks @kristianandre!
[dagster-airbyte] relation identifier metadata is now attached to Airbyte assets.
[dagster-embedded-elt] relation identifier metadata is now attached to sling assets.
[dagster-embedded-elt] relation identifier metadata is now attached to dlt assets.
JobDefinition, @job, and define_asset_job now take a run_tags parameter. If run_tags is set, those tags are attached to all runs of the job and tags is not; if run_tags is not set, tags is attached to all runs of the job (the previous behavior). This change enables the separation of definition-level and run-level tags on jobs.
The env var DAGSTER_COMPUTE_LOG_TAIL_WAIT_AFTER_FINISH can now be used to pause before capturing logs (thanks @HynekBlaha!)
The kinds parameter is now available on AssetSpec.
OutputContext now exposes the AssetSpec of the asset that is being stored as an output (thanks, @marijncv!)
[experimental] Backfills are incorporated into the Runs page to improve observability and provide a simplified UI. See the GitHub discussion for more details.
[ui] The updated navigation is now enabled for all users. You can revert to the legacy navigation via a feature flag. See GitHub discussion for more.
[ui] Improved performance for loading partition statuses of an asset job.
[dagster-docker] Run containers launched by the DockerRunLauncher now include dagster/job_name and dagster/run_id labels.
[dagster-aws] The ECS launcher now automatically retries transient ECS RunTask failures (like capacity placement failures).
Reduced the log volume in the run coordinator for runs blocked by global concurrency limits.
[ui] Asset checks are now visible in the run page header when launched from a schedule.
[ui] Fixed asset group outlines not rendering properly in Safari.
[ui] Reporting a materialization event now removes the asset from the asset health "Execution failures" list and returns the asset to a green / success state.
[ui] When setting an AutomationCondition on an asset, the label of this condition will now be shown in the sidebar on the Asset Details page.
[ui] Previously, filtering runs by Created date would include runs that had been updated after the lower bound of the requested time range. This has been updated so that only runs created after the lower bound will be included.
[ui] When using the new experimental navigation flag, added a fix for the automations page for code locations that have schedules but no sensors.
[ui] Fixed tag wrapping on asset column schema table.
[ui] Restored object counts on the code location list view.
[ui] Padding when displaying warnings on unsupported run coordinators has been corrected (thanks @hainenber!)
[dagster-k8s] Fixed an issue where run termination sometimes did not terminate all step processes when using the k8s_job_executor, if the termination was initiated while it was in the middle of launching a step pod.
AssetSpec now has a with_io_manager_key method that returns an AssetSpec with the appropriate metadata entry to dictate the key for the IO manager used to load it. The deprecation warning for SourceAsset now references this method.
Added a max_runtime_seconds configuration option to run monitoring, allowing you to specify that any run in your Dagster deployment should terminate if it exceeds a certain runtime. Previously, jobs had to be individually tagged with a dagster/max_runtime tag in order to take advantage of this feature. Jobs and runs can still be tagged in order to override this value for an individual run.
It is now possible to set both tags and a custom execution_fn on a ScheduleDefinition. Schedule tags are intended to annotate the definition and can be used to search and filter in the UI. They will not be attached to run requests emitted from the schedule if a custom execution_fn is provided. If no custom execution_fn is provided, then for back-compatibility the tags will also be automatically attached to run requests emitted from the schedule.
SensorDefinition and all of its variants/decorators now accept a tags parameter. The tags annotate the definition and can be used to search and filter in the UI.
Added the dagster definitions validate command to Dagster CLI. This command validates if Dagster definitions are loadable.
[dagster-databricks] Databricks Pipes now allow running tasks in existing clusters.
Fixed an issue where calling build_op_context in a unit test would sometimes raise a "TypeError: signal handler must be signal.SIG_IGN, signal.SIG_DFL, or a callable object" error on process shutdown.
[dagster-webserver] Fixed an issue where the incorrect sensor/schedule state would appear when using DefaultScheduleStatus.STOPPED / DefaultSensorStatus.STOPPED after performing a reset.
Fixed an issue where users with Launcher permissions for a particular code location were not able to cancel backfills targeting only assets in that code location.
Fixed an issue preventing long-running alerts from being sent when there was a quick subsequent run.
Added --partition-range option to dagster asset materialize CLI. This option only works for assets with single-run Backfill Policies.
Added a new .without() method to AutomationCondition.eager(), AutomationCondition.on_cron(), and AutomationCondition.on_missing() which allows sub-conditions to be removed, e.g. AutomationCondition.eager().without(AutomationCondition.in_latest_time_window()).
Added AutomationCondition.on_missing(), which materializes an asset partition as soon as all of its parent partitions are filled in.
pyproject.toml can now load multiple Python modules as individual Code Locations. Thanks, @bdart!
[ui] If a code location has errors, a button will be shown to view the error on any page in the UI.
[dagster-adls2] The ADLS2PickleIOManager now accepts lease_duration configuration. Thanks, @0xfabioo!
[dagster-embedded-elt] Added an option to fetch row count metadata after running a Sling sync by calling sling.replicate(...).fetch_row_count().
[dagster-fivetran] The dagster-fivetran integration will now automatically pull and attach column schema metadata after each sync.
Fixed an issue which could cause errors when using AutomationCondition.any_downstream_condition() with downstream AutoMaterializePolicy objects.
Fixed an issue where process_config_and_initialize did not properly handle processing nested resource config.
[ui] Fixed an issue that would cause some AutomationCondition evaluations to be labeled DepConditionWrapperCondition instead of the key that they were evaluated against.
[dagster-webserver] Fixed an issue with code locations appearing in fluctuating incorrect state in deployments with multiple webserver processes.
[dagster-embedded-elt] Fixed an issue where Sling column lineage did not correctly resolve in the Dagster UI.
[dagster-k8s] The wait_for_pod check now waits until all pods are available, rather than erroneously returning after the first pod becomes available. Thanks @easontm!
The AssetSpec constructor now raises an error if an invalid group name is provided, instead of an error being raised when constructing the Definitions object.
dagster/relation_identifier metadata is now automatically attached to assets which are stored using a DbIOManager.
[ui] Streamlined the code location list view.
[ui] The “group by” selection on the Timeline Overview page is now part of the query parameters, meaning it will be retained when linked to directly or when navigating between pages.
[dagster-dbt] When instantiating DbtCliResource, the project_dir argument will now override the DBT_PROJECT_DIR environment variable if it exists in the local environment (thanks, @marijncv!).
[dagster-embedded-elt] dlt assets now generate rows_loaded metadata (thanks, @kristianandre!).
Fixed a bug where setting asset_selection=[] on RunRequest objects yielded from sensors using asset_selection would select all assets instead of none.
Fixed a bug where the tick status filter for batch-fetched GraphQL sensors was not being respected.
[examples] Fixed missing assets in assets_dbt_python example.
[dagster-airbyte] Updated the op names generated for Airbyte assets to include the full connection ID, avoiding name collisions.
[dagster-dbt] Fixed issue causing dagster-dbt to be unable to load dbt projects where the adapter did not have a database field set (thanks, @dargmuesli!)
[dagster-dbt] Removed a warning about not being able to load the dbt.adapters.duckdb module when loading dbt assets without that package installed.
You may now wipe specific asset partitions directly from the execution context in user code by calling DagsterInstance.wipe_asset_partitions.
Dagster+ users with a "Viewer" role can now create private catalog views.
Fixed an issue where the default IOManager used by Dagster+ Serverless did not respect setting allow_missing_partitions as metadata on a downstream asset.
Fixed an issue where runs in Dagster+ Serverless that materialized partitioned assets would sometimes fail with an object has no attribute '_base_path' error.
[dagster-graphql] Fixed an issue where the statuses filter argument to the sensorsOrError GraphQL field was sometimes ignored when querying GraphQL for multiple sensors at the same time.
Updated multi-asset sensor definition to be less likely to timeout queries against the asset history storage.
Consolidated the CapturedLogManager and ComputeLogManager APIs into a single base class.
[ui] Added an option under user settings to clear client side indexeddb caches as an escape hatch for caching related bugs.
[dagster-aws, dagster-pipes] Added a new PipesECSClient to allow Dagster to interface with ECS tasks.
[dagster-dbt] Increased the default timeout when terminating a run that is running a dbt subprocess to wait 25 seconds for the subprocess to cleanly terminate. Previously, it would only wait 2 seconds.
[dagster-sdf] Increased the default timeout when terminating a run that is running an sdf subprocess to wait 25 seconds for the subprocess to cleanly terminate. Previously, it would only wait 2 seconds.
[dagster-sdf] Added support for caching and asset selection (Thanks, @akbog!)
[dagster-dlt] Added support for AutomationCondition using DagsterDltTranslator.get_automation_condition() (Thanks, @aksestok!)
[ui] Fixed a bug where in-progress runs from a backfill could not be terminated from the backfill UI.
[ui] Fixed a bug that caused an "Asset must be part of at least one job" error when clicking on an external asset in the asset graph UI.
Fixed an issue where viewing run logs with the latest 5.0 release of the watchdog package raised an exception.
[ui] Fixed issue causing the “filter to group” action in the lineage graph to have no effect.
[ui] Fixed case sensitivity when searching for partitions in the launchpad.
[ui] Fixed a bug which would redirect to the events tab for an asset if you loaded the partitions tab directly.
[ui] Fixed issue causing runs to get skipped when paging through the runs list (Thanks, @HynekBlaha!)
[ui] Fixed a bug where the asset catalog list view for a particular group would show all assets.
[dagster-dbt] Fixed a bug where empty newlines in raw dbt logs were not being handled correctly.
[dagster-k8s, dagster-celery-k8s] Correctly set dagster/image label when image is provided from user_defined_k8s_config. (Thanks, @HynekBlaha!)
[dagster-duckdb] Fixed an issue for DuckDB versions older than 1.0.0 where an unsupported configuration option, custom_user_agent, was provided by default.
[dagster-k8s] Fixed an issue where Kubernetes Pipes failed to create a pod if the op name contained capital or non-alphanumeric characters.
[dagster-embedded-elt] Fixed an issue where dbt assets downstream of Sling were skipped.
[dagster-aws] Direct AWS API arguments in PipesGlueClient.run have been deprecated and will be removed in 1.9.0. The new params argument should be used instead.
The default io_manager on Serverless now supports the allow_missing_partitions configuration option.
Fixed a bug that caused an error when loading the launchpad for a partition when using Dagster+ with an agent on a version below 1.8.2.
1.8.3 (core) / 0.24.3 (libraries) (YANKED - This version of Dagster resulted in errors when trying to launch runs that target individual asset partitions)
When different assets within a code location have different PartitionsDefinitions, there will no longer be an implicit asset job __ASSET_JOB_... for each PartitionsDefinition; there will just be one with all the assets. This reduces the time it takes to load code locations with assets with many different PartitionsDefinitions.
[ui] Fixed a collection of broken links pointing to renamed Declarative Automation pages.
[dagster-dbt] Fixed issue preventing usage of MultiPartitionMapping with @dbt_assets (Thanks, @arookieds!)
[dagster-azure] Fixed issue that would cause an error when configuring an AzureBlobComputeLogManager without a secret_key (Thanks, @ion-elgreco and @HynekBlaha!)
A native scheduler with support for exactly-once, fault tolerant, timezone-aware scheduling.
A new Dagster daemon process has been added to manage your schedules and sensors with a
reconciliation loop, ensuring that all runs are executed exactly once, even if the Dagster daemon
experiences occasional failure. See the
Migration Guide for
instructions on moving from SystemCronScheduler or K8sScheduler to the new scheduler.
First-class sensors, built on the new Dagster daemon, allow you to instigate runs based on
changes in external state - for example, files on S3 or assets materialized by other Dagster
pipelines. See the Sensors Overview
for more information.
Dagster now supports pipeline run queueing. You can apply instance-level run concurrency
limits and prioritization rules by adding the QueuedRunCoordinator to your Dagster instance. See
the Run Concurrency Overview
for more information.
The IOManager abstraction provides a new, streamlined primitive for granular control over where
and how solid outputs are stored and loaded. This is intended to replace the (deprecated)
intermediate/system storage abstractions. See the
IO Manager Overview for more
information.
A new Partitions page in Dagit lets you view your pipeline runs organized by partition.
You can also launch backfills from Dagit and monitor them from this page.
A new Instance Status page in Dagit lets you monitor the health of your Dagster instance,
with repository location information, daemon statuses, instance-level schedule and sensor
information, and linkable instance configuration.
Resources can now declare their dependencies on other resources via the
required_resource_keys parameter on @resource.
Our support for deploying on Kubernetes is now mature and battle-tested. Our Helm chart is
now easier to configure and deploy, and we’ve made big investments in observability and
reliability. You can view Kubernetes interactions in the structured event log and use Dagit to
help you understand what’s happening in your deployment. The defaults in the Helm chart will
give you graceful degradation and failure recovery right out of the box.
Experimental support for dynamic orchestration with the new DynamicOutputDefinition API.
Dagster can now map the downstream dependencies over a dynamic output at runtime.
We’ve dropped support for Python 2.7, based on community usage and enthusiasm for Python 3-native
public APIs.
Removal of deprecated APIs
These APIs were marked for deprecation with warnings in the 0.9.0 release, and have been removed in
the 0.10.0 release.
The decorator input_hydration_config has been removed. Use the dagster_type_loader decorator
instead.
The decorator output_materialization_config has been removed. Use dagster_type_materializer
instead.
The system storage subsystem has been removed. This includes SystemStorageDefinition,
@system_storage, and default_system_storage_defs. Use the new IOManagers API instead. See
the IO Manager Overview for more
information.
The config_field argument on decorators and definitions classes has been removed and replaced
with config_schema. This is a drop-in rename.
The argument step_keys_to_execute to the functions reexecute_pipeline and
reexecute_pipeline_iterator has been removed. Use the step_selection argument to select
subsets for execution instead.
Repositories can no longer be loaded using the legacy repository key in your workspace.yaml;
use load_from instead. See the
Workspaces Overview for
documentation about how to define a workspace.
Breaking API Changes
SolidExecutionResult.compute_output_event_dict has been renamed to
SolidExecutionResult.compute_output_events_dict. A solid execution result is returned from
methods such as result_for_solid. Any call sites will need to be updated.
The .compute suffix is no longer applied to step keys. Step keys that were previously named
my_solid.compute will now be named my_solid. If you are using any API method that takes a
step_selection argument, you will need to update the step keys accordingly.
The pipeline_def property has been removed from the InitResourceContext passed to functions
decorated with @resource.
Dagstermill
If you are using define_dagstermill_solid with the output_notebook parameter set to True,
you will now need to provide a file manager resource (subclass of
dagster.core.storage.FileManager) on your pipeline mode under the resource key "file_manager",
e.g.:
```python
from dagster import ModeDefinition, local_file_manager, pipeline
from dagstermill import define_dagstermill_solid

my_dagstermill_solid = define_dagstermill_solid(
    "my_dagstermill_solid", output_notebook=True, ...
)

@pipeline(mode_defs=[ModeDefinition(resource_defs={"file_manager": local_file_manager})])
def my_dagstermill_pipeline():
    my_dagstermill_solid(...)
```
Helm Chart
The schema for the scheduler values in the helm chart has changed. Instead of a simple toggle
on/off, we now require an explicit scheduler.type to specify usage of the
DagsterDaemonScheduler, K8sScheduler, or otherwise. If your specified scheduler.type has
required config, these fields must be specified under scheduler.config.
snake_case fields have been changed to camelCase. Please update your values.yaml as follows:
pipeline_run → pipelineRun
dagster_home → dagsterHome
env_secrets → envSecrets
env_config_maps → envConfigMaps
The Helm values celery and k8sRunLauncher have now been consolidated under the Helm value
runLauncher for simplicity. Use the field runLauncher.type to specify usage of the
K8sRunLauncher, CeleryK8sRunLauncher, or otherwise. By default, the K8sRunLauncher is
enabled.
All Celery message brokers (i.e. RabbitMQ and Redis) are disabled by default. If you are using
the CeleryK8sRunLauncher, you should explicitly enable your message broker of choice.
Event log messages streamed to stdout and stderr have been streamlined to be a single line
per event.
Experimental support for memoization and versioning lets you execute pipelines incrementally,
selecting which solids need to be rerun based on runtime criteria and versioning their outputs
with configurable identifiers that capture their upstream dependencies.
To set up memoized step selection, users can provide a MemoizableIOManager, whose has_output
function decides whether a given solid output needs to be computed or already exists. To execute
a pipeline with memoized step selection, users can supply the dagster/is_memoized_run run tag
to execute_pipeline.
To set the version on a solid or resource, users can supply the version field on the definition.
To access the derived version for a step output, users can access the version field on the
OutputContext passed to the handle_output and load_input methods of IOManager and the
has_output method of MemoizableIOManager.
Schedules that are executed using the new DagsterDaemonScheduler can now execute in any
timezone by adding an execution_timezone parameter to the schedule. Daylight Savings Time
transitions are also supported. See the
Schedules Overview for
more information and examples.
Countdown and refresh buttons have been added for pages with regular polling queries (e.g. Runs,
Schedules).
Confirmation and progress dialogs are now presented when performing run terminations and
deletions. Additionally, hanging/orphaned runs can now be forced to terminate, by selecting
"Force termination immediately" in the run termination dialog.
The Runs page now shows counts for "Queued" and "In progress" tabs, and individual run pages
show timing, tags, and configuration metadata.
The backfill experience has been improved with means to view progress and terminate the entire
backfill via the partition set page. Additionally, errors related to backfills are now surfaced
more clearly.
Shortcut hints are no longer displayed when attempting to use the screen capture command.
The asset page has been revamped to include a table of events and enable organizing events by
partition. Asset key escaping issues in other views have been fixed as well.
Miscellaneous bug fixes, frontend performance tweaks, and other improvements are also included.
Added a new dagster-docker library with a DockerRunLauncher that launches each run in its own
Docker container. (See Deploying with Docker docs
for an example.)
Added support for AWS Athena. (Thanks @jmsanders!)
Added mocks for AWS S3, Athena, and Cloudwatch in tests. (Thanks @jmsanders!)
Allow setting of S3 endpoint through env variables. (Thanks @marksteve!)
Various bug fixes and new features for the Azure, Databricks, and Dask integrations.
Added a create_databricks_job_solid for creating solids that launch Databricks jobs.
Fixed helm chart to only add flower to the K8s ingress when enabled (thanks @PenguinToast!)
Updated helm chart to use more lenient timeouts for liveness probes on user code deployments (thanks @PenguinToast!)
Bugfixes
[Helm/K8s] Due to Flower being incompatible with Celery 5.0, the Helm chart for Dagster now uses a specific image mher/flower:0.9.5 for the Flower pod.
[Dagit] Show recent runs on individual schedule pages
[Dagit] It’s no longer required to run dagster schedule up or press the Reconcile button before turning on a new schedule for the first time
[Dagit] Various improvements to the asset view. Expanded the Last Materialization Event view. Expansions to the materializations over time view, allowing for both a list view and a graphical view of materialization data.
Community Contributions
Updated many dagster-aws tests to use mocked resources instead of depending on real cloud resources, making it possible to run these tests locally. (thanks @jmsanders!)
Bugfixes
Fixed an issue with retries in step launchers.
[Dagit] Bugfixes and improvements.
Fixed an issue where dagit sometimes left hanging processes behind after exiting
Experimental
[K8s] The dagster daemon is now optionally deployed by the helm chart. This enables run-level queuing with the QueuedRunCoordinator.