v0.22.0: Distributed operation framework, Gradient Accumulation enhancements, FSDP enhancements, and more!
A new framework has been introduced which can catch errors from mismatched distributed operations before they surface as timeouts. As this adds a tiny bit of overhead, it is an opt-in feature: simply run your code with ACCELERATE_DEBUG_MODE="1" to enable it. Read more in the docs; introduced via https://github.com/huggingface/accelerate/pull/1756
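As a minimal illustration, the flag can also be set from Python before the Accelerator is constructed (equivalent to prefixing the launch command with the environment variable); the script name in the comment is only an example:

```python
import os

# Opt in to Accelerate's distributed-operations debug mode. This must be
# set before the Accelerator is created, and adds a small overhead, so
# enable it only while debugging. Equivalent to launching with:
#   ACCELERATE_DEBUG_MODE="1" python train.py
os.environ["ACCELERATE_DEBUG_MODE"] = "1"
```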
Accelerator.load_state can now load the most recent checkpoint automatically. If a ProjectConfiguration has been set up, calling accelerator.load_state() (without any arguments) will now automatically find and load the latest checkpoint, introduced via https://github.com/huggingface/accelerate/pull/1741
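As a rough sketch of the idea (not the library's actual implementation — the helper name and the `checkpoint_<n>` folder naming here are illustrative), automatic selection amounts to scanning the project's checkpoint folders and picking the highest-numbered one:

```python
import os
import re
import tempfile

def find_latest_checkpoint(project_dir: str) -> str:
    """Illustrative helper: return the highest-numbered checkpoint
    folder (named like 'checkpoint_<n>') under project_dir."""
    candidates = []
    for name in os.listdir(project_dir):
        match = re.fullmatch(r"checkpoint_(\d+)", name)
        if match and os.path.isdir(os.path.join(project_dir, name)):
            candidates.append((int(match.group(1)), name))
    if not candidates:
        raise FileNotFoundError(f"no checkpoints found in {project_dir}")
    return os.path.join(project_dir, max(candidates)[1])

# Demo: create three fake checkpoint folders and pick the latest one.
with tempfile.TemporaryDirectory() as tmp:
    for step in (0, 1, 2):
        os.makedirs(os.path.join(tmp, f"checkpoint_{step}"))
    latest = find_latest_checkpoint(tmp)
    print(os.path.basename(latest))  # → checkpoint_2
```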
In this release, multiple new enhancements to distributed gradient accumulation have been added.
- accelerator.accumulate() now supports passing in multiple models, introduced via https://github.com/huggingface/accelerate/pull/1708
- ….backward() via https://github.com/huggingface/accelerate/pull/1726
- …DataLoaderDispatcher added via https://github.com/huggingface/accelerate/pull/1846

What's Changed

- …no_sync by @NouamaneTazi in https://github.com/huggingface/accelerate/pull/1726
- …get_scale() by patching the step method of optimizer, by @yuxinyuan in https://github.com/huggingface/accelerate/pull/1720
- …set_module_tensor_to_device, by @Narsil in https://github.com/huggingface/accelerate/pull/1731
- …__repr__ of AlignDevicesHook by @KacperWyrwal in https://github.com/huggingface/accelerate/pull/1735
- …KwargsHandler.to_kwargs not working with os.environ initialization in __post_init__ by @CyCle1024 in https://github.com/huggingface/accelerate/pull/1738
- …autocast kwargs and simplify autocast wrapper by @muellerzr in https://github.com/huggingface/accelerate/pull/1740
- …Accelerator.save_state using multi-gpu by @CyCle1024 in https://github.com/huggingface/accelerate/pull/1760
- …max_memory argument is in unexpected order by @ranchlai in https://github.com/huggingface/accelerate/pull/1759
- …is_aim_available() function to not match aim >= 4.0.0 by @alberttorosyan in https://github.com/huggingface/accelerate/pull/1769
- …load_fsdp_optimizer by @awgu in https://github.com/huggingface/accelerate/pull/1755
- …torch.distributed is disabled by @natsukium in https://github.com/huggingface/accelerate/pull/1800
- …get_balanced_memory to avoid OOM by @ranchlai in https://github.com/huggingface/accelerate/pull/1798
- …convert_file_size_to_int by @ranchlai in https://github.com/huggingface/accelerate/pull/1799
- …allow_val_change by @SumanthRH in https://github.com/huggingface/accelerate/pull/1796
- …gather_for_metrics by @dleve123 in https://github.com/huggingface/accelerate/pull/1784
- …load_and_quantize_model arg by @JonathanRayner in https://github.com/huggingface/accelerate/pull/1822
- …init_on_device by @shingjan in https://github.com/huggingface/accelerate/pull/1826
- …unwrap_model and keep_fp32_wrapper=False by @BenjaminBossan in https://github.com/huggingface/accelerate/pull/1838
- …verify_device_map by @Rexhaif in https://github.com/huggingface/accelerate/pull/1842
- …gpu_ids (Rel. Issue #1848) by @devymex in https://github.com/huggingface/accelerate/pull/1850
- …fsdp_with_peak_mem_tracking.py by @pacman100 in https://github.com/huggingface/accelerate/pull/1856
- …init_on_device by @shingjan in https://github.com/huggingface/accelerate/pull/1852
- …DataLoaderDispatcher by @thevasudevgupta in https://github.com/huggingface/accelerate/pull/1846

The following contributors have made significant changes to the library over the last release:

- …Accelerator.accumulate() (#1708)
- …no_sync (#1726)
- …DataLoaderDispatcher (#1846)

Full Changelog: https://github.com/huggingface/accelerate/compare/v0.21.0...v0.22.0
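The multi-model accumulate() enhancement above is part of Accelerate's API; the sync-every-N bookkeeping behind it can be illustrated with a small self-contained toy. This is not the library's implementation — ToyAccumulator and everything in it are made up for illustration:

```python
from contextlib import contextmanager

class ToyAccumulator:
    """Toy illustration of gradient-accumulation bookkeeping: count
    batches and expose sync_gradients, which is True only on every
    Nth step (when gradients would actually be all-reduced)."""

    def __init__(self, gradient_accumulation_steps: int):
        self.gradient_accumulation_steps = gradient_accumulation_steps
        self.step_count = 0
        self.sync_gradients = False

    @contextmanager
    def accumulate(self, *models):
        # Accepting several models mirrors the multi-model call shape;
        # in a real setup each model's gradient sync would be skipped
        # (no_sync semantics) unless sync_gradients is True.
        self.step_count += 1
        self.sync_gradients = (
            self.step_count % self.gradient_accumulation_steps == 0
        )
        yield

accumulator = ToyAccumulator(gradient_accumulation_steps=4)
sync_steps = []
for batch in range(8):
    with accumulator.accumulate("model_a", "model_b"):
        if accumulator.sync_gradients:
            sync_steps.append(batch)
print(sync_steps)  # → [3, 7]
```

With 4 accumulation steps over 8 batches, gradients sync on batches 3 and 7 only; every other batch accumulates locally.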
Fetched April 7, 2026