v0.24.0: Improved Reproducibility, Bug Fixes, and Other Small Improvements
One critical issue with Accelerate was that training runs using an iterable dataset differed from run to run, no matter what seeds were set. v0.24.0 introduces a dataloader.set_epoch() function on all Accelerate DataLoaders: if the underlying dataset (or sampler) supports setting the epoch for reproducibility, it will do so. This is similar to the implementation that already exists in transformers. To use:
dataloader = accelerator.prepare(dataloader)
# Say we want to resume at epoch/iteration 2
dataloader.set_epoch(2)
For more information, see this PR; we will update the docs in a subsequent release with more information on this API.
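To illustrate why seeding the shuffle from the epoch number makes resumption reproducible, here is a toy sketch of the pattern in plain Python. EpochSeededLoader is a hypothetical class for illustration only, not Accelerate's actual implementation: the key idea is that the shuffle order depends only on (base_seed, epoch), never on prior global RNG state.

```python
import random

class EpochSeededLoader:
    """Toy iterable mimicking the set_epoch pattern (hypothetical,
    not Accelerate's code): the shuffle order is a pure function of
    (base_seed, epoch), so resuming at a given epoch reproduces the
    exact same iteration order."""

    def __init__(self, data, base_seed=0):
        self.data = list(data)
        self.base_seed = base_seed
        self.epoch = 0

    def set_epoch(self, epoch):
        self.epoch = epoch

    def __iter__(self):
        order = list(range(len(self.data)))
        # Seed derived from the epoch, not from global RNG state
        random.Random(self.base_seed + self.epoch).shuffle(order)
        return (self.data[i] for i in order)

loader = EpochSeededLoader(range(6), base_seed=42)
loader.set_epoch(2)
first_run = list(loader)
loader.set_epoch(2)          # "resume" at epoch 2
assert list(loader) == first_run
```

Because the order is recomputed from the epoch each time, a restarted job that calls set_epoch(2) before iterating sees exactly the batches the original run saw at epoch 2.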
Other improvements in this release:

- save and save_state via the ProjectConfiguration dataclass. See #1953 for more info.
- bfloat16 mixed precision via torch.autocast
- all_gather_into_tensor is now used as the main gather operation, reducing memory in the case of big tensors
- drop_last=True will now properly have the desired effect when performing Accelerator().gather_for_metrics()

Related PRs:

- dispatch_model by @austinapatel in https://github.com/huggingface/accelerate/pull/1971
- save and save_state via ProjectConfiguration by @muellerzr in https://github.com/huggingface/accelerate/pull/1953
- torch.autocast for bfloat16 mixed precision by @brcps12 in https://github.com/huggingface/accelerate/pull/2033
- all_gather_into_tensor by @muellerzr in https://github.com/huggingface/accelerate/pull/1968
- gather_for_metrics by @muellerzr in https://github.com/huggingface/accelerate/pull/2048

Full Changelog: https://github.com/huggingface/accelerate/compare/v0.23.0...v0.24.0
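The bfloat16 change moves mixed precision onto PyTorch's own torch.autocast context manager. Below is a minimal sketch of that underlying PyTorch API (plain torch rather than Accelerate, and run on CPU so it works anywhere): inside the autocast region, matmuls execute in bfloat16 while the input tensors remain float32.

```python
import torch

x = torch.randn(8, 8)
w = torch.randn(8, 8)

# Inside the autocast region, matmul runs in bfloat16;
# the float32 inputs are cast automatically.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w

assert y.dtype == torch.bfloat16
assert x.dtype == torch.float32  # inputs are untouched
```

In Accelerate itself you opt into this by constructing Accelerator(mixed_precision="bf16"); the library then applies the autocast context around your forward passes for you.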