v0.25.0: safetensors by default, new trackers, and plenty of bug fixes
As of this release, safetensors is the default format for saved checkpoints whenever applicable! To read more about safetensors and why it should be used instead of pickle/torch.save for safety, check it out here
This release has two new experiment trackers, ClearML and DVCLive!
To use them, just pass clear_ml or dvclive to log_with in the Accelerator init. h/t to @eugen-ajechiloae-clearml and @dberenbaum
FSDP had a huge refactoring, so the interface when using FSDP is now identical to every other scenario in accelerate. No more needing to call accelerator.prepare() twice!
We now try to disable P2P communications on consumer GPUs from the 3090 series onward, since NVIDIA dropped P2P support there and users were hitting timeouts and similar issues. When using accelerate launch we disable it automatically, and if we detect that P2P is still enabled in a distributed setup on 3090s or newer, we raise an error.
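If you start processes without accelerate launch, you can apply the same workaround yourself by disabling NCCL's peer-to-peer transport before the process group initializes (NCCL_P2P_DISABLE is a standard NCCL environment variable):

```python
import os

# Disable NCCL peer-to-peer transport, mirroring what accelerate launch
# now does automatically on RTX 3090-class (and newer) consumer GPUs.
# Must be set before torch.distributed initializes NCCL.
os.environ.setdefault("NCCL_P2P_DISABLE", "1")
```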
When calling .gather(), we now explicitly raise an error if the tensors are on different devices (for now this check is only valid on CUDA)
- shuffle=True when using multiple GPUs and the new SeedableRandomSampler ... save as False by @muellerzr in https://github.com/huggingface/accelerate/pull/2138
- ... launch, and pick up in state if a user will face issues by @muellerzr in https://github.com/huggingface/accelerate/pull/2195

Full Changelog: https://github.com/huggingface/accelerate/compare/v0.24.1...v0.25.0