v0.6.0: Checkpointing and bfloat16 support
This release adds support for bfloat16 mixed precision training (requires PyTorch >= 1.10) and a brand-new checkpoint utility to help resume interrupted training runs. The documentation frontend has also been completely revamped.
Save the current state of all your objects (models, optimizers, RNG states) with accelerator.save_state(path_to_checkpoint) and reload everything by calling accelerator.load_state(path_to_checkpoint).
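The point of bundling RNG states with the model and optimizer is that a resumed run sees the same data order and dropout masks as an uninterrupted one. A minimal stdlib sketch of that idea (this is not Accelerate's actual implementation; the function names, dict layout, and pickle format here are purely illustrative):

```python
import pickle
import random


def save_state(path, model_state, optimizer_state):
    """Bundle model weights, optimizer state, and the RNG state
    into a single checkpoint file (illustrative sketch)."""
    state = {
        "model": model_state,
        "optimizer": optimizer_state,
        "rng": random.getstate(),  # captured so resuming is deterministic
    }
    with open(path, "wb") as f:
        pickle.dump(state, f)


def load_state(path):
    """Restore the bundle; resetting the RNG reproduces the exact
    sequence of random draws that followed the checkpoint."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    random.setstate(state["rng"])
    return state["model"], state["optimizer"]
```

After load_state, the next random draws match what the original run produced right after save_state, which is what makes resumed data shuffling reproducible.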
Accelerate now supports bfloat16 mixed precision training. As a result, the old --fp16 argument has been deprecated in favor of the more generic --mixed_precision.
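What makes bfloat16 different from fp16 is that it keeps float32's full 8-bit exponent (so the same dynamic range, far wider than fp16's) while dropping most of the mantissa. A stdlib sketch of the format, approximating the conversion by truncating a float32 to its top 16 bits (real hardware rounds to nearest; truncation is used here only for illustration):

```python
import struct


def to_bfloat16(x: float) -> float:
    """Round-trip a float through bfloat16 by keeping only the top
    two bytes of its big-endian float32 encoding (truncation)."""
    b = struct.pack(">f", x)                       # 4-byte float32
    return struct.unpack(">f", b[:2] + b"\x00\x00")[0]  # keep sign+exponent+7 mantissa bits
```

For example, 1.0 survives exactly, while a value like pi loses low-order mantissa bits; crucially, a magnitude such as 1e38 stays finite in bfloat16 even though it would overflow fp16 (whose maximum is about 65504), which is why bf16 training typically needs no loss scaling.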
You can now type accelerate env to have a copy-pastable summary of your environment and default configuration. Very convenient when opening a new issue!
The documentation has been switched to the new Hugging Face frontend, like Transformers and Datasets.
- store_true on argparse in nlp example by @monologg in https://github.com/huggingface/accelerate/pull/183
- set_to_none in Optimizer.zero_grad by @sgugger in https://github.com/huggingface/accelerate/pull/189
- debug_launcher by @sgugger in https://github.com/huggingface/accelerate/pull/259

Full Changelog: https://github.com/huggingface/accelerate/compare/v0.5.1...v0.6.0