v0.7.0: Logging API, FSDP, batch size finder and examples revamp
Use any of your favorite experiment-tracking libraries (TensorBoard, Wandb, CometML...) inside your Accelerate training scripts with just a few lines of code. All details are in the documentation.
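The core idea of the tracking API is fan-out: you configure the Accelerator with the backends you want, then a single log call forwards your metrics to all of them. The sketch below illustrates that pattern with hypothetical stand-in trackers; these classes are not Accelerate's real implementations, just a minimal model of the behavior.

```python
# Hypothetical stand-ins illustrating the fan-out idea behind a unified
# tracking API: one log() call forwards metrics to every configured backend.
# These classes are NOT Accelerate's actual tracker implementations.
class StdoutTracker:
    """Prints metrics, standing in for e.g. a TensorBoard writer."""
    def log(self, values, step):
        print(f"step {step}: {values}")

class MemoryTracker:
    """Keeps metrics in memory, standing in for e.g. a wandb run."""
    def __init__(self):
        self.history = []

    def log(self, values, step):
        self.history.append((step, dict(values)))

class UnifiedLogger:
    """One call, every backend receives the same metrics."""
    def __init__(self, *trackers):
        self.trackers = trackers

    def log(self, values, step):
        for tracker in self.trackers:
            tracker.log(values, step)

memory = MemoryTracker()
logger = UnifiedLogger(StdoutTracker(), memory)
logger.log({"train_loss": 0.42}, step=1)
```

In a real script, the same shape appears as configuring the Accelerator for tracking, initializing the trackers once before the loop, and logging metrics each step.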
PyTorch recently released a new model wrapper for sharded DDP training called FSDP (Fully Sharded Data Parallel). This release adds support for it (note that it does not work with mixed precision yet). See all the caveats in the documentation.
Say goodbye to CUDA OOM errors with the new find_executable_batch_size decorator: decorate your training function, pick a starting batch size, and let Accelerate do the rest.
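The pattern behind the decorator can be sketched in a few lines: retry the decorated function, halving the batch size whenever an out-of-memory error is raised, until a size fits. The re-implementation below is a simplified illustration of that idea, not Accelerate's actual code (the real decorator lives in Accelerate's utilities and handles more cases).

```python
import functools

def find_executable_batch_size_sketch(function=None, starting_batch_size=128):
    """Simplified sketch of the batch-size-finder pattern: retry the
    decorated function, halving batch_size on each out-of-memory error."""
    if function is None:
        return functools.partial(
            find_executable_batch_size_sketch,
            starting_batch_size=starting_batch_size,
        )

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        batch_size = starting_batch_size
        while batch_size > 0:
            try:
                return function(batch_size, *args, **kwargs)
            except RuntimeError as e:
                if "out of memory" in str(e).lower():
                    batch_size //= 2  # halve and retry
                else:
                    raise
        raise RuntimeError("No executable batch size found, reached zero.")

    return wrapper

# Usage sketch: pretend any batch size above 32 triggers an OOM error.
attempts = []

@find_executable_batch_size_sketch(starting_batch_size=128)
def train(batch_size):
    attempts.append(batch_size)
    if batch_size > 32:
        raise RuntimeError("CUDA out of memory")
    return batch_size

result = train()  # tries 128, then 64, then succeeds at 32
```

Your training function takes the batch size as its first argument, and the decorator supplies progressively smaller values until one works.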
The Accelerate examples are now split in two: in the base folder you will find simple NLP and computer vision examples, as well as complete versions incorporating all features. You can also browse the examples in the by_feature subfolder, which shows exactly what code to add for each given feature (checkpointing, tracking, cross-validation, etc.).
- mixed_precision for launch command by @sgugger in https://github.com/huggingface/accelerate/pull/300
- lr_scheduler to Accelerator.prepare by @sgugger in https://github.com/huggingface/accelerate/pull/301

Full Changelog: https://github.com/huggingface/accelerate/compare/v0.6.0...v0.7.0
Fetched April 7, 2026