v1.7.0: Regional compilation, Layerwise casting hook, FSDPv2 + QLoRA
Instead of compiling the entire model at once, regional compilation targets repeated blocks (such as decoder layers) first. The compiler can then cache and reuse the optimized code for subsequent identical blocks, significantly reducing the cold-start compilation time typically seen on the first inference. Thanks to @IlyasMoutawwakil for the feature! You can view the full benchmark here, and check out our updated compilation guide for more details!
To enable this feature, set use_regional_compilation=True in the TorchDynamoPlugin configuration.
from accelerate import Accelerator
from accelerate.utils import TorchDynamoPlugin

# Configure the compilation backend
dynamo_plugin = TorchDynamoPlugin(
    use_regional_compilation=True,
    ...  # other parameters
)

# Initialize the accelerator with the plugin
accelerator = Accelerator(dynamo_plugin=dynamo_plugin)

# This will apply compile_regions to your model
model = accelerator.prepare(model)
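To illustrate the idea, here is a rough, hypothetical sketch of what regional compilation amounts to, using plain `torch.compile` on each repeated block rather than Accelerate's actual `compile_regions` implementation:

```python
import torch
from torch import nn

class Block(nn.Module):
    """A repeated unit, standing in for a decoder layer."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.linear(x))

class Model(nn.Module):
    def __init__(self, dim=8, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(Block(dim) for _ in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = Model()

# Compile each repeated block instead of the whole model: once the first
# block is compiled, the remaining structurally identical blocks can reuse
# the cached artifact, shrinking cold-start compile time.
for i, layer in enumerate(model.layers):
    model.layers[i] = torch.compile(layer)
```

The trade-off is that cross-block optimizations are given up in exchange for much faster first-run latency on models made of repeated layers.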
We've introduced a new hook that enables per-layer upcasting and downcasting (e.g., for Linear layers) during inference. This allows users to run models with separate storage and compute dtypes, resulting in memory savings. The concept was first implemented in diffusers, where downcasting models to FP8 proved effective without major quality degradation. Contributed by @sayakpaul in https://github.com/huggingface/accelerate/pull/3427
import torch
from accelerate.hooks import attach_layerwise_casting_hooks

model = ...

# Store weights in FP8, run compute in bfloat16
storage_dtype = torch.float8_e4m3fn
compute_dtype = torch.bfloat16

attach_layerwise_casting_hooks(
    model,
    storage_dtype=storage_dtype,
    compute_dtype=compute_dtype,
)
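For intuition, here is a minimal, hypothetical sketch of the underlying mechanism (not Accelerate's implementation; `attach_casting_hooks` is illustrative, and float16 is used as the storage dtype for portability): weights rest in a low-precision storage dtype, are upcast just before a layer's forward, and are downcast right after, so only one layer at a time occupies memory at compute precision.

```python
import torch
from torch import nn

def attach_casting_hooks(module, storage_dtype, compute_dtype):
    # Keep weights in storage_dtype between forwards
    module.to(storage_dtype)

    def upcast(mod, args):
        # Just before forward: cast weights and inputs to compute_dtype
        mod.to(compute_dtype)
        return tuple(a.to(compute_dtype) for a in args)

    def downcast(mod, args, output):
        # Right after forward: return weights to storage_dtype
        mod.to(storage_dtype)
        return output

    module.register_forward_pre_hook(upcast)
    module.register_forward_hook(downcast)

layer = nn.Linear(4, 4)
attach_casting_hooks(layer, storage_dtype=torch.float16, compute_dtype=torch.float32)
out = layer(torch.ones(1, 4))
# Weights sit in float16 between forwards; the matmul ran in float32.
```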
This release includes numerous new features and bug fixes. Notably, we've added support for FULL_STATE_DICT, a widely used option in FSDP, so .save_pretrained() in transformers now works with FSDP2-wrapped models. QLoRA training is supported as well, though it needs more testing. We have also resolved a backend issue related to parameter offloading to CPU, and fixed a significant memory spike that occurred when cpu_ram_efficient_loading=True was enabled. Several other minor improvements and fixes are included; see the What's Changed section for full details.
- FULL_STATE_DICT has been enabled by @S1ro1 in https://github.com/huggingface/accelerate/pull/3527
- The memory spike with cpu_ram_efficient_loading=True has been fixed by @S1ro1 in https://github.com/huggingface/accelerate/pull/3482

We have added documentation for Intel Gaudi hardware! The support has been available since v1.5.0 through this PR.
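As a rough sketch of how this fits together (parameter names follow Accelerate's FSDP plugin as of this release, but treat them as assumptions and check the FSDP usage guide for your version), enabling FSDP2 with a full state dict might look like:

```python
from accelerate import Accelerator
from accelerate.utils import FullyShardedDataParallelPlugin

# Hypothetical configuration sketch: fsdp_version=2 selects FSDP2, and
# state_dict_type="FULL_STATE_DICT" gathers the complete state dict on
# saving, which is what lets transformers' .save_pretrained() work on
# the FSDP2-wrapped model.
fsdp_plugin = FullyShardedDataParallelPlugin(
    fsdp_version=2,
    state_dict_type="FULL_STATE_DICT",
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```

This fragment only configures the plugin; actually running it requires launching under a distributed environment (e.g. via `accelerate launch`).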
dynamic argument

We've updated the logic for setting self.dynamic to explicitly preserve None rather than defaulting to False when the USE_DYNAMIC environment variable is unset. This change aligns the behavior with the PyTorch documentation for torch.compile. Thanks to @yafshar for contributing this improvement in #3567.
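The distinction matters because torch.compile(dynamic=None) means "detect dynamic shapes automatically", while dynamic=False disables them outright. A stand-alone sketch of this kind of three-state parsing (parse_use_dynamic is illustrative, not Accelerate's actual helper):

```python
import os

def parse_use_dynamic(env_var="USE_DYNAMIC"):
    # Unset -> None, which torch.compile treats as "automatically detect
    # dynamic shapes"; any set value parses as an explicit boolean.
    raw = os.environ.get(env_var)
    if raw is None:
        return None
    return raw.strip().lower() in ("1", "true", "yes")

os.environ.pop("USE_DYNAMIC", None)
print(parse_use_dynamic())   # None: defer to torch.compile's default

os.environ["USE_DYNAMIC"] = "true"
print(parse_use_dynamic())   # True: explicitly request dynamic shapes
```

Collapsing the unset case to False would silently opt every run out of automatic dynamic-shape detection, which is the behavior this release corrects.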
What's Changed

- low_precision_training guide by @sadra-barikbin in https://github.com/huggingface/accelerate/pull/3488
- torch.distributed.checkpoint.state_dict.set_model_state_dict in load_checkpoint_in_model by @ringohoffman in https://github.com/huggingface/accelerate/pull/3432
- weights_only=True by @bzhong-solink in https://github.com/huggingface/accelerate/pull/3497
- cpu_ram_efficient_loading=True by @S1ro1 in https://github.com/huggingface/accelerate/pull/3482
- accelerator.prepare + IPEX for 2+ nn.Models and/or optim.Optimizers by @mariusarvinte in https://github.com/huggingface/accelerate/pull/3517
- set_epoch does not take effect by @hongjx175 in https://github.com/huggingface/accelerate/pull/3556
- _cast_and_contiguous by @dlvp in https://github.com/huggingface/accelerate/pull/3559
- cpu_ram_efficient_loading by @SumanthRH in https://github.com/huggingface/accelerate/pull/3307
- synchronize call for xpu in _gpu_gather by @faaany in https://github.com/huggingface/accelerate/pull/3563

Full Changelog: https://github.com/huggingface/accelerate/compare/v1.6.0...v1.7.0