🧨 Diffusers now uses 🤗 PEFT, new tuning methods, better quantization support, higher flexibility and more
🧨 Diffusers now leverages PEFT as a backend for LoRA inference for Stable Diffusion models (#873, #993, #961). The relevant PRs on 🧨 Diffusers are https://github.com/huggingface/diffusers/pull/5058, https://github.com/huggingface/diffusers/pull/5147, https://github.com/huggingface/diffusers/pull/5151 and https://github.com/huggingface/diffusers/pull/5359. This unlocks a vast number of practically demanding use cases around adapter-based inference 🚀. You can now perform adapter-based inference with easy-to-use APIs, with support for different checkpoint formats (Diffusers format, Kohya format, ...).
For details, refer to the documentation at Inference with PEFT.
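To make the idea behind adapter-based inference concrete, here is a minimal NumPy sketch of the underlying math, not the Diffusers/PEFT API: a LoRA adapter adds a low-rank update `B @ A` to a frozen weight `W`, scaled by `alpha / r`, and several adapters can be blended with per-adapter weights. All names here (`effective_weight`, the adapter names) are illustrative.

```python
import numpy as np

# Conceptual sketch of multi-adapter LoRA inference (illustrative math only,
# not the Diffusers/PEFT API). A LoRA adapter adds a low-rank update B @ A
# to a frozen weight W, scaled by alpha / r; several adapters can be
# combined with per-adapter weights.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4.0

W = rng.normal(size=(d_out, d_in))  # frozen base weight
adapters = {
    "style":   (rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))),
    "subject": (rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))),
}

def effective_weight(W, adapters, adapter_weights, alpha=4.0, r=2):
    """Base weight plus the weighted sum of LoRA updates."""
    W_eff = W.copy()
    for name, weight in adapter_weights.items():
        B, A = adapters[name]
        W_eff += weight * (alpha / r) * (B @ A)
    return W_eff

x = rng.normal(size=(d_in,))
# Blend two adapters, 70% "style" and 30% "subject":
y = effective_weight(W, adapters, {"style": 0.7, "subject": 0.3}) @ x
```

Because the base weight stays frozen, switching or re-weighting adapters only changes the small low-rank terms, which is what makes fast adapter switching at inference time cheap.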
PEFT now raises an error when a LoRA adapter is created with rank 0 (r=0). This used to be possible, in which case the adapter was silently ignored.

As always, a bunch of small improvements, bug fixes and doc improvements were added. We thank all the external contributors, both new and recurring. Below is the list of all changes since the last release.
- [CI] Pin diffusers by @younesbelkada in https://github.com/huggingface/peft/pull/936
- [LoRA] Add scale_layer / unscale_layer by @younesbelkada in https://github.com/huggingface/peft/pull/935
- [tests] add transformers & diffusers integration tests by @younesbelkada in https://github.com/huggingface/peft/pull/962
- safe_merge option in merge by @younesbelkada in https://github.com/huggingface/peft/pull/1001
- [core / LoRA] Add safe_merge to bnb layers by @younesbelkada in https://github.com/huggingface/peft/pull/1009
- [LoRA] Revert original behavior for scale / unscale by @younesbelkada in https://github.com/huggingface/peft/pull/1029
- [LoRA] Raise error when adapter name not found in set_scale by @younesbelkada in https://github.com/huggingface/peft/pull/1034
- [core] Fix use_reentrant issues by @younesbelkada in https://github.com/huggingface/peft/pull/1036
- [tests] Update Dockerfile to use cuda 12.2 by @younesbelkada in https://github.com/huggingface/peft/pull/1050

Full Changelog: https://github.com/huggingface/peft/compare/v0.5.0...v0.6.0
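The safe_merge option mentioned in the changelog above guards the merge of adapter weights into the base model. Here is a minimal sketch of the idea with a hypothetical helper (`merge_lora` is not PEFT's actual API): merge into a copy first, verify the result is finite, and only then return it.

```python
import numpy as np

def merge_lora(W, A, B, scale, safe_merge=False):
    """Merge a LoRA update into a base weight (hypothetical helper, not PEFT's API).

    With safe_merge=True, the update is applied to a copy first and the
    result is checked for NaNs/Infs before being returned, so a broken
    adapter cannot corrupt the base weights in place.
    """
    update = scale * (B @ A)
    if safe_merge:
        merged = W + update  # creates a fresh array; W is untouched
        if not np.isfinite(merged).all():
            raise ValueError("NaNs detected in merged weights, aborting merge")
        return merged
    W += update  # fast path: merge in place
    return W
```

With this guard, an adapter containing a NaN raises an error instead of silently poisoning the merged weights.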