We tested PEFT on @OpenAI's Whisper Large model and got: i) 5x larger batch sizes ii) Less than 8GB GPU VRAM iii) Best part? Almost no degradation to WER 🤯
Without PEFT, full fine-tuning updates all of Whisper Large's weights; with PEFT, only a small set of adapter weights is trained on top of the frozen base model.
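To see why this slashes GPU memory, here is a back-of-the-envelope sketch of the optimizer-state footprint in both setups. The parameter count (~1.5B for Whisper Large) and the ~1% trainable-parameter ratio are rough illustrative assumptions, not measurements from the release:

```python
# Back-of-the-envelope memory arithmetic: full fine-tuning vs. adapter-style
# PEFT. All concrete numbers are illustrative assumptions for this sketch.

def adam_training_bytes(trainable_params: int, bytes_per_param: int = 4) -> int:
    """Memory for gradients + Adam moments over the trainable weights.

    Each trainable fp32 parameter needs roughly one extra copy for its
    gradient plus two copies for Adam's running moments. The frozen base
    weights are excluded: they cost the same in both setups (and can be
    INT8-quantized when training with PEFT).
    """
    return trainable_params * bytes_per_param * 3

TOTAL_PARAMS = 1_500_000_000          # assumed size of Whisper Large
PEFT_TRAINABLE = TOTAL_PARAMS // 100  # assume ~1% of weights are adapters

full = adam_training_bytes(TOTAL_PARAMS)
peft = adam_training_bytes(PEFT_TRAINABLE)

print(f"full fine-tuning optimizer state: {full / 1e9:.1f} GB")   # 18.0 GB
print(f"PEFT optimizer state:             {peft / 1e9:.2f} GB")   # 0.18 GB
print(f"savings factor: {full // peft}x")                         # 100x
```

The freed optimizer-state memory is what allows the much larger batch sizes reported above.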
## `prepare_for_int8_training` utility

This utility preprocesses the base model so that it is ready for INT8 training.
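The real utility operates on a PyTorch model; as a torch-free illustration, here is a minimal mock of the kind of preprocessing such a utility typically performs (freeze the base weights, keep numerically sensitive layers like layer norms in fp32). `Param`, `prepare_for_int8_training_sketch`, and `toy_model` are invented names for this sketch, not the library's API:

```python
# Torch-free mock of typical INT8-training preparation. The real peft
# utility works on a PyTorch nn.Module; everything below is a stand-in
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Param:
    name: str
    dtype: str = "int8"
    requires_grad: bool = True

def prepare_for_int8_training_sketch(params: list[Param]) -> list[Param]:
    """Typical preprocessing before training on top of an INT8 base model:

    1. freeze every base-model parameter (adapters are added afterwards), and
    2. upcast numerically sensitive layers (e.g. layer norms) to fp32.
    """
    for p in params:
        p.requires_grad = False          # base model is frozen
        if "layer_norm" in p.name:
            p.dtype = "float32"          # upcast norms for training stability
    return params

toy_model = [
    Param("encoder.attn.q_proj.weight"),
    Param("encoder.layer_norm.weight"),
    Param("decoder.attn.v_proj.weight"),
]
prepared = prepare_for_int8_training_sketch(toy_model)
for p in prepared:
    print(p.name, p.dtype, p.requires_grad)
```

After preparation, only the adapter weights added by PEFT are trainable; the INT8 base model serves purely as a frozen backbone.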
- [core] add prepare_model_for_training by @younesbelkada in https://github.com/huggingface/peft/pull/85
- [core] Some changes with prepare_model_for_training & few fixes by @younesbelkada in https://github.com/huggingface/peft/pull/105

## `disable_adapter()` context manager

Allows disabling the adapter layers in order to get the outputs of the frozen base model. An exciting application of this feature is that only a single model copy is needed to produce both the policy-model and the reference-model generations in RLHF.
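The RLHF trick can be sketched with a minimal, self-contained context manager. `LoraLinearSketch` and `disable_adapter` below are illustrative stand-ins written for this sketch, not PEFT's actual classes; they mimic a linear layer whose adapter update can be temporarily switched off:

```python
# Minimal sketch of a disable_adapter()-style context manager. One set of
# weights yields both the adapted (policy) output and the frozen base
# (reference) output, so RLHF needs only a single model copy in memory.

from contextlib import contextmanager

class LoraLinearSketch:
    def __init__(self, base_weight: float, lora_delta: float):
        self.base_weight = base_weight   # frozen base-model weight
        self.lora_delta = lora_delta     # learned low-rank adapter update
        self.adapter_enabled = True

    def forward(self, x: float) -> float:
        w = self.base_weight
        if self.adapter_enabled:
            w += self.lora_delta         # adapter active: adapted model
        return w * x

@contextmanager
def disable_adapter(layer: LoraLinearSketch):
    """Temporarily bypass the adapter so the same weights behave as the
    frozen reference model; the adapter is re-enabled on exit."""
    layer.adapter_enabled = False
    try:
        yield layer
    finally:
        layer.adapter_enabled = True

layer = LoraLinearSketch(base_weight=2.0, lora_delta=0.5)
policy_out = layer.forward(3.0)              # 7.5: base + adapter
with disable_adapter(layer):
    reference_out = layer.forward(3.0)       # 6.0: frozen base only
restored_out = layer.forward(3.0)            # 7.5: adapter re-enabled
print(policy_out, reference_out, restored_out)
```

Because the adapter is re-enabled in the `finally` block, the policy model is intact even if generation raises inside the context.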
## What's Changed

- [core] add prepare_model_for_training by @younesbelkada in https://github.com/huggingface/peft/pull/85
- [bnb] add flan-t5 example by @younesbelkada in https://github.com/huggingface/peft/pull/86
- prepare_model_for_training flexible by @pacman100 in https://github.com/huggingface/peft/pull/90
- bnb optional by @pacman100 in https://github.com/huggingface/peft/pull/97
- [core] Some changes with prepare_model_for_training & few fixes by @younesbelkada in https://github.com/huggingface/peft/pull/105
- EleutherAI/gpt-neox-20b to support matrix by @pacman100 in https://github.com/huggingface/peft/pull/109
- [core] Fix autocast issue by @younesbelkada in https://github.com/huggingface/peft/pull/121
- prepare_for_int8_training by @pacman100 in https://github.com/huggingface/peft/pull/127
- pyproject.toml by @SauravMaheshkar in https://github.com/huggingface/peft/pull/125

The following contributors have made significant changes to the library over the last release:
Full Changelog: https://github.com/huggingface/peft/compare/v0.1.0...v0.2.0