Docs, Testing Suite, Multi Adapter Support, New methods and examples
With task guides, conceptual guides, integration guides, and code references all available at your fingertips, 🤗 PEFT's docs (found at https://huggingface.co/docs/peft) provide an insightful and easy-to-follow resource for anyone looking to learn how to use 🤗 PEFT. Whether you're a seasoned pro or just starting out, PEFT's documentation will help you get the most out of it.
The testing suite comprises both unit and integration tests, and rigorously exercises core features, examples, and various models on different setups, including single and multiple GPUs. This commitment to testing helps ensure that PEFT maintains the highest levels of correctness, usability, and performance, while continuously improving in all areas.
* [CI] Add ci tests by @younesbelkada in https://github.com/huggingface/peft/pull/203
* [CI] Add more ci tests by @younesbelkada in https://github.com/huggingface/peft/pull/223
* [tests] Adds more tests + fix failing tests by @younesbelkada in https://github.com/huggingface/peft/pull/238
* [tests] Adds GPU tests by @younesbelkada in https://github.com/huggingface/peft/pull/256
* [tests] add slow tests to GH workflow by @younesbelkada in https://github.com/huggingface/peft/pull/304
* [core] Better log messages by @younesbelkada in https://github.com/huggingface/peft/pull/366

PEFT just got even more versatile with its new Multi Adapter Support! Now you can train and infer with multiple adapters, or even combine multiple LoRA adapters in a weighted combination. This is especially handy for RLHF training, where you can save memory by using a single base model with multiple adapters for the actor, critic, reward, and reference models. And the icing on the cake? Check out the LoRA DreamBooth inference example notebook to see this feature in action.
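Conceptually, a weighted combination of LoRA adapters mixes each adapter's low-rank update before adding it to the frozen base weight. The numpy sketch below illustrates the idea only; the matrices and the `combine` helper are hypothetical stand-ins, not PEFT's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16
scaling = alpha / r

W = rng.normal(size=(d, d))  # frozen base weight

# Two (hypothetical) trained LoRA adapters; each contributes a low-rank
# update of the form B @ A with rank r.
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))

def combine(weights):
    """Weighted combination of the two adapters' low-rank updates."""
    delta = weights[0] * (B1 @ A1) + weights[1] * (B2 @ A2)
    return W + scaling * delta

# Blend 70% of adapter 1 with 30% of adapter 2.
W_mix = combine([0.7, 0.3])
```

Because each adapter is just a small pair of matrices, several of them can share one frozen base model, which is where the RLHF memory savings come from.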
PEFT just got even better, thanks to the contributions of the community! The AdaLoRA method is one of the exciting new additions. It takes the highly regarded LoRA method and improves it by allocating trainable parameters across the model to maximize performance within a given parameter budget. Another standout is the Adaption Prompt method, which enhances the already popular Prefix Tuning by introducing zero init attention.
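To make the AdaLoRA idea concrete: instead of a fixed-rank `B @ A` update, AdaLoRA parametrizes the update in an SVD-like form and prunes the least important "singular values", so the effective rank per weight matrix adapts to the parameter budget. The following numpy sketch is a simplified illustration under assumed shapes and a made-up importance threshold, not AdaLoRA's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 8, 4

# SVD-like parametrization of the update: delta_W = P @ diag(lam) @ Q
P = rng.normal(size=(d, r))
lam = np.array([1.5, 0.9, 0.05, 0.01])  # learned "singular values"
Q = rng.normal(size=(r, d))

# Budget allocation by pruning: zero out unimportant components, shrinking
# the effective rank of this layer's update from 4 to 2.
mask = np.abs(lam) >= 0.1
delta_W = P @ np.diag(lam * mask) @ Q
```

Pruned components free up budget that can be reallocated to layers where the update matters more.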
Good news for LoRA users! PEFT now allows you to merge LoRA parameters into the base model's parameters, giving you the freedom to remove the PEFT wrapper and apply downstream optimizations related to inference and deployment. Plus, you can use all the features that are compatible with the base model without any issues.
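The reason merging is possible is that a LoRA update is purely additive: folding the scaled low-rank product into the base weight gives a plain dense layer with identical outputs. A minimal numpy sketch of that identity (illustrative shapes, not PEFT's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, alpha = 8, 2, 16
scaling = alpha / r

W = rng.normal(size=(d, d))                          # frozen base weight
A, B = rng.normal(size=(r, d)), rng.normal(size=(d, r))  # trained LoRA pair
x = rng.normal(size=d)

# With the adapter attached, the forward pass adds the low-rank path.
y_adapter = W @ x + scaling * (B @ (A @ x))

# Merging folds the update into the base weight once, up front;
# afterwards the PEFT wrapper can be dropped entirely.
W_merged = W + scaling * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
```

In PEFT this is what the merge utilities referenced below (`merge_lora`, `merge_and_unload`) expose: after merging, the model is an ordinary base model, so standard inference and deployment optimizations apply unchanged.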
* [utils] add merge_lora utility function by @younesbelkada in https://github.com/huggingface/peft/pull/227
* [core] Fix peft multi-gpu issue by @younesbelkada in https://github.com/huggingface/peft/pull/145
* [CI] Add ci tests by @younesbelkada in https://github.com/huggingface/peft/pull/203
* main by @younesbelkada in https://github.com/huggingface/peft/pull/224
* [CI] Add more ci tests by @younesbelkada in https://github.com/huggingface/peft/pull/223
* [core] Fix offload issue by @younesbelkada in https://github.com/huggingface/peft/pull/248
* [Automation] Add stale bot by @younesbelkada in https://github.com/huggingface/peft/pull/247
* [Automation] Update stale.py by @younesbelkada in https://github.com/huggingface/peft/pull/254
* [tests] Adds more tests + fix failing tests by @younesbelkada in https://github.com/huggingface/peft/pull/238
* [tests] Adds GPU tests by @younesbelkada in https://github.com/huggingface/peft/pull/256
* [test] Add Dockerfile by @younesbelkada in https://github.com/huggingface/peft/pull/278
* [tests] add CI training tests by @younesbelkada in https://github.com/huggingface/peft/pull/311
* merge_and_unload when having additional trainable modules by @pacman100 in https://github.com/huggingface/peft/pull/322
* pip caching to CI by @SauravMaheshkar in https://github.com/huggingface/peft/pull/314
* [tests] add slow tests to GH workflow by @younesbelkada in https://github.com/huggingface/peft/pull/304
* [core] Better log messages by @younesbelkada in https://github.com/huggingface/peft/pull/366
* try and finally in disable_adapter() to catch exceptions by @mukobi in https://github.com/huggingface/peft/pull/368
* [CI] Fix nightly CI issues by @younesbelkada in https://github.com/huggingface/peft/pull/375

The following contributors have made significant changes to the library over the last release:
* @QingruZhang
* @yeoedward
* @Splo2t