@iboing and @5eqn contributed CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning. This task-driven initialization method has two modes, knowledge-preservation and instruction-preservation, both of which use external data to select ranks intelligently. The former selects those ranks that correspond to weights not affiliated with knowledge from, say, a QA dataset. The latter selects those ranks that correspond most to the task at hand (e.g., a classification task). (#2231)
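The rank-selection idea can be pictured with a toy sketch. This is purely conceptual and not the PEFT CorDA API: the scores, the rank `r`, and the helper `top_ranks` are all made up for illustration, standing in for how strongly the external data activates each decomposed direction of a weight matrix.

```python
def top_ranks(scores, k):
    """Indices of the k highest-scoring decomposed directions (toy helper)."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

scores = [0.9, 0.1, 0.7, 0.05, 0.3]  # made-up per-rank context scores
r = 2                                # adapter rank

# Instruction-preservation: fine-tune the directions that correspond
# most to the task data.
adapt_ipm = top_ranks(scores, r)                # [0, 2]

# Knowledge-preservation: keep knowledge-heavy directions frozen and
# fine-tune the least knowledge-affiliated ones instead.
adapt_kpm = top_ranks([-s for s in scores], r)  # [3, 1]
```

Either way, the external data decides which ranks go into the adapter and which stay untouched in the base weights.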
The new Trainable Tokens tuner allows selective training of tokens without re-training the full embedding matrix, e.g. when adding support for reasoning / thinking tokens. This is much more memory-efficient, and the saved checkpoint is much smaller. It can be used standalone or in conjunction with LoRA adapters by passing trainable_token_indices to LoraConfig. (#2376)
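A back-of-the-envelope calculation shows where the savings come from; the vocabulary and hidden sizes below are illustrative numbers, not tied to any particular model.

```python
# Compare training the full embedding matrix vs. only a few newly
# added token rows (all sizes are made-up examples).
vocab_size, hidden_dim = 32_000, 4_096
new_tokens = 3  # e.g. added reasoning / thinking tokens

full_embedding_params = vocab_size * hidden_dim   # 131,072,000 params
trainable_token_params = new_tokens * hidden_dim  # 12,288 params

print(full_embedding_params // trainable_token_params)  # over 10,000x fewer
```

The checkpoint only needs to store the handful of trained rows, which is why it stays small.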
LoRA now supports targeting multihead attention modules (for now, only those with _qkv_same_embed_dim=True). These modules were tricky to support because they may expose linear submodules yet not use those submodules' forward methods, so explicit support was needed. (#1324)
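Why that is tricky can be shown with a toy sketch (plain Python, not torch code): a parent module owns a linear submodule but computes with its weight directly, so wrapping the submodule's forward, as an adapter normally would, has no effect.

```python
class Linear:
    """Toy stand-in for a linear layer."""
    def __init__(self, weight):
        self.weight = weight
    def forward(self, x):
        return self.weight * x

class Attention:
    """Toy parent that bypasses its submodule's forward."""
    def __init__(self):
        self.out_proj = Linear(weight=2)
    def forward(self, x):
        # Uses out_proj.weight directly, never out_proj.forward().
        return self.out_proj.weight * x

attn = Attention()
# Patch the submodule's forward, as a naive adapter wrapper would:
attn.out_proj.forward = lambda x: 999
print(attn.forward(3))  # still 6 -- the wrapper is never called
```

Hooking the submodule is therefore not enough; the parent module itself has to be handled.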
Hotswapping now allows different alpha scalings and ranks without recompiling the model, provided the model is prepared by calling prepare_model_for_compiled_hotswap() before compiling it. (#2177)
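One way to see why differing ranks need no recompilation: if the LoRA factors are zero-padded to a common maximum rank, swapping in a smaller-rank adapter changes no tensor shapes (so torch.compile sees nothing new), while the product stays numerically identical. The sketch below illustrates just that padding idea with plain-Python matrices; it is not the PEFT implementation.

```python
def matmul(B, A):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

# A rank-1 adapter for a 2x2 weight: A is (r x in), B is (out x r).
A = [[1.0, 2.0]]        # 1 x 2
B = [[3.0], [4.0]]      # 2 x 1

# Zero-pad both factors to max_rank = 2: shapes are now fixed.
A_pad = A + [[0.0, 0.0]]            # 2 x 2
B_pad = [row + [0.0] for row in B]  # 2 x 2

print(matmul(B, A) == matmul(B_pad, A_pad))  # True: delta unchanged
```

The padded slots contribute only zeros, so any adapter up to the maximum rank fits the same shapes.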
GPTQModel support was added in #2247 as a replacement for AutoGPTQ, which is no longer maintained.
all-linear can now be used as target_modules for custom (non-transformers) models (#2267). With this change comes a bugfix: previously, non-linear layers could be selected when they shared the same name with a linear layer (e.g., bar.foo and baz.foo).
New PEFT methods can now be registered with a register_peft_method() call. (#2282)
PEFT_TYPE_TO_MODEL_MAPPING is now deprecated and should not be relied upon. Use PEFT_TYPE_TO_TUNER_MAPPING instead. (#2282)
Fixed a bug where modules_to_save keys wrongly matched parts of the state dict if the key was a substring of another key (e.g., classifier and classifier2). (#2334)
Input dtype casting performed by PEFT layers can now be disabled with disable_input_dtype_casting=True. (#2353)
rank_pattern and alpha_pattern, used by many adapters, now also support matching full paths by prefixing the pattern with a caret, for example ^foo to target model.foo but not model.bar.foo. (#2419)

What's Changed:
- adapter_name conflict with tuner by @pzdkn in https://github.com/huggingface/peft/pull/2254
- "all-linear" to target custom models by @BenjaminBossan in https://github.com/huggingface/peft/pull/2267
- __all__ by @bluenote10 in https://github.com/huggingface/peft/pull/2280
- config.py by @innerlee in https://github.com/huggingface/peft/pull/2297
- prepare_model_for_kbit_training docstring by @NilBiescas in https://github.com/huggingface/peft/pull/2305
- resize_token_embeddings to docs by @bingwork in https://github.com/huggingface/peft/pull/2290
- get_peft_model() for in-place base model modification by @d-kleine in https://github.com/huggingface/peft/pull/2313
- low_cpu_mem_usage=True with 8bit bitsandbytes by @BenjaminBossan in https://github.com/huggingface/peft/pull/2325
- PEFT_TYPE_TO_MODEL_MAPPING variable with deprecation by @BenjaminBossan in https://github.com/huggingface/peft/pull/2328
- modules_to_save loading if substring by @BenjaminBossan in https://github.com/huggingface/peft/pull/2334
- modules_to_save by @BenjaminBossan in https://github.com/huggingface/peft/pull/2220
- torch.compile tests and docs by @BenjaminBossan in https://github.com/huggingface/peft/pull/2332
- nn.Conv1d by @CCLDArjun in https://github.com/huggingface/peft/pull/2333
- prepare_model_for_compiled_hotswap raises when no adapter was found by @BenjaminBossan in https://github.com/huggingface/peft/pull/2375
- hf_hub_download arguments are used when loading locally by @henryzhengr in https://github.com/huggingface/peft/pull/2373
- all-linear target modules by @BenjaminBossan in https://github.com/huggingface/peft/pull/2391
- PeftConfig.from_pretrained by @BenjaminBossan in https://github.com/huggingface/peft/pull/2397
- .eval() for inference by @faaany in https://github.com/huggingface/peft/pull/2408

Full Changelog: https://github.com/huggingface/peft/compare/v0.14.0...v0.15.0
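The caret semantics for rank_pattern / alpha_pattern mentioned above can be sketched with Python's re module. This mirrors the described behavior (caret anchors the full dotted path; otherwise the pattern matches a trailing path component) and is my reading of the feature, not PEFT's exact implementation.

```python
import re

def matches(pattern, module_path):
    """Toy version of caret-aware pattern matching on module paths."""
    if pattern.startswith("^"):
        # Caret: anchor the pattern against the full dotted path.
        return re.match(pattern + "$", module_path) is not None
    # Otherwise: match the trailing component(s) of the path.
    return re.search(r"(^|\.)" + pattern + "$", module_path) is not None

print(matches("^foo", "foo"))      # True:  targets model.foo
print(matches("^foo", "bar.foo"))  # False: not model.bar.foo
print(matches("foo", "bar.foo"))   # True:  suffix match, old behavior
```

Without the caret, any module ending in foo matches; with it, only the exact path does.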