## SSDTrainer: Simple Self-Distillation

A new experimental `SSDTrainer` implements the method described in *Embarrassingly Simple Self-Distillation Improves Code Generation*. SSD samples completions from the model itself at a training-time temperature/truncation setting, then fine-tunes on those raw, unverified samples with standard cross-entropy loss. No reward model, verifier, teacher model, or RL: just prompts and the model.
```python
from datasets import Dataset
from trl.experimental.ssd import SSDConfig, SSDTrainer

dataset = Dataset.from_dict({
    "prompt": [
        [{"role": "user", "content": "Write a function to add two numbers."}],
        [{"role": "user", "content": "Write a function to check if a number is prime."}],
    ],
})

trainer = SSDTrainer(
    model="Qwen/Qwen3-4B-Instruct",
    args=SSDConfig(
        output_dir="ssd-model",
        temperature=0.6,  # T_train from the paper
        top_k=20,
        top_p=0.95,
        learning_rate=5e-6,
    ),
    train_dataset=dataset,
)
trainer.train()
```
by @kashif in https://github.com/huggingface/trl/pull/5505
## GRPOTrainer

When tool calls produce more tokens than `max_completion_length` allows, `GRPOTrainer` now rolls back the tool messages/images added in the current iteration instead of trying to truncate them. This removes ~80 lines of fragile, image-boundary-aware bookkeeping in favor of a ~15-line snapshot-and-rollback. Since overlong samples almost always get rewarded as failures anyway, the learning signal is effectively unchanged, but the code is dramatically simpler and no longer needs per-VLM-family vision-token lookup tables.
by @qgallouedec in https://github.com/huggingface/trl/pull/5521
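The snapshot-and-rollback idea can be sketched as follows. This is an illustrative example, not TRL's actual internals: the function name `run_tool_turn` and its arguments are hypothetical, but the pattern (remember list lengths, append, restore on overflow) is the one described above.

```python
# Hypothetical sketch of snapshot-and-rollback for tool turns: before appending
# a tool turn, record the lengths of the mutable conversation state; if the new
# tokens would blow the budget, restore the state instead of truncating.

def run_tool_turn(messages, images, new_messages, new_images, new_token_count,
                  tokens_used, max_completion_length):
    """Append a tool turn, rolling back if it would exceed the token budget."""
    # Snapshot: just remember how long each list was.
    msg_len, img_len = len(messages), len(images)
    messages.extend(new_messages)
    images.extend(new_images)
    if tokens_used + new_token_count > max_completion_length:
        # Rollback: drop everything added this iteration.
        del messages[msg_len:]
        del images[img_len:]
        return tokens_used, False  # turn rejected
    return tokens_used + new_token_count, True
```

Because the snapshot is just two integers, there is no need for boundary-aware truncation logic or vision-token lookup tables: either the whole turn fits, or none of it is kept.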
Continuing the effort from v1.1:

- `{% generation %}` markers, enabling assistant-only loss masking for DeepSeek-V3 models. By @RudrenduPaul in https://github.com/huggingface/trl/pull/5527

As a result of tightened detection (see fixes below), the list of templates reported as tool-calling capable is now correct: notably, the basic Llama 3 template is no longer falsely classified as tool-calling capable.
A major cleanup sweep keeps `KTOTrainer` and `DPOTrainer` in lockstep: same initialization patterns, same config surface, same precompute behavior. All by @albertvillanova:

- `precompute_ref_batch_size` in KTO (https://github.com/huggingface/trl/pull/5530)
- `ref_model` initialization (https://github.com/huggingface/trl/pull/5534)
- `None` args (https://github.com/huggingface/trl/pull/5531)
- `generate_during_eval` (https://github.com/huggingface/trl/pull/5551)
- `ref_model` skipped when `precompute_ref_log_probs` is set in DPO/KTO (https://github.com/huggingface/trl/pull/5542)
- `prepare_multimodal_messages` by @albertvillanova in https://github.com/huggingface/trl/pull/5474
- `prepare_multimodal_messages` by @albertvillanova in https://github.com/huggingface/trl/pull/5508
- `supports_tool_calling` falsely accepting templates that drop assistant `tool_calls` by @qgallouedec in https://github.com/huggingface/trl/pull/5517
- `add_response_schema` for VLM processors: the schema was being set on the outer processor instead of the inner tokenizer, so it had no effect. This also collapses a handful of `__init__`/decode-gate workarounds. By @qgallouedec in https://github.com/huggingface/trl/pull/5520
- `use_transformers_paged` in `GRPOConfig` and `RLOOConfig` (and removed entirely from experimental `OnlineDPOConfig`, `GOLDConfig`, `SelfDistillationConfig`). Will be removed from the remaining configs in v2.0.0. In a small A/B benchmark (Qwen3-0.6B GRPO), the paged path is ~20% slower and uses ~6x more peak VRAM than the default; it's also superseded by transformers continuous batching. By @qgallouedec in https://github.com/huggingface/trl/pull/5544
- `chat_templates/README` by @qgallouedec in https://github.com/huggingface/trl/pull/5545

Full Changelog: https://github.com/huggingface/trl/compare/v1.1.0...v1.2.0
## DistillationTrainer for efficient on-policy distillation

Read the blog post: https://huggingface.co/spaces/HuggingFaceTB/trl-distillation-trainer
The new DistillationTrainer implements on-policy knowledge distillation as described in On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes. It extends the ideas from the GKDTrainer with three key optimizations: a generation buffer that decouples the training microbatch size from the generation batch size (up to 40x speedup), external teacher server support so the teacher doesn't need to fit on training GPUs, and binary-encoded logprob payloads that shrink transfer payloads by ~5x.
```python
from datasets import load_dataset
from trl.experimental.distillation import DistillationConfig, DistillationTrainer

dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.map(
    lambda x: {"messages": [{"role": "user", "content": x["question"]}]},
    remove_columns=dataset.column_names,
)

trainer = DistillationTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    teacher_model="Qwen/Qwen2.5-7B-Instruct",
    args=DistillationConfig(
        output_dir="results/distill-qwen-gsm8k",
        lmbda=1.0,  # fully on-policy (student generates)
        beta=1.0,  # reverse KL
        teacher_model_init_kwargs={"torch_dtype": "bfloat16"},
    ),
    train_dataset=dataset,
)
trainer.train()
```
by @cmpatino in https://github.com/huggingface/trl/pull/5407, https://github.com/huggingface/trl/pull/5500 and https://github.com/huggingface/trl/pull/5501
## AsyncGRPOTrainer

`AsyncGRPOTrainer` now supports a chunked LM-head path that computes per-token log-probs and entropy via online logsumexp without materializing the full [N, V] logits tensor. Combined with `completion_mask` filtering to skip prompt tokens, this brings massive memory savings on long sequences, up to 44x lower peak-allocated memory on an 8192-token sequence:
| chunk_lm_head_size | Peak Alloc (GB) | Reduction | Wall Time (ms) |
|---|---|---|---|
| None (baseline) | 18.55 | 1.00x | 808.7 |
| 4096 | 0.42 | 44.32x | 459.0 |
| 8192 | 0.76 | 24.34x | 393.0 |
Enable it via the new chunk_lm_head_size option in AsyncGRPOConfig:
```python
from trl.experimental.async_grpo import AsyncGRPOConfig, AsyncGRPOTrainer

trainer = AsyncGRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    args=AsyncGRPOConfig(chunk_lm_head_size=4096),
    ...
)
```
Note: mutually exclusive with use_liger_kernel (both replace the LM head forward pass).
by @AmineDiro in https://github.com/huggingface/trl/pull/5349
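The online-logsumexp trick behind `chunk_lm_head_size` can be illustrated in pure Python (a minimal sketch, not TRL's implementation, which operates on GPU tensors): the vocabulary is processed in chunks while a running max and running sum are maintained, so the full logit row never has to exist at once.

```python
import math

def chunked_logsumexp(logits, chunk_size):
    """logsumexp over a list of logits, seen one chunk at a time."""
    running_max = -math.inf
    running_sum = 0.0
    for start in range(0, len(logits), chunk_size):
        chunk = logits[start:start + chunk_size]
        new_max = max(running_max, max(chunk))
        # Rescale the running sum to the new max before adding this chunk.
        running_sum = running_sum * math.exp(running_max - new_max)
        running_sum += sum(math.exp(x - new_max) for x in chunk)
        running_max = new_max
    return running_max + math.log(running_sum)

def token_logprob(logits, token_id, chunk_size=4096):
    # log p(token) = logit[token] - logsumexp(all logits)
    return logits[token_id] - chunked_logsumexp(logits, chunk_size)
```

The result is exactly the full logsumexp (up to floating-point error), but peak memory scales with the chunk size rather than the vocabulary size, which is why the table above shows such large reductions.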
## {% generation %} support in training chat templates

SFT with `assistant_only_loss=True` requires chat templates to include `{% generation %}` / `{% endgeneration %}` markers so that `return_assistant_tokens_mask=True` produces correct masks. Very few models ship these markers natively, so users hit a cryptic error when enabling assistant-only loss with models like Qwen3, Llama 3 or GPT-OSS.
SFTTrainer now automatically swaps in a patched training chat template when the original template lacks generation markers — no manual template surgery required. Training templates are shipped for Qwen2.5, Qwen3, Llama 3 and GPT-OSS, stored as standalone .jinja files under trl/chat_templates/ for readability, diffability, and editor syntax highlighting.
```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model="Qwen/Qwen3-4B",
    args=SFTConfig(assistant_only_loss=True),  # now just works
    train_dataset=dataset,
)
trainer.train()
```
by @qgallouedec in https://github.com/huggingface/trl/pull/5459, https://github.com/huggingface/trl/pull/5470, by @RudrenduPaul in https://github.com/huggingface/trl/pull/5493 and https://github.com/huggingface/trl/pull/5522, and by @casinca in https://github.com/huggingface/trl/pull/5484
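For readers unfamiliar with the markers: in a chat template, `{% generation %}` / `{% endgeneration %}` wrap the assistant's generated text so the tokenizer can mask everything else. The fragment below is a deliberately minimal, made-up illustration, not one of the shipped templates:

```jinja
{%- for message in messages -%}
{%- if message.role == 'assistant' -%}
{% generation %}{{ message.content }}{{ eos_token }}{% endgeneration %}
{%- else -%}
{{ '<|' + message.role + '|>' + message.content }}
{%- endif -%}
{%- endfor -%}
```

Tokens rendered inside the marker pair are the only ones counted toward the loss when `assistant_only_loss=True`.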
Agent training now supports a broader family of models via native tool-call response schemas:
A new supports_tool_calling() utility detects whether a tokenizer/processor can render a full tool-calling turn, and GRPOTrainer now validates tool support at initialization — raising a clear error upfront instead of failing cryptically mid-training.
by @qgallouedec in https://github.com/huggingface/trl/pull/5462, https://github.com/huggingface/trl/pull/5464, https://github.com/huggingface/trl/pull/5463, https://github.com/huggingface/trl/pull/5469 and https://github.com/huggingface/trl/pull/5454
environment_factory tool methods can now return multimodal content blocks (images + text) for VLM training. Previously, tool responses were always converted to str(result), discarding any visual information. Now tools can return content block lists with images, and the trainer handles them end-to-end through tokenization, generation, and the forward pass — including correct pixel_values plumbing.
```python
class ScreenshotEnv:
    def take_screenshot(self) -> list[dict]:
        return [
            {"type": "image", "image": self.browser.screenshot()},
            {"type": "text", "text": "Current page state"},
        ]
```
The OpenEnv browsergym.py example has been migrated to this pattern, and a new carla_vlm.py example demonstrates VLM training against CARLA with camera-image tool responses.
by @sergiopaniego in https://github.com/huggingface/trl/pull/5323 and https://github.com/huggingface/trl/pull/5437, and by @qgallouedec in https://github.com/huggingface/trl/pull/5448
accuracy_reward and reasoning_accuracy_reward now emit extra diagnostic columns (solution, gold_parsed, answer_parsed) via the log_extra callback introduced in v1.0.0. These show up in the rich completions table, making it much easier to debug why a reward was (or wasn't) assigned.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
from trl.rewards import accuracy_reward

dataset = load_dataset("trl-lib/DeepMath-103K", split="train")

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=accuracy_reward,
    args=GRPOConfig(log_completions=True),
    train_dataset=dataset,
)
trainer.train()
```
by @qgallouedec in https://github.com/huggingface/trl/pull/5308
- `KTOConfig` by @albertvillanova in https://github.com/huggingface/trl/pull/5477
- `prepare_multimodal_messages` by @albertvillanova in https://github.com/huggingface/trl/pull/5424
- `prepare_multimodal_messages` by @albertvillanova in https://github.com/huggingface/trl/pull/5475
- `pixel_position_ids` replaced with `image_position_ids` for Gemma 4 support by @qgallouedec in https://github.com/huggingface/trl/pull/5452
- `isinstance(part, dict)` checks in image extraction by @qgallouedec in https://github.com/huggingface/trl/pull/5439
- `_get_tool_suffix_ids` by @qgallouedec in https://github.com/huggingface/trl/pull/5440
- `prepare_deepspeed` by @albertvillanova in https://github.com/huggingface/trl/pull/5414
- `ImportError` with vllm-0.10.2 in OnlineDPO and OpenEnv by @albertvillanova in https://github.com/huggingface/trl/pull/5423
- `_get_per_token_logps_and_entropies` return type by @kashif in https://github.com/huggingface/trl/pull/5456
- `prepare_multimodal_messages` not normalizing empty string content for assistant/tool roles by @albertvillanova in https://github.com/huggingface/trl/pull/5496
- `pad_token_id` by @albertvillanova in https://github.com/huggingface/trl/pull/5487
- `huggingface-cli` references replaced with `hf` by @hanouticelina in https://github.com/huggingface/trl/pull/5486
- `truncation_mode` from experimental `truncate_dataset` by @albertvillanova in https://github.com/huggingface/trl/pull/5467
- `keep_end` truncation mode in `DPOConfig` and `SFTConfig`: will be removed in v2.0.0. Use `keep_start` instead. By @albertvillanova in https://github.com/huggingface/trl/pull/5465
- `pad_token` config parameter in `DPOConfig`, `SFTConfig`, and `RewardConfig`: will be removed in v2.0.0. Set `tokenizer.pad_token` directly on the `processing_class` instead. By @albertvillanova in https://github.com/huggingface/trl/pull/5480
- `trl.experimental.judges` module and all judge support removed from trainers. Judges were experimental, unused in practice, and llm-blender (backing `PairRMJudge`) was unmaintained and incompatible with transformers v5, actively blocking v5 adoption. Everything judges did can be achieved with `reward_funcs`. `OnlineDPOTrainer`, `NashMDTrainer`, and `XPOTrainer` are now unified on reward-model scoring only. By @qgallouedec in https://github.com/huggingface/trl/pull/5485
- `carla_vlm` OpenEnv example by @sergiopaniego in https://github.com/huggingface/trl/pull/5437
- `completion_only_loss` in SFT trainer docs by @RudrenduPaul in https://github.com/huggingface/trl/pull/5494
- `DistillationTrainer` by @cmpatino in https://github.com/huggingface/trl/pull/5500
- `prepare_multimodal_messages` by @albertvillanova in https://github.com/huggingface/trl/pull/5476
- `make precommit` to fix docstring style by @albertvillanova in https://github.com/huggingface/trl/pull/5436
- `test_rloo[fsdp2]`: replace non-deterministic xfail with skipif for transformers 5.4.0 by @albertvillanova in https://github.com/huggingface/trl/pull/5403
- `input_ids` or `inputs_embeds` by @albertvillanova in https://github.com/huggingface/trl/pull/5422
- `eval_strategy` by @SunMarc in https://github.com/huggingface/trl/pull/5426
- `TypeError: 'NoneType' object is not iterable` by @albertvillanova in https://github.com/huggingface/trl/pull/5427
- `TypeError: 'NoneType' object is not iterable` by @albertvillanova in https://github.com/huggingface/trl/pull/5438
- `test_rloo[fsdp2]` after transformers 5.5.0 release by @albertvillanova in https://github.com/huggingface/trl/pull/5442
- `environment_factory` for VLM training by @sergiopaniego in https://github.com/huggingface/trl/pull/5323
- `.jinja` files by @qgallouedec in https://github.com/huggingface/trl/pull/5459
- `supports_tool_calling` utility and validate tool support at init by @qgallouedec in https://github.com/huggingface/trl/pull/5462
- `{% generation %}` support to training chat templates by @qgallouedec in https://github.com/huggingface/trl/pull/5470
- `DistillationTrainer` for efficient on-policy distillation by @cmpatino in https://github.com/huggingface/trl/pull/5407
- `{% generation %}` markers for training chat template by @casinca in https://github.com/huggingface/trl/pull/5484
- `DistillationTrainer` by @cmpatino in https://github.com/huggingface/trl/pull/5501

Full Changelog: https://github.com/huggingface/trl/compare/v1.0.0...v1.1.0
Read our blog post for an overview of TRL v1.
Asynchronous GRPO decouples generation from the gradient update loop by offloading rollouts to an external vLLM server. Generation runs in parallel while training continues, eliminating idle GPU time and improving hardware utilization.
```python
from datasets import load_dataset
from trl.experimental.async_grpo import AsyncGRPOTrainer
from trl.rewards import accuracy_reward

dataset = load_dataset("trl-lib/DeepMath-103K", split="train")

trainer = AsyncGRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=accuracy_reward,
    train_dataset=dataset,
)
trainer.train()
```
by @qgallouedec in https://github.com/huggingface/trl/pull/5293
VESPO addresses training instability in off-policy RL caused by policy staleness, asynchronous updates, and train-inference mismatches. Rather than relying on heuristic token-level clipping (GRPO) or sequence-length normalization (GSPO), VESPO derives a principled reshaping kernel from a variational framework. In practice, this yields a smooth, asymmetric Gamma weighting function that gracefully suppresses extreme sequence-level importance weights without introducing length bias. It can be enabled via the loss_type parameter of GRPOConfig:
```python
from trl import GRPOConfig, GRPOTrainer

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=GRPOConfig(loss_type="vespo"),
    ...
)
```
by @casinca in https://github.com/huggingface/trl/pull/5199
DPPO is a new experimental trainer that replaces the standard PPO clipping mechanism with divergence constraints, providing more principled trust-region updates.
by @LeonEricsson in https://github.com/huggingface/trl/pull/5117
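To make the clipping-vs-divergence distinction concrete, here is a generic, illustrative contrast between the two approaches. This is not DPPO's actual loss (its exact objective lives in the PR above); the penalty shown uses the standard KL Bregman generator `r*log(r) - (r - 1)` purely as an example of a divergence constraint.

```python
import math

def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Standard PPO: hard-clip the importance ratio into [1 - eps, 1 + eps],
    # taking the pessimistic minimum of clipped and unclipped terms.
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

def divergence_penalized_objective(ratio, advantage, beta=0.1):
    # Divergence-constrained alternative (illustrative): keep the raw ratio
    # but subtract a smooth penalty that grows with how far the ratio is
    # from 1; zero penalty when the policies agree (ratio == 1).
    divergence = ratio * math.log(ratio) - (ratio - 1.0)
    return ratio * advantage - beta * divergence
```

The clipped objective goes flat outside the trust region (zero gradient), while the divergence penalty keeps a smooth gradient that pulls the ratio back toward 1, which is the "more principled trust-region update" the description refers to.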
SDPO is a new experimental trainer that augments on-policy RL with self-distillation from the model's own high-reward trajectories. Instead of using an external teacher, SDPO treats the current model conditioned on feedback as a self-teacher, distilling its feedback-informed predictions back into the policy.
```python
from trl.experimental import SDPOTrainer, SDPOConfig

config = SDPOConfig(
    output_dir="./results",
    num_generations=8,
    success_reward_threshold=1.0,
    use_successful_as_teacher=True,
)
trainer = SDPOTrainer(
    model="Qwen/Qwen2.5-Math-1.5B-Instruct",
    reward_funcs=[accuracy_reward],
    args=config,
    train_dataset=dataset,
)
trainer.train()
```
by @MengAiDev in https://github.com/huggingface/trl/pull/4935
Reward functions can return a dictionary of extra values (scalars or per-sample columns) that will be logged alongside the reward. This makes it easier to track intermediate signals without writing custom callbacks.
```python
def my_reward_fn(completions, answer, log_extra=None, log_metric=None, **kwargs):
    extracted = [extract_answer(c) for c in completions]
    rewards = [1.0 if e == a else 0.0 for e, a in zip(extracted, answer)]
    if log_extra:
        log_extra("golden_answer", list(answer))
        log_extra("extracted_answer", extracted)
    if log_metric:
        log_metric("accuracy", sum(rewards) / len(rewards))
    return rewards
```
<img width="1400" height="407" alt="image" src="https://github.com/user-attachments/assets/d345b0ac-0d3c-446f-9321-a26e73ee16b4" />
<img width="1353" height="673" alt="image" src="https://github.com/user-attachments/assets/b4c0302b-f69a-4715-9aad-278b4ad13299" />
by @manueldeprada in https://github.com/huggingface/trl/pull/5233
## VLLMClient.chat()

`VLLMClient.chat()` now supports tool calling, enabling agentic workflows directly through the vLLM client interface.
by @kansalaman in https://github.com/huggingface/trl/pull/4889
BFD packing is 35% faster. The "bfd-requeue" packing strategy has also been renamed to "bfd_split". See MIGRATION.md for details.
by @mariosasko in https://github.com/huggingface/trl/pull/5189
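For context, BFD is best-fit-decreasing bin packing: sort sequences longest-first, then place each one into the fullest bin that still fits it, opening a new bin otherwise. The sketch below is a generic illustration of the heuristic, not TRL's optimized implementation.

```python
def bfd_pack(lengths, capacity):
    """Best-fit-decreasing: pack sequence lengths into bins of `capacity`."""
    bins = []  # each bin is [remaining_space, [packed lengths]]
    for length in sorted(lengths, reverse=True):
        # Best fit: among bins that can hold this sequence, pick the one
        # with the least remaining space.
        best = min((b for b in bins if b[0] >= length),
                   key=lambda b: b[0], default=None)
        if best is None:
            bins.append([capacity - length, [length]])  # open a new bin
        else:
            best[0] -= length
            best[1].append(length)
    return [contents for _, contents in bins]
```

Packing sequences this way minimizes padding per batch, which is where the throughput gains of the packing strategies come from.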
The GKD/GOLD trainer now supports buffered rollout generation, decoupling generation from gradient updates for more efficient distillation. vLLM inference support has also been added to the base self-distillation trainer.
by @cmpatino in https://github.com/huggingface/trl/pull/5137 and https://github.com/huggingface/trl/pull/5388
A MIGRATION.md guide has been added covering all breaking changes when upgrading from TRL v0 to v1. If you're already on v0.29, the changes are minimal.
by @qgallouedec in https://github.com/huggingface/trl/pull/5255
- `vllm_mode` to `"colocate"` by @qgallouedec in https://github.com/huggingface/trl/pull/5255
- `truncation_mode` in SFT by @albertvillanova in https://github.com/huggingface/trl/pull/5306
- `max_length` in DPO VLM training by @albertvillanova in https://github.com/huggingface/trl/pull/5284
- `pad_to_multiple_of` to `GRPOTrainer` and `RLOOTrainer` by @czkkkkkk in https://github.com/huggingface/trl/pull/5180
- `print_prompt_completions_sample` to include reasoning content by @qgallouedec in https://github.com/huggingface/trl/pull/5327
- `pixel_position_ids` vision key by @qgallouedec in https://github.com/huggingface/trl/pull/5374
- `None` to `apply_chat_template` when it is an empty list by @rabinadk1 in https://github.com/huggingface/trl/pull/5380
- `accuracy_reward` crash when called from non-main thread by @qgallouedec in https://github.com/huggingface/trl/pull/5281
- `RewardFunc` type alias to reflect actual calling convention by @s-zx in https://github.com/huggingface/trl/pull/5246
- `prepare_multimodal_messages` to support `tool_calls` and `tool` role by @alvarobartt in https://github.com/huggingface/trl/pull/5212
- `AGENTS.md` by @qgallouedec in https://github.com/huggingface/trl/pull/5236
- `environment_factory` by @sergiopaniego in https://github.com/huggingface/trl/pull/5235
- `.ai` by @qgallouedec in https://github.com/huggingface/trl/pull/5268
- `prompts` in vLLM client and server by @qgallouedec in https://github.com/huggingface/trl/pull/5225
- `truncate_prompt_tokens` for vLLM 0.17.0 by @winglian in https://github.com/huggingface/trl/pull/5248
- `rollout_func` from `_generate_single_turn` to `_generate` by @qgallouedec in https://github.com/huggingface/trl/pull/5232
- `_generate_single_turn` by @qgallouedec in https://github.com/huggingface/trl/pull/5239
- `_generate_single_turn` by @qgallouedec in https://github.com/huggingface/trl/pull/5240
- `bfd-requeue` to `bfd_split` by @mariosasko in https://github.com/huggingface/trl/pull/5189
- `vllm_mode` to `"colocate"` and add v0→v1 migration guide by @qgallouedec in https://github.com/huggingface/trl/pull/5255
- `grpo_trainer.py`: Variational Sequence-Level Soft Policy Optimization (VESPO) by @casinca in https://github.com/huggingface/trl/pull/5199
- `hasattr` and `getattr` with defaults in `AGENTS.md` by @qgallouedec in https://github.com/huggingface/trl/pull/5294
- `RewardFunc` type annotation to allow `None` values in reward list by @qgallouedec in https://github.com/huggingface/trl/pull/5297
- `Json()` type for tool calling dataset format by @lhoestq in https://github.com/huggingface/trl/pull/5307
- GRPOTrainer/async: fix prefix EOS slicing for tool suffix (with Qwen3/3.5 type of chat templates) by @casinca in https://github.com/huggingface/trl/pull/5330
- `grpo_trainer.py` by @casinca in https://github.com/huggingface/trl/pull/5332
- `AGENTS.md` by @qgallouedec in https://github.com/huggingface/trl/pull/5280
- `TRACKIO_SPACE_ID` env var from all scripts by @sergiopaniego in https://github.com/huggingface/trl/pull/5365
- `pr_template_check.yml` by @qgallouedec in https://github.com/huggingface/trl/pull/5393
- `disable_config=True` from `generate` to `GenerationConfig` by @qgallouedec in https://github.com/huggingface/trl/pull/5384

Full Changelog: https://github.com/huggingface/trl/compare/v0.29.0...v1.0.0
VESPO addresses training instability in off-policy RL caused by policy staleness, asynchronous updates, and train-inference mismatches. Rather than relying on heuristic token-level clipping (GRPO) or sequence-length normalization (GSPO), VESPO derives a principled reshaping kernel from a variational framework. In practice, this yields a smooth, asymmetric Gamma weighting function that gracefully suppresses extreme sequence-level importance weights without introducing length bias. It can be enabled via the loss_type parameter of GRPOConfig:
```python
from trl import GRPOConfig, GRPOTrainer

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=GRPOConfig(loss_type="vespo"),
    ...
)
```
by @casinca in https://github.com/huggingface/trl/pull/5199
DPPO is a new experimental trainer that replaces the standard PPO clipping mechanism with divergence constraints, providing more principled trust-region updates.
by @LeonEricsson in https://github.com/huggingface/trl/pull/5117
Reward functions can return a dictionary of extra values (scalars or per-sample columns) that will be logged alongside the reward. This makes it easier to track intermediate signals without writing custom callbacks.
```python
def my_reward_fn(completions, answer, log_extra=None, log_metric=None, **kwargs):
    extracted = [extract_answer(c) for c in completions]
    rewards = [1.0 if e == a else 0.0 for e, a in zip(extracted, answer)]
    if log_extra:
        log_extra("golden_answer", list(answer))
        log_extra("extracted_answer", extracted)
    if log_metric:
        log_metric("accuracy", sum(rewards) / len(rewards))
    return rewards
```
<img width="1400" height="407" alt="image" src="https://github.com/user-attachments/assets/d345b0ac-0d3c-446f-9321-a26e73ee16b4" />
<img width="1353" height="673" alt="image" src="https://github.com/user-attachments/assets/b4c0302b-f69a-4715-9aad-278b4ad13299" />
by @manueldeprada in https://github.com/huggingface/trl/pull/5233
## VLLMClient.chat()

`VLLMClient.chat()` now supports tool calling, enabling agentic workflows directly through the vLLM client interface.
by @kansalaman in https://github.com/huggingface/trl/pull/4889
BFD packing is 35% faster. The "bfd-requeue" packing strategy has also been renamed to "bfd_split". See MIGRATION.md for details.
by @mariosasko in https://github.com/huggingface/trl/pull/5189
The GKD/GOLD trainer now supports buffered rollout generation, decoupling generation from gradient updates for more efficient distillation.
by @cmpatino in https://github.com/huggingface/trl/pull/5137
A MIGRATION.md guide has been added covering all breaking changes when upgrading from TRL v0 to v1. If you're already on v0.29, the changes are minimal.
by @qgallouedec in https://github.com/huggingface/trl/pull/5255
- `vllm_mode` to `"colocate"` by @qgallouedec in https://github.com/huggingface/trl/pull/5255
- `truncation_mode` in SFT by @albertvillanova in https://github.com/huggingface/trl/pull/5306
- `max_length` in DPO VLM training by @albertvillanova in https://github.com/huggingface/trl/pull/5284
- `pad_to_multiple_of` to `GRPOTrainer` and `RLOOTrainer` by @czkkkkkk in https://github.com/huggingface/trl/pull/5180
- `accuracy_reward` crash when called from non-main thread by @qgallouedec in https://github.com/huggingface/trl/pull/5281
- `RewardFunc` type alias to reflect actual calling convention by @s-zx in https://github.com/huggingface/trl/pull/5246
- `prepare_multimodal_messages` to support `tool_calls` and `tool` role by @alvarobartt in https://github.com/huggingface/trl/pull/5212
- `AGENTS.md` by @qgallouedec in https://github.com/huggingface/trl/pull/5236
- `prompts` in vLLM client and server by @qgallouedec in https://github.com/huggingface/trl/pull/5225
- `truncate_prompt_tokens` for vLLM 0.17.0 by @winglian in https://github.com/huggingface/trl/pull/5248
- `rollout_func` from `_generate_single_turn` to `_generate` by @qgallouedec in https://github.com/huggingface/trl/pull/5232
- `_generate_single_turn` by @qgallouedec in https://github.com/huggingface/trl/pull/5239
- `_generate_single_turn` by @qgallouedec in https://github.com/huggingface/trl/pull/5240
- `bfd-requeue` to `bfd_split` by @mariosasko in https://github.com/huggingface/trl/pull/5189
- `vllm_mode` to `"colocate"` and add v0→v1 migration guide by @qgallouedec in https://github.com/huggingface/trl/pull/5255
- `grpo_trainer.py`: Variational Sequence-Level Soft Policy Optimization (VESPO) by @casinca in https://github.com/huggingface/trl/pull/5199
- `hasattr` and `getattr` with defaults in `AGENTS.md` by @qgallouedec in https://github.com/huggingface/trl/pull/5294
- `RewardFunc` type annotation to allow `None` values in reward list by @qgallouedec in https://github.com/huggingface/trl/pull/5297
- `Json()` type for tool calling dataset format by @lhoestq in https://github.com/huggingface/trl/pull/5307

Full Changelog: https://github.com/huggingface/trl/compare/v0.29.0...v1.0.0rc1
- `prepare_multimodal_messages` to support `tool_calls` and `tool` role by @alvarobartt in https://github.com/huggingface/trl/pull/5212
- `prompts` in vLLM client and server by @qgallouedec in https://github.com/huggingface/trl/pull/5225
- `rollout_func` from `_generate_single_turn` to `_generate` by @qgallouedec in https://github.com/huggingface/trl/pull/5232
- `_generate_single_turn` by @qgallouedec in https://github.com/huggingface/trl/pull/5239
- `_generate_single_turn` by @qgallouedec in https://github.com/huggingface/trl/pull/5240

Full Changelog: https://github.com/huggingface/trl/compare/v0.29.0...v0.29.1
## environment_factory in GRPOTrainer

`GRPOTrainer` now accepts an `environment_factory` argument, allowing users to specify a custom environment class for training. This enables more flexible and diverse training scenarios by letting users define their own environments with specific dynamics and reward structures.
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

dataset = Dataset.from_dict({
    "prompt": [[{"role": "user", "content": f"Increment the counter by {i}."}] for i in range(1, 7)]
})

def reward_func(environments, **kwargs):
    return [env.counter for env in environments]

class IncrementEnv:
    def reset(self):
        self.counter = 0

    def increment(self, step: int) -> int:
        """
        Increment the internal counter.

        Args:
            step: Value to add to the counter.

        Returns:
            The updated counter value.
        """
        self.counter += step
        return self.counter

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=GRPOConfig(chat_template_kwargs={"enable_thinking": False}),
    train_dataset=dataset,
    reward_funcs=reward_func,
    environment_factory=IncrementEnv,
)
trainer.train()
```
by @qgallouedec in https://github.com/huggingface/trl/pull/5093
TRL introduces agent-native CLI Integration: trl-training, a first-class Agent Skill that exposes TRL’s training workflows (SFT, DPO, GRPO, etc.) in a structured, agent-readable format. The skill is packaged directly with the trl library and can be installed via the CLI:
```shell
# Install into the project's agent directory (default scope=project), by agent name: claude, codex, opencode
trl skills install trl-training --target <agent>
```
This enables AI agents to safely and reproducibly execute TRL training workflows using a well-defined interface.
Skills can be installed at the project or global scope, and support explicit targets and overwrite controls.
- `launch_args` for all trainers by @qgallouedec in https://github.com/huggingface/trl/pull/5059
- `num_labels` to 1 in causal model initialization for `RewardTrainer` by @qgallouedec in https://github.com/huggingface/trl/pull/5066
- `None` in `get_trackio_space_url()` to prevent errors by @qgallouedec in https://github.com/huggingface/trl/pull/5115
- `trl <command> --help` `TypeError` caused by unescaped `%` in `TrainingArguments` help strings by @albertvillanova in https://github.com/huggingface/trl/pull/5135
- `SFTTrainer` support for single-image data by @qgallouedec in https://github.com/huggingface/trl/pull/5132
- `grpo_trainer.md` by @casinca in https://github.com/huggingface/trl/pull/5047
- `RewardTrainer` collator from `chosen/rejected_input_ids` to `chosen/rejected_ids` by @qgallouedec in https://github.com/huggingface/trl/pull/5179
- `RewardTrainer` tests by @qgallouedec in https://github.com/huggingface/trl/pull/5060
- `get_training_chat_template` by @qgallouedec in https://github.com/huggingface/trl/pull/5108
- `train_dataset` is required by @albertvillanova in https://github.com/huggingface/trl/pull/5171
- `environment_factory` to `GRPOTrainer` by @qgallouedec in https://github.com/huggingface/trl/pull/5093

Full Changelog: https://github.com/huggingface/trl/compare/v0.28.0...v0.29.0
Support tool call data in is_conversational by @qgallouedec in https://github.com/huggingface/trl/pull/4923
Fix type hint in openenv/utils.py: fallback for no vLLM installed case by @Datta0 in https://github.com/huggingface/trl/pull/4868
Fix: undefined current_gradient_accumulation_steps by @qgallouedec in https://github.com/huggingface/trl/pull/4852
Fix import path for get_open_port based on vLLM version by @qgallouedec in https://github.com/huggingface/trl/pull/4883
device_map init consistency in GRPO/RLOO/KTO by @qgallouedec in https://github.com/huggingface/trl/pull/4909
Remove access to warnings_issued by @qgallouedec in https://github.com/huggingface/trl/pull/4960
Deprecate parameters in DPOConfig by @qgallouedec in https://github.com/huggingface/trl/pull/4969
Replace warmup_ratio with warmup_steps by @qgallouedec in https://github.com/huggingface/trl/pull/4983
Test distributed training for RewardTrainer, RLOOTrainer and GRPOTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4823
Transformers v5 release: extend xfail condition for TestGRPOTrainer.test_training_vlm_and_liger and update version checks by @qgallouedec in https://github.com/huggingface/trl/pull/4898
Add test for training with compute_metrics in SFTTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4950
Add test for tool call data in RewardTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4959
Add test for training with compute_metrics in RewardTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4958
Update CITATION.cff by @qgallouedec in https://github.com/huggingface/trl/pull/4856
Rearrange variable assignments in DataCollatorForVisionLanguageModeling by @qgallouedec in https://github.com/huggingface/trl/pull/4911
Fix help text formatting for max_length in RewardConfig and SFTConfig by @qgallouedec in https://github.com/huggingface/trl/pull/4910
Add validation for sync_ref_model in GRPOTrainer and RLOOTrainer when using PEFT models by @qgallouedec in https://github.com/huggingface/trl/pull/4912
⬆️ Bump dev version by @qgallouedec in https://github.com/huggingface/trl/pull/4835
Support triggering CI via push to ci-* branches by @albertvillanova in https://github.com/huggingface/trl/pull/4840
Revert CI hotfix pinning transformers 4.57.4 after tiny model regeneration by @albertvillanova in https://github.com/huggingface/trl/pull/4833
Use pytest-datadir in CI tests by @albertvillanova in https://github.com/huggingface/trl/pull/4836
Refactor KTO coordinated with DPO [c/N]: Remove ref_model_init_kwargs by @albertvillanova in https://github.com/huggingface/trl/pull/4837
Fix _patch_transformers_hybrid_cache for peft by @albertvillanova in https://github.com/huggingface/trl/pull/4844
Refactor KTO [4/N]: Remove unused padding_value by @albertvillanova in https://github.com/huggingface/trl/pull/4839
Remove unused padding_value from BCO by @albertvillanova in https://github.com/huggingface/trl/pull/4846
Fix CI with dev dependencies: Mark Qwen3-VL tests as xfail by @albertvillanova in https://github.com/huggingface/trl/pull/4851
Fix: undefined current_gradient_accumulation_steps by @qgallouedec in https://github.com/huggingface/trl/pull/4852
Remove deprecated parameters by @qgallouedec in https://github.com/huggingface/trl/pull/4847
Add Nash Learning from Human Feedback paper to paper index by @kansalaman in https://github.com/huggingface/trl/pull/4860
Use pytest-datadir for accelerate config files by @albertvillanova in https://github.com/huggingface/trl/pull/4861
Update OpenEnv dependency to new version for hf jobs scripts by @sergiopaniego in https://github.com/huggingface/trl/pull/4843
Update CITATION.cff by @qgallouedec in https://github.com/huggingface/trl/pull/4856
[GRPOTrainer]: Agent Training Supports Async Tool Calls by @pramodith in https://github.com/huggingface/trl/pull/4742
Enhance GRPO documentation with scaling notes by @javadtaghia in https://github.com/huggingface/trl/pull/4849
Add retry strategy to vLLM Client for increased robustness by @apalmas-saifh in https://github.com/huggingface/trl/pull/4845
Update generate_tiny_models.py: CohereForAI -> CohereLabs by @Michellehbn in https://github.com/huggingface/trl/pull/4877
Refactor KTO coordinated with DPO [e/N]: Remove label_pad_token_id by @albertvillanova in https://github.com/huggingface/trl/pull/4875
Refactor KTO coordinated with DPO [d/N]: Remove base_model_attribute_name by @albertvillanova in https://github.com/huggingface/trl/pull/4862
Fix type hint in openenv/utils.py: fallback for no vLLM installed case by @Datta0 in https://github.com/huggingface/trl/pull/4868
Update transformer version checks and documentation for lr_scheduler_kwargs workaround by @qgallouedec in https://github.com/huggingface/trl/pull/4876
fix(DeepSeek OPSM): passing correct (vLLM) logprobs by @casinca in https://github.com/huggingface/trl/pull/4857
Remove label_pad_token_id from experimental trainers by @albertvillanova in https://github.com/huggingface/trl/pull/4878
Fix SFT training for prompt-completion type and transformers v5 by @qgallouedec in https://github.com/huggingface/trl/pull/4880
Bugfix: Logprob drift in vLLM serving mode (compared to colocate mode) by @kdubovikov in https://github.com/huggingface/trl/pull/4873
Enable vLLM sleep mode for generation in Online DPO by @winglian in https://github.com/huggingface/trl/pull/4882
Test distributed training for RewardTrainer, RLOOTrainer and GRPOTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4823
Mark ZeRO 2 as xfail in distributed tests due to current failure by @qgallouedec in https://github.com/huggingface/trl/pull/4885
Fix import path for get_open_port based on vLLM version by @qgallouedec in https://github.com/huggingface/trl/pull/4883
Fix RewardTrainer's results not reproducible by @liyc-ai in https://github.com/huggingface/trl/pull/4887
GOLD training speed up by @141forever in https://github.com/huggingface/trl/pull/4888
Transformers v5 release: extend xfail condition for TestGRPOTrainer.test_training_vlm_and_liger and update version checks by @qgallouedec in https://github.com/huggingface/trl/pull/4898
Fix CI NotImplementedError for bfloat16 by @albertvillanova in https://github.com/huggingface/trl/pull/4902
Fix CI AssertionError: Parameter has not changed by @albertvillanova in https://github.com/huggingface/trl/pull/4904
Refactor vLLM generation [1/N]: Extract vLLM generation by @albertvillanova in https://github.com/huggingface/trl/pull/4700
Created new PTT integration docs as requested by @adityachallapally in https://github.com/huggingface/trl/pull/4907
Fix CI TypeError in llm-blender tests by @albertvillanova in https://github.com/huggingface/trl/pull/4919
Rearrange variable assignments in DataCollatorForVisionLanguageModeling by @qgallouedec in https://github.com/huggingface/trl/pull/4911
Fix help text formatting for max_length in RewardConfig and SFTConfig by @qgallouedec in https://github.com/huggingface/trl/pull/4910
device_map init consistency in GRPO/RLOO/KTO by @qgallouedec in https://github.com/huggingface/trl/pull/4909
Comment about overriding prediction_step in GRPOTrainer and RLOOTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4913
Remove gradient checkpointing option from various training scripts by @qgallouedec in https://github.com/huggingface/trl/pull/4905
docs: add DoRA (2402.09353) to Paper Index by @billycrapediem in https://github.com/huggingface/trl/pull/4892
Fix CI AssertionError: assert not True by @albertvillanova in https://github.com/huggingface/trl/pull/4921
Fix CI ValueError for 0 temperature by @albertvillanova in https://github.com/huggingface/trl/pull/4916
Fix extra EOS appended in DPO preprocessing for conversational data by @qgallouedec in https://github.com/huggingface/trl/pull/4908
Remove chat template setup in dpo_vlm.py by @qgallouedec in https://github.com/huggingface/trl/pull/4906
Update learning rate comments and add assertions for reference model parameters in GRPO and RLOO tests by @qgallouedec in https://github.com/huggingface/trl/pull/4914
Add validation for sync_ref_model in GRPOTrainer and RLOOTrainer when using PEFT models by @qgallouedec in https://github.com/huggingface/trl/pull/4912
Support tool call data in is_conversational by @qgallouedec in https://github.com/huggingface/trl/pull/4923
Set model dtype to float32 in tests of trainers by @albertvillanova in https://github.com/huggingface/trl/pull/4924
Require transformers<5 with PairRMJudge by @albertvillanova in https://github.com/huggingface/trl/pull/4926
Move VLLMClient to generation module by @albertvillanova in https://github.com/huggingface/trl/pull/4928
Set model dtype to float32 in experimental tests of trainers by @albertvillanova in https://github.com/huggingface/trl/pull/4925
Fix profiling of VLLMGeneration.sync_weights by @albertvillanova in https://github.com/huggingface/trl/pull/4931
Fix import statement for import_utils in vllm_client.py by @qgallouedec in https://github.com/huggingface/trl/pull/4932
Set default top_k to 0 in VLLMClient by @albertvillanova in https://github.com/huggingface/trl/pull/4927
[GRPO] Add parquet logging for completions with individual rewards by @qgallouedec in https://github.com/huggingface/trl/pull/4818
Fix SFTTrainer init logic: remove TrainingArguments.push_to_hub_token only for transformers < v5 by @albertvillanova in https://github.com/huggingface/trl/pull/4942
Remove ref_model_init_kwargs from experimental BCO by @albertvillanova in https://github.com/huggingface/trl/pull/4946
Update wordle.py example with masking of env tokens by @sergiopaniego in https://github.com/huggingface/trl/pull/4895
Fix PPO run_name parameter not taking effect by @mel3c in https://github.com/huggingface/trl/pull/4945
Minor fix docs style by @albertvillanova in https://github.com/huggingface/trl/pull/4953
Add test for training with compute_metrics in SFTTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4950
Remove access to warnings_issued by @qgallouedec in https://github.com/huggingface/trl/pull/4960
NeMo-Gym Integration by @cmunley1 in https://github.com/huggingface/trl/pull/4848
Add test for tool call data in RewardTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4959
Add test for training with compute_metrics in RewardTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4958
Remove max_prompt_length from experimental PRM by @albertvillanova in https://github.com/huggingface/trl/pull/4963
Remove max_prompt_length from experimental BCO by @albertvillanova in https://github.com/huggingface/trl/pull/4964
Remove max_prompt_length from experimental CPO by @albertvillanova in https://github.com/huggingface/trl/pull/4965
Remove max_prompt_length from experimental ORPO by @albertvillanova in https://github.com/huggingface/trl/pull/4966
Revert change in GRPO from NeMo-Gym Integration by @qgallouedec in https://github.com/huggingface/trl/pull/4970
Fix test_train_with_chat_template_kwargs by @qgallouedec in https://github.com/huggingface/trl/pull/4971
Remove padding_value from experimental CPO and use pad_token_id by @albertvillanova in https://github.com/huggingface/trl/pull/4962
Remove truncation from tokenizer calls if no max_length by @albertvillanova in https://github.com/huggingface/trl/pull/4972
Set specific OpenEnv version when installed by @sergiopaniego in https://github.com/huggingface/trl/pull/4978
Fix add_column in test_train_with_chat_template_kwargs by @albertvillanova in https://github.com/huggingface/trl/pull/4979
Support truncated completions in GRPO multi-turn training by @albertvillanova in https://github.com/huggingface/trl/pull/4976
Replace torch.allclose with torch.testing.assert_close by @qgallouedec in https://github.com/huggingface/trl/pull/4977
Simplify instructions of installation of OpenEnv by @sergiopaniego in https://github.com/huggingface/trl/pull/4980
Deprecate parameters in DPOConfig by @qgallouedec in https://github.com/huggingface/trl/pull/4969
[CI] Disallow installation of transformers 5.1.0 due to compatibility issues with DeepSpeed by @qgallouedec in https://github.com/huggingface/trl/pull/4982
Replace warmup_ratio with warmup_steps by @qgallouedec in https://github.com/huggingface/trl/pull/4983
Pin transformers!=5.1.0 in deepspeed extra due to incompatibility by @albertvillanova in https://github.com/huggingface/trl/pull/4985
Fix passing tokenizer in test_train_with_chat_template_kwargs by @albertvillanova in https://github.com/huggingface/trl/pull/4987
Update dataset configuration name in toolcall dataset loading by @qgallouedec in https://github.com/huggingface/trl/pull/4984
Use local variable instead of attribute in collator tests by @qgallouedec in https://github.com/huggingface/trl/pull/4957
Fix import of AutoModelForCausalLMWithValueHead from experimental by @albertvillanova in https://github.com/huggingface/trl/pull/4990
Assert chat_template is applied in test_train_with_chat_template_kwargs by @albertvillanova in https://github.com/huggingface/trl/pull/4991
Fix deprecation of DPOConfig.max_completion_length by @albertvillanova in https://github.com/huggingface/trl/pull/4992
Fix post_init warning stacklevel to 3 by @albertvillanova in https://github.com/huggingface/trl/pull/4993
Fix ZeRO-3 + PEFT + gradient checkpointing by @qgallouedec in https://github.com/huggingface/trl/pull/4951
Add GitHub Actions workflow for testing against Transformers branch by @qgallouedec in https://github.com/huggingface/trl/pull/4995
Add distributed smoke tests workflow for Transformers branch by @qgallouedec in https://github.com/huggingface/trl/pull/4996
Update NeMo-Gym to use env_mask by @cmunley1 in https://github.com/huggingface/trl/pull/4986
Update sampling mode to token level for safety by @sergiopaniego in https://github.com/huggingface/trl/pull/4989
perf: Qwen SAPO loss optimization by @casinca in https://github.com/huggingface/trl/pull/4956
Fix GRPO tool calling for corrupted tool calls by @akshayballal95 in https://github.com/huggingface/trl/pull/4890
Add sanitize_logprob function for NaN handling in vLLM log probabilities by @qgallouedec in https://github.com/huggingface/trl/pull/5001
[tests] Remove xfail for transformers version >= 5.0.0 due to upstream bug resolution by @qgallouedec in https://github.com/huggingface/trl/pull/5000
docs: add CGPO/Mixture of Judges (2409.20370) to Paper Index + link ref to AllTrueJudge by @nabin2004 in https://github.com/huggingface/trl/pull/5002
Filter CI SWIG deprecation warnings by @albertvillanova in https://github.com/huggingface/trl/pull/5004
Fix CI TRLExperimentalWarning in regular tests by @albertvillanova in https://github.com/huggingface/trl/pull/5007
Add support for nested_gather in OnlineDPOTrainer for transformers v5.2.0 and above by @qgallouedec in https://github.com/huggingface/trl/pull/4981
Fix CI FutureWarning: ref_model_init_kwargs is deprecated by @albertvillanova in https://github.com/huggingface/trl/pull/5009
Fix typo in DPO max_prompt_length deprecation warning message by @albertvillanova in https://github.com/huggingface/trl/pull/5020
Fix vision model prompt truncation bug in DPOTrainer by @albertvillanova in https://github.com/huggingface/trl/pull/5023
Pin transformers < 5 in judges extra due to incompatibility by @albertvillanova in https://github.com/huggingface/trl/pull/5024
Fix CI FutureWarning: generate_during_eval is deprecated by @albertvillanova in https://github.com/huggingface/trl/pull/5017
Fix typo in xfail test reason by @albertvillanova in https://github.com/huggingface/trl/pull/5028
Fix CI FutureWarning: rpo_alpha is deprecated by @albertvillanova in https://github.com/huggingface/trl/pull/5011
Fix CI FutureWarning: use_logits_to_keep is deprecated by @albertvillanova in https://github.com/huggingface/trl/pull/5013
Mark Qwen3VL tests as xfail for transformers 5.0.x by @albertvillanova in https://github.com/huggingface/trl/pull/5029
[CI] Silence PyTorch JIT and DataLoader deprecation warnings by @qgallouedec in https://github.com/huggingface/trl/pull/4999
Add length-unbiased GRPO loss (LUSPO) by @Haseebasif7 in https://github.com/huggingface/trl/pull/4988
Fix CI FutureWarning: tools is deprecated by @albertvillanova in https://github.com/huggingface/trl/pull/5015
Filter max_prompt_length UserWarning in all test cases by @albertvillanova in https://github.com/huggingface/trl/pull/5035
Fix CI FutureWarning: max_prompt_length is deprecated by @albertvillanova in https://github.com/huggingface/trl/pull/5019
Allow testing with transformers 5.1.0 via xfail marks by @albertvillanova in https://github.com/huggingface/trl/pull/5034
Rename AOT loss type 'aot_pair' to 'aot_unpaired' in DPO by @qgallouedec in https://github.com/huggingface/trl/pull/5038
Deprecate string usage for ref_model in DPOTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/5040
Deprecate FDivergenceType in DPOConfig; update f_divergence_type to use string values by @qgallouedec in https://github.com/huggingface/trl/pull/5039
Fix multiprocessing start method to 'spawn' for test compatibility with Python 3.12+ by @qgallouedec in https://github.com/huggingface/trl/pull/5036
Add Online Direct Preference Optimization section to paper index by @qgallouedec in https://github.com/huggingface/trl/pull/5037
Release: 0.28 by @albertvillanova in https://github.com/huggingface/trl/pull/5043
Full Changelog: https://github.com/huggingface/trl/compare/v0.27.0...v0.28.0
Remove access to warnings_issued by @qgallouedec in #4960
Full Changelog: https://github.com/huggingface/trl/compare/v0.27.1...v0.27.2
Fix: undefined current_gradient_accumulation_steps by @qgallouedec in https://github.com/huggingface/trl/pull/4852
Full Changelog: https://github.com/huggingface/trl/compare/v0.27.0...v0.27.1
vllm_group_port argument to GRPO, RLOO and OnlineDPO configuration by @pointerhacker in https://github.com/huggingface/trl/pull/4545
forward_masked_logits function by @qgallouedec in https://github.com/huggingface/trl/pull/4729
AutoModelForCausalLMWithValueHead and AutoModelForSeq2SeqLMWithValueHead to experimental by @qgallouedec in https://github.com/huggingface/trl/pull/4654
experimental.utils by @qgallouedec in https://github.com/huggingface/trl/pull/4667
DataCollatorForChatML to experimental.utils by @qgallouedec in https://github.com/huggingface/trl/pull/4668
add_bos_token_if_needed and add_eos_token_if_needed to experimental.utils by @qgallouedec in https://github.com/huggingface/trl/pull/4674
truncate_right and SIMPLE_CHAT_TEMPLATE to experimental.utils by @qgallouedec in https://github.com/huggingface/trl/pull/4677
prepare_model_for_kbit_training, enable_gradient_checkpointing, prepare_peft_model to experimental.utils by @qgallouedec in https://github.com/huggingface/trl/pull/4704
get_reward function to experimental.utils by @qgallouedec in https://github.com/huggingface/trl/pull/4683
num_generations_eval=1 in the calculation of the advantage by @qgallouedec in https://github.com/huggingface/trl/pull/4662
num_generations_eval is specified and different than num_generations by @apalmas-saifh in https://github.com/huggingface/trl/pull/4682
generation_config for tiny model uploads by @qgallouedec in https://github.com/huggingface/trl/pull/4643
qwen3_schema by @mattbui in https://github.com/huggingface/trl/pull/4709
HybridCache in Liger-Kernel with transformers v5 by @qgallouedec in https://github.com/huggingface/trl/pull/4798
args by @carlyou in https://github.com/huggingface/trl/pull/4801
grpo_trainer.md: Added Qwen SAPO details under Loss Types by @casinca in https://github.com/huggingface/trl/pull/4681
MergeModelCallback from import structure by @qgallouedec in https://github.com/huggingface/trl/pull/4664
ChatMlSpecialTokens by @qgallouedec in https://github.com/huggingface/trl/pull/4666
_win_rate_completions_df function from callbacks by @qgallouedec in https://github.com/huggingface/trl/pull/4672
DbrxForCausalLM support by @qgallouedec in https://github.com/huggingface/trl/pull/4799
compute_accuracy to PRM Trainer file by @qgallouedec in https://github.com/huggingface/trl/pull/4656
clone_chat_template to chat_template_utils by @qgallouedec in https://github.com/huggingface/trl/pull/4653
GeometricMixtureWrapper to nash_md_trainer.py by @qgallouedec in https://github.com/huggingface/trl/pull/4670
exact_div, print_rich_table, truncate_response, forward to ppo_trainer by @qgallouedec in https://github.com/huggingface/trl/pull/4676
OnPolicyConfig and PPOConfig and move OnlineTrainerState by @qgallouedec in https://github.com/huggingface/trl/pull/4671
AutoModelForCausalLMWithValueHead to test_ppo_trainer by @qgallouedec in https://github.com/huggingface/trl/pull/4678
generate and batch_generation to ppo_trainer.py by @qgallouedec in https://github.com/huggingface/trl/pull/4675
TrainerCallback from top-level transformers by @qgallouedec in https://github.com/huggingface/trl/pull/4694
top_k parameter in OnlineDPOTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4714
PeftModel + peft_config in trainers by @qgallouedec in https://github.com/huggingface/trl/pull/4713
TestParseResponse by @qgallouedec in https://github.com/huggingface/trl/pull/4736
GuidedDecodingParams with StructuredOutputsParams in sampling parameter configuration by @qgallouedec in https://github.com/huggingface/trl/pull/4797
Full Changelog: https://github.com/huggingface/trl/compare/v0.26.0...v0.27.0
Full Changelog: https://github.com/huggingface/trl/compare/v0.26.1...v0.26.2
num_generations_eval is specified and different than num_generations by @apalmas-saifh in https://github.com/huggingface/trl/pull/4682
Full Changelog: https://github.com/huggingface/trl/compare/v0.26.0...v0.26.1
GRPOTrainer now supports training agents using tools. This allows language models to interact with external functions or APIs during training.
```python
from datasets import Dataset

from trl import GRPOTrainer


def multiply(a: int, b: int) -> int:
    """
    Multiplies two integers.

    Args:
        a: The first integer.
        b: The second integer.

    Returns:
        The product of the two integers.
    """
    return a * b


dataset = Dataset.from_list(
    [
        {"prompt": [{"role": "user", "content": "What is 3 multiplied by 4?"}], "answer": 12},
        {"prompt": [{"role": "user", "content": "Calculate 7 times 8."}], "answer": 56},
        {"prompt": [{"role": "user", "content": "Find the product of 5 and 6."}], "answer": 30},
        {"prompt": [{"role": "user", "content": "What do you get when you multiply 9 by 9?"}], "answer": 81},
        {"prompt": [{"role": "user", "content": "Compute 12 multiplied by 11."}], "answer": 132},
        {"prompt": [{"role": "user", "content": "What is 15 times 14?"}], "answer": 210},
    ]
)


def accuracy(completions, answer, **kwargs):
    predictions = [completion[-1]["content"] for completion in completions]
    rewards = [float(str(ans) in pred) for pred, ans in zip(predictions, answer)]
    return rewards


trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    train_dataset=dataset,
    tools=[multiply],
    reward_funcs=accuracy,
)
trainer.train()
```
by @qgallouedec in https://github.com/huggingface/trl/pull/4300
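The accuracy reward above is ordinary Python: it takes the final assistant message of each completion and checks whether the string form of the expected answer appears in it. A standalone sketch of that logic (the example messages below are made up):

```python
# Standalone demonstration of the substring-based accuracy reward used above.
completions = [
    [{"role": "assistant", "content": "Using the multiply tool: 3 x 4 = 12."}],
    [{"role": "assistant", "content": "The answer is 54."}],
]
answer = [12, 56]

# Take the final assistant message and check for the expected answer as a substring.
predictions = [completion[-1]["content"] for completion in completions]
rewards = [float(str(ans) in pred) for pred, ans in zip(predictions, answer)]
print(rewards)  # [1.0, 0.0]
```

Note that substring matching is intentionally loose: any completion that mentions the correct number anywhere in its final message is rewarded.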
The CISPO loss was first introduced in the MiniMax-M1 paper; the ScaleRL paper subsequently showed that CISPO scales best in both performance and efficiency as models are trained for longer.
GRPOTrainer now supports the CISPO loss via loss_type="cispo" in the GRPOConfig.
by @pramodith in https://github.com/huggingface/trl/pull/4495
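For intuition, CISPO keeps a REINFORCE-style term but clips the importance ratio from above and treats it as a constant (stop-gradient), so far-off-policy tokens are attenuated rather than having their gradients dropped entirely. A hypothetical per-token sketch, not TRL's implementation (`eps_high` is an assumed hyperparameter name):

```python
import math


def cispo_token_loss(logprob_new, logprob_old, advantage, eps_high=3.0):
    """Sketch of a CISPO-style per-token loss: the importance ratio is clipped
    from above and, in a real implementation, detached from the graph so the
    gradient flows only through logprob_new."""
    ratio = math.exp(logprob_new - logprob_old)
    weight = min(ratio, 1.0 + eps_high)  # clipped IS weight (stop-gradient in practice)
    return -weight * advantage * logprob_new
```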
When the input model is quantized using bitsandbytes, vLLM will now also use quantization when in colocate mode.
by @sergiopaniego in https://github.com/huggingface/trl/pull/4496
TRL now includes a reasoning accuracy reward function:
```python
from trl.rewards import reasoning_accuracy_reward

solutions = [r"\frac{1}{3}", r"\frac{1}{3}", r"\frac{1}{3}"]
completions = [
    [
        {
            "role": "assistant",
            "content": r"<think> Reasoning content </think> The final answer is \boxed{\frac{1}{3}}",
        }
    ],
    [
        {
            "role": "assistant",
            "content": r"<think> Reasoning content </think> The final answer is \boxed{\frac{1}{2}}",
        }
    ],
    [
        {
            "role": "assistant",
            "content": r"<think> Reasoning content with partial answers \boxed{\frac{1}{3}} but no final answer",
        }
    ],
]
reasoning_accuracy_reward(completions, solutions)  # [1.0, 0.0, 0.0]
```
Like any other reward function, it can be used in GRPOTrainer or RLOOTrainer.
```python
from trl import GRPOTrainer
from trl.rewards import reasoning_accuracy_reward

trainer = GRPOTrainer(
    ...,
    reward_funcs=reasoning_accuracy_reward,
)
```
by @lewtun in https://github.com/huggingface/trl/pull/4563
shuffle_dataset option to SFTTrainer
You can now shuffle the dataset in SFTTrainer by setting the shuffle_dataset argument to True in SFTConfig. This is useful when consecutive samples in the dataset are highly similar.
```python
from trl import SFTTrainer, SFTConfig

training_args = SFTConfig(shuffle_dataset=True)
```
by @qgallouedec in https://github.com/huggingface/trl/pull/4564
Soft Adaptive Policy Optimization (SAPO) replaces hard clipping with a smooth, temperature-controlled gate that adaptively attenuates off-policy updates while preserving useful learning signals. Compared with GSPO and GRPO, SAPO is both sequence-coherent and token-adaptive: like GSPO it maintains sequence-level coherence, but its soft gating forms a continuous trust region that avoids GSPO's brittle hard clipping band.
You can now use SAPO loss in GRPOTrainer by setting loss_type="sapo" in the GRPOConfig.
by @pramodith in https://github.com/huggingface/trl/pull/4600
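Switching an existing GRPO setup to SAPO is then a one-line config change:

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    loss_type="sapo",
)
```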
num_generations_eval parameter for efficient evaluation by @mingxuetian in https://github.com/huggingface/trl/pull/4458
WinRateCallback to experimental by @qgallouedec in https://github.com/huggingface/trl/pull/4558
device_map and dtype to "auto" by default by @qgallouedec in https://github.com/huggingface/trl/pull/4509
flash-attn to flash-attn2 by @qgallouedec in https://github.com/huggingface/trl/pull/4514
device_map=None for DeepSpeed and add ZeRO paper (1910.02054) to Paper Index by @JenWei0312 in https://github.com/huggingface/trl/pull/4551
num_completions to num_generations by @pramodith in https://github.com/huggingface/trl/pull/4515
rnj_1_instruct notebook by @sergiopaniego in https://github.com/huggingface/trl/pull/4646
wandb_log_unique_prompts with log_unique_prompts by @taha-yassine in https://github.com/huggingface/trl/pull/4508
prepare_model_for_kbit_training by @sergiopaniego in https://github.com/huggingface/trl/pull/4457
shuffle_dataset option to SFTTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4564
lr_scheduler_kwargs dtype issue in Transformers 4.57.0 by @qgallouedec in https://github.com/huggingface/trl/pull/4513
Full Changelog: https://github.com/huggingface/trl/compare/v0.25.0...v0.26.0
lr_scheduler_kwargs dtype issue in Transformers 4.57.0 by @qgallouedec in https://github.com/huggingface/trl/pull/4513
Full Changelog: https://github.com/huggingface/trl/compare/v0.25.0...0.25.1
prepare_model_for_kbit_training to save VRAM by @sergiopaniego in https://github.com/huggingface/trl/pull/4335
add_generation_prompt to processor_kwargs in GRPO and RLOO trainer by @qgallouedec in https://github.com/huggingface/trl/pull/4361
trl.experimental by @qgallouedec in https://github.com/huggingface/trl/pull/4312
add_generation_prompt=True for conversational only by @qgallouedec in https://github.com/huggingface/trl/pull/4362
max_length explanation for VLM in online trainers by @sergiopaniego in https://github.com/huggingface/trl/pull/4220
_generate in GRPO/RLOO: Move forward_kwargs outside generation method by @qgallouedec in https://github.com/huggingface/trl/pull/4154
_generate in GRPO/RLOO: Insert images in the prompt by @qgallouedec in https://github.com/huggingface/trl/pull/4155
Full Changelog: https://github.com/huggingface/trl/compare/v0.24.0...v0.25.0
token_type_ids in DPOTrainer by @aweers in https://github.com/huggingface/trl/pull/4285
RichProgressCallback enhancement by @qgallouedec in https://github.com/huggingface/trl/pull/4245
chat_template_kwargs in apply_chat_template by @cmpatino in https://github.com/huggingface/trl/pull/4233
token_type_ids in DataCollatorForVisionLanguageModeling by @qgallouedec in https://github.com/huggingface/trl/pull/4190
RewardTrainer refactor by @qgallouedec in https://github.com/huggingface/trl/pull/4093
clone_chat_template by @qgallouedec in https://github.com/huggingface/trl/pull/4097
vllm_enable_sleep_mode to RLOO Trainer by @sergiopaniego in https://github.com/huggingface/trl/pull/4107
batchmean reduce op in GKDTrainer's loss by @cmpatino in https://github.com/huggingface/trl/pull/4105
get_high_entropy_mask by @akakakakakaa in https://github.com/huggingface/trl/pull/4041
DPOConfig.padding_value in favour of pad_token_id by @qgallouedec in https://github.com/huggingface/trl/pull/4006
BestOfNSampler by @qgallouedec in https://github.com/huggingface/trl/pull/4291
AlignPropTrainer, DDPOTrainer and IterativeSFTTrainer by @qgallouedec in https://github.com/huggingface/trl/pull/4068
trl.experimental Submodule by @August-murr in https://github.com/huggingface/trl/pull/4073
make_parser function in multiple scripts by @qgallouedec in https://github.com/huggingface/trl/pull/4050
set to list of tags by @qgallouedec in https://github.com/huggingface/trl/pull/4092
image_split_sizes in favour of image_grid_thw by @qgallouedec in https://github.com/huggingface/trl/pull/4111
backend parameter from GuidedDecodingParams by @qgallouedec in https://github.com/huggingface/trl/pull/4123
max_batch_tokens, num_blocks and block_size from generation kwargs by @qgallouedec in https://github.com/huggingface/trl/pull/4065
_generate by @qgallouedec in https://github.com/huggingface/trl/pull/4114
image_split_sizes in favour of image_grid_thw by @qgallouedec in https://github.com/huggingface/trl/pull/4156
_generate for GRPO with replay buffer by @qgallouedec in https://github.com/huggingface/trl/pull/4158
<Tip> with new markdown syntax by @qgallouedec in https://github.com/huggingface/trl/pull/4161
require_bitsandbytes by @qgallouedec in https://github.com/huggingface/trl/pull/4137
_generate in GRPO/RLOO: list of ints instead of tensors by @qgallouedec in https://github.com/huggingface/trl/pull/4146
trainer.tokenizer by trainer.processing_class by @qgallouedec in https://github.com/huggingface/trl/pull/4185
sft example script by @sergiopaniego in https://github.com/huggingface/trl/pull/4197
Optional from processing_class in PPOTrainer by @sergiopaniego in https://github.com/huggingface/trl/pull/4212
sft_video_llm example by @qgallouedec in https://github.com/huggingface/trl/pull/4214
trl-internal-testing/tiny-DbrxForCausalLM by @qgallouedec in https://github.com/huggingface/trl/pull/4213
_generate in GRPO/RLOO: Use prompt_ids from generation by @qgallouedec in https://github.com/huggingface/trl/pull/4152
_generate in GRPO/RLOO: Rely on generator for prompt truncation by @qgallouedec in https://github.com/huggingface/trl/pull/4153
Full Changelog: https://github.com/huggingface/trl/compare/v0.23.0...v0.24.0
- get_high_entropy_mask by @akakakakakaa in https://github.com/huggingface/trl/pull/4041

Full Changelog: https://github.com/huggingface/trl/compare/v0.23.0...v0.23.1
SFT now supports Context Parallelism (CP) for training large language models on very large sequences. You can now train with an arbitrarily long sequence length.
<img width="844" height="336" alt="Screenshot 2025-09-09 at 10 39 30 PM" src="https://github.com/user-attachments/assets/f1dfc349-440a-4e05-aac9-439a3c286f08" />
by @kashif in https://github.com/huggingface/trl/pull/3994
Dynamic Fine-Tuning (DFT) is now supported in TRL.
from trl import SFTConfig
training_args = SFTConfig(
loss_type="dft",
...
)
<img width="692" height="472" alt="Screenshot 2025-09-09 at 10 37 36 PM" src="https://github.com/user-attachments/assets/4ee2b4ab-7cc6-4578-bfac-c38124891510" />
by @qgallouedec in https://github.com/huggingface/trl/pull/4042
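Conceptually, DFT replaces plain cross-entropy with cross-entropy reweighted by the (detached) probability the model assigns to each target token, so confidently predicted tokens dominate the gradient. A minimal per-token sketch of that idea, not TRL's internal implementation:

```python
import torch
import torch.nn.functional as F

def dft_loss(logits, labels):
    """Token-level cross-entropy rescaled by the detached target-token
    probability, then averaged. Illustrative sketch of the DFT objective."""
    logprobs = F.log_softmax(logits, dim=-1)
    target_logprobs = logprobs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    ce = -target_logprobs                     # standard token-level CE
    weights = target_logprobs.exp().detach()  # p(y_t | context), no gradient
    return (weights * ce).mean()

logits = torch.randn(2, 5, 32)  # (batch, seq, vocab)
labels = torch.randint(0, 32, (2, 5))
loss = dft_loss(logits, labels)
```

Because each weight lies in (0, 1), the DFT loss is always bounded above by the plain cross-entropy on the same batch.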
Different implementations are used for rollout generation (vLLM) and model training. This implementation gap implicitly turns on-policy RL into off-policy RL. Truncated Importance Sampling (TIS) is a simple yet effective importance-sampling technique for handling this discrepancy, and it is now implemented in GRPO.
from trl import GRPOConfig
training_args = GRPOConfig(
...
use_vllm=True,
vllm_importance_sampling_correction=True, # default True
vllm_importance_sampling_cap=2.0, # hyper-parameter C
)
by @LeonEricsson in https://github.com/huggingface/trl/pull/3867
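The correction amounts to weighting each token's loss by the probability ratio between the training model and the vLLM rollout policy, truncated from above at the cap C (vllm_importance_sampling_cap). A rough sketch of the weight computation, under the assumption that per-token log-probabilities from both policies are available:

```python
import torch

def tis_weights(train_logprobs, vllm_logprobs, cap=2.0):
    """Truncated importance sampling: per-token ratio pi_train / pi_vllm,
    clamped from above at `cap` and detached so it only rescales the loss."""
    ratio = torch.exp(train_logprobs - vllm_logprobs)
    return torch.clamp(ratio, max=cap).detach()

train_lp = torch.tensor([-1.0, -2.0, -0.5])  # log-probs under training model
vllm_lp = torch.tensor([-1.1, -1.0, -3.0])   # log-probs under vLLM rollout
w = tis_weights(train_lp, vllm_lp, cap=2.0)
# the policy-gradient loss would then be (w * per_token_loss).mean()
```

Truncation keeps a single rare token with a huge ratio from dominating (or destabilizing) the update, at the cost of a small bias.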
Mixture of Experts (MoE) models require an auxiliary loss to ensure that the different experts are used evenly. This auxiliary loss is now supported in SFTTrainer.
from trl import SFTConfig
training_args = SFTConfig(
model_init_kwargs={"output_router_logits": True},
...
)
by @pramodith in https://github.com/huggingface/trl/pull/4012
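With output_router_logits=True, the model returns its load-balancing auxiliary loss, which is added (scaled by the model's aux-loss coefficient) to the main loss. For intuition, here is a minimal sketch of the Switch-Transformers-style balancing term for a top-1 router; this is illustrative, not TRL's or Transformers' actual code:

```python
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits, num_experts):
    """Switch-style aux loss: num_experts * sum_e f_e * P_e, where f_e is the
    fraction of tokens routed (top-1) to expert e and P_e is the mean router
    probability for expert e. Minimized when both are uniform."""
    probs = F.softmax(router_logits, dim=-1)                     # (tokens, experts)
    assignments = probs.argmax(dim=-1)                           # top-1 routing
    f = F.one_hot(assignments, num_experts).float().mean(dim=0)  # token fractions
    p = probs.mean(dim=0)                                        # mean router prob
    return num_experts * torch.sum(f * p)

router_logits = torch.randn(100, 4)  # 100 tokens, 4 experts
aux = load_balancing_loss(router_logits, num_experts=4)
```

The term reaches its minimum of 1.0 when tokens are spread evenly across experts, so adding it to the loss pushes the router toward balanced utilization.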
When running GRPO (or RLOO) with vLLM in colocated mode, the vLLM server consumes VRAM during the optimization step even though it is not being used. There is now an option to put the vLLM server to sleep during optimization to free up VRAM.
from trl import GRPOConfig
training_args = GRPOConfig(..., vllm_sleep_enabled=True)
by @edbeeching in https://github.com/huggingface/trl/pull/3968
You can now use vLLM server mode with OnlineDPOTrainer. Additionally, VLMs are now supported.
by @vaelev in https://github.com/huggingface/trl/pull/3783
The paper index has been significantly enhanced with the addition of 9+ new algorithm implementations, providing a more comprehensive resource for users.
by @behroozazarkhalili in https://github.com/huggingface/trl/pull/3990
- get_soft_overlong_punishment by @qgallouedec in https://github.com/huggingface/trl/pull/3972
- args.gradient_checkpointing = False instead of args = dataclasses.replace(args, gradient_checkpointing=False) by @qgallouedec in https://github.com/huggingface/trl/pull/3981
- torch_dtype to dtype everywhere by @sergiopaniego in https://github.com/huggingface/trl/pull/4000
- average_tokens_across_devices default replacement by @qgallouedec in https://github.com/huggingface/trl/pull/4039
- AutoModelForImageTextToText for DPO and Online DPO by @sergiopaniego in https://github.com/huggingface/trl/pull/4049
- SFTTrainer for compatibility with CP by @qgallouedec in https://github.com/huggingface/trl/pull/4038
- quantization_config=None by @qgallouedec in https://github.com/huggingface/trl/pull/4019

Full Changelog: https://github.com/huggingface/trl/compare/v0.22.0...v0.23.0
Full Changelog: https://github.com/huggingface/trl/compare/v0.22.1...v0.22.2
importlib.metadata by @qgallouedec

Full Changelog: https://github.com/huggingface/trl/compare/v0.22.0...v0.22.1