v1.3.0

April 26, 2026

Features

Qwen 3.6 integration

<img width="1536" height="1024" alt="ChatGPT Image Apr 26, 2026 at 11_16_18 AM" src="https://github.com/user-attachments/assets/789aad15-03b2-4ece-9828-d5c1dfed1f1e" />

TRL v1.3 ships training support for the new Qwen 3.6 family (Qwen/Qwen3.6-27B, Qwen/Qwen3.6-35B-A3B). Qwen 3.6 reuses the Qwen3_5Moe* architecture but ships a slightly different chat template (adds a preserve_thinking flag, tweaks tool-arg stringification), so exact-string template matching needed updates across the stack.
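
Since extra keyword arguments to apply_chat_template are forwarded to the Jinja template, the new flag can be exercised straight from the tokenizer. A minimal sketch (the flag name comes from the notes above; its default value is an assumption):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.6-27B")

# Template kwargs are passed through to the Jinja template, which is how the
# new preserve_thinking flag is reached.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is 3 * 7?"}],
    tokenize=False,
    add_generation_prompt=True,
    preserve_thinking=True,  # new in the Qwen 3.6 template
)
print(prompt)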

What landed:

  • Chat templates: qwen3_6.jinja (verbatim from upstream) and qwen3_6_training.jinja (prefix-preserving + {% generation %} markers for assistant_only_loss=True)
  • Response schema: routes to the existing qwen3_5_schema for tool-call parsing — output format unchanged
  • Tiny test models for VLM training: tiny-Qwen3_5MoeForConditionalGeneration-3.6 (with MoE-specific shrinking)
  • Test matrix updated across SFT/DPO/GRPO/RLOO test_(train|training)_vlm cases

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any conversational dataset works here; trl-lib/Capybara is just a stand-in.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3.6-27B",
    args=SFTConfig(assistant_only_loss=True),  # works out of the box
    train_dataset=dataset,
)
trainer.train()
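
Under the hood, assistant_only_loss leans on the {% generation %} markers: tokenizing with the assistant-token mask shows exactly which positions contribute to the loss. A sketch, assuming the tokenizer carries a marker-bearing template (during training, TRL swaps in qwen3_6_training.jinja for this):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.6-27B")

out = tokenizer.apply_chat_template(
    [
        {"role": "user", "content": "hi"},
        {"role": "assistant", "content": "hello"},
    ],
    return_dict=True,
    return_assistant_tokens_mask=True,  # requires {% generation %} markers
)
# 1 marks assistant tokens (trained on); 0 marks everything else (masked out).
print(out["assistant_masks"])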

Tool-calling agent training also works end-to-end via the existing Qwen 3.5 response schema:

from trl import GRPOConfig, GRPOTrainer

def multiply(a: int, b: int) -> int:
    """
    Multiplies two integers.

    Args:
        a: The first integer.
        b: The second integer.

    Returns:
        The product of the two integers.
    """
    return a * b

# Minimal placeholder reward so the example runs; swap in real task scoring.
def my_reward_fn(completions, **kwargs):
    return [0.0 for _ in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen3.6-27B",
    reward_funcs=my_reward_fn,
    args=GRPOConfig(...),
    train_dataset=dataset,  # prompt dataset
    tools=[multiply],
)
trainer.train()

by @qgallouedec in https://github.com/huggingface/trl/pull/5642

New experimental TPO trainer

<img width="711" height="177" alt="Screenshot 2026-04-26 at 11 37 28 AM" src="https://github.com/user-attachments/assets/6090212e-5c95-45c1-b137-87333d91daa6" />

A new experimental TPOTrainer implements Triple Preference Optimization, which augments DPO with a reference (gold) completion alongside chosen/rejected. The paper reports +7-19 points over DPO/SimPO on Arena-Hard, MixEval-Hard, MMLU-Pro and GSM8K, with less data.

from datasets import load_dataset
from trl.experimental.tpo import TPOConfig, TPOTrainer

trainer = TPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=TPOConfig(output_dir="Qwen3-0.6B-TPO"),
    train_dataset=load_dataset("tpo-alignment/triple-preference-ultrafeedback-40K", split="train"),
)
trainer.train()
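
For reference, each row pairs DPO's preference columns with an extra gold completion. A rough sketch of one example (column names are an assumption; check the dataset card):

# Hypothetical row shape: prompt/chosen/rejected as in DPO, plus the gold
# reference completion TPO adds. Field names are assumptions.
example = {
    "prompt": [{"role": "user", "content": "Summarize photosynthesis."}],
    "gold": [{"role": "assistant", "content": "Plants use light to make sugar from CO2 and water."}],
    "chosen": [{"role": "assistant", "content": "Photosynthesis converts sunlight into chemical energy."}],
    "rejected": [{"role": "assistant", "content": "Photosynthesis is how plants eat soil."}],
}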

by @kashif in https://github.com/huggingface/trl/pull/5506

Speculative decoding in trl vllm-serve

A new --speculative_config JSON flag exposes vLLM's speculative decoding directly through trl vllm-serve — works with native MTP heads (Qwen3 Next), Eagle3 drafts, etc. — without forking the serve script.

# Qwen3 native MTP (no extra draft model)
trl vllm-serve --model Qwen/Qwen3-Next-80B-A3B-Instruct \
    --speculative_config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 5}'

# Eagle3 draft model
trl vllm-serve --model Qwen/Qwen3-32B \
    --speculative_config '{"model": "RedHatAI/Qwen3-32B-speculator.eagle3", "method": "eagle3", "num_speculative_tokens": 3}'

by @Ofir408 in https://github.com/huggingface/trl/pull/5605

KTO ↔ DPO alignment: nearing the finish line

Twelve more alignment PRs landed this cycle, bringing KTOTrainer essentially into structural parity with DPOTrainer. Notable shifts include moving completion assembly out of _prepare_dataset into a new DataCollatorForKTO, collapsing the two-pass tokenization into a single pass, removing BOS/EOS handling, and adding support for IterableDataset and dict eval_dataset. The goal of promoting KTO out of experimental and into stable is now within reach for an upcoming release.
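
For example, the dict form of eval_dataset now behaves as it does for DPOTrainer, with each entry evaluated and logged separately. A sketch, assuming KTO's import path mirrors the experimental TPO example above and using the trl-lib/kto-mix-14k dataset from the KTO docs (split names are assumptions):

from datasets import load_dataset
from trl.experimental.kto import KTOConfig, KTOTrainer  # assumed path while KTO is experimental

eval_ds = load_dataset("trl-lib/kto-mix-14k", split="test")

trainer = KTOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=KTOConfig(output_dir="Qwen3-0.6B-KTO"),
    train_dataset=load_dataset("trl-lib/kto-mix-14k", split="train"),
    # Dict eval_dataset: each named split gets its own metrics prefix.
    eval_dataset={"full": eval_ds, "head": eval_ds.select(range(128))},
)
trainer.train()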

PRs (all by @albertvillanova): #5582, #5578, #5579, #5583, #5587, #5599, #5601, #5600, #5606, #5612, #5632, #5635

More {% generation %} training chat templates

Three more model families gain training-compatible chat templates with {% generation %} markers, so assistant_only_loss=True works out of the box.

Other

Fixes

Documentation and Examples

CI

What's Changed

New Contributors

Full Changelog: https://github.com/huggingface/trl/compare/v1.2.0...v1.3.0
