
v0.11.0

September 19, 2024

We are excited to introduce the new v0.11.0 release, with many new features and post-training algorithms. The highlights are as follows:

New post-training methods

Generalized Knowledge Distillation

<img width="992" alt="Screenshot 2024-09-19 at 10 01 02" src="https://github.com/user-attachments/assets/97afd65d-1a2c-484b-b6dd-b02a2cbe6430">

Generalized Knowledge Distillation (GKD) is a post-training method from Google DeepMind that extends standard knowledge distillation by allowing the student to generate outputs during training and receive online feedback from the teacher. It consistently outperforms SFT and in some cases enables the student model to match the performance of the teacher, but with far fewer parameters.

To train models with this method, check out the GKDTrainer.
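As a rough sketch of what a GKD run can look like (the model names, dataset, and hyperparameter values below are illustrative placeholders, and the exact `GKDTrainer` signature may vary between TRL versions):

```python
def train_with_gkd():
    # Hypothetical sketch: distill a large teacher into a smaller student with GKD.
    # Model names, dataset, and hyperparameters are placeholders, not recommendations.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import GKDConfig, GKDTrainer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
    teacher_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

    train_dataset = load_dataset("your-org/your-chat-dataset", split="train")

    args = GKDConfig(
        output_dir="gkd-model",
        lmbda=0.5,  # fraction of batches where the student generates its own outputs
        beta=0.5,   # interpolation coefficient for the generalized Jensen-Shannon divergence
    )
    trainer = GKDTrainer(
        model=model,
        teacher_model=teacher_model,
        args=args,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
    )
    trainer.train()
```

The `lmbda` knob is what makes the method "online": at `lmbda=0`, training reduces to supervised distillation on the dataset, while higher values mix in student-generated sequences scored by the teacher.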

Exploratory Preference Optimization

<img width="1224" alt="Screenshot 2024-09-19 at 10 13 27" src="https://github.com/user-attachments/assets/36decb24-ef01-41f1-84e8-53b491eb6c86">

Exploratory Preference Optimization is an online post-training method from researchers at Microsoft, MIT, and the University of Wisconsin that extends DPO to incorporate online feedback from reward models or LLM judges. It is similar to online DPO, but has a slightly different theoretical basis concerning sample efficiency.

To train models with this method, check out the XPOTrainer.
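A minimal sketch of an XPO run might look like the following (model and dataset names are placeholders, and the exact `XPOTrainer` signature may vary between TRL versions):

```python
def train_with_xpo():
    # Hypothetical sketch: online preference optimization against a reward model.
    # Model names, dataset, and hyperparameters are placeholders, not recommendations.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoModelForSequenceClassification,
        AutoTokenizer,
    )
    from trl import XPOConfig, XPOTrainer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
    reward_model = AutoModelForSequenceClassification.from_pretrained(
        "your-org/your-reward-model", num_labels=1
    )

    # XPO trains on prompts only; completions are sampled online during training.
    train_dataset = load_dataset("your-org/your-prompt-dataset", split="train")

    args = XPOConfig(output_dir="xpo-model")
    trainer = XPOTrainer(
        model=model,
        reward_model=reward_model,
        args=args,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
    )
    trainer.train()
```

Note the prompt-only dataset: unlike offline DPO, which needs pre-collected chosen/rejected pairs, the online trainers generate completions during training and score them with the reward model.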

Nash Learning with Human Feedback

<img width="476" alt="Screenshot 2024-09-19 at 10 32 04" src="https://github.com/user-attachments/assets/8e68263f-bf5a-4f68-b451-110c78e27bb6">

Nash Learning with Human Feedback is a novel post-training method from Google DeepMind that uses pairwise preference models, which are conditioned on two inputs instead of the single one used in reward models. These preference models are then used to train a policy that consistently produces responses preferred over those from competing policies, thus approximating a Nash equilibrium (i.e. a two-player game where actions are responses and payoffs are given by the preference model).

To train models with this method, check out the NashMDTrainer.
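A Nash-MD run can be sketched along the same lines as the other online trainers (model and dataset names are placeholders, and the exact `NashMDTrainer` signature may vary between TRL versions):

```python
def train_with_nash_md():
    # Hypothetical sketch: Nash-MD training driven by a pairwise preference signal.
    # Model names, dataset, and hyperparameters are placeholders, not recommendations.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoModelForSequenceClassification,
        AutoTokenizer,
    )
    from trl import NashMDConfig, NashMDTrainer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
    reward_model = AutoModelForSequenceClassification.from_pretrained(
        "your-org/your-reward-model", num_labels=1
    )
    train_dataset = load_dataset("your-org/your-prompt-dataset", split="train")

    args = NashMDConfig(
        output_dir="nash-md-model",
        mixture_coef=0.5,  # weight of the reference policy in the geometric mixture
    )
    trainer = NashMDTrainer(
        model=model,
        reward_model=reward_model,
        args=args,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
    )
    trainer.train()
```

The mixture coefficient controls how strongly the sampled "opponent" is pulled toward the reference policy, which is what regularizes the two-player game toward a usable equilibrium.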

New trainer features

Deprecations 🚨

  • The PPOTrainer is deprecated in favour of PPOv2Trainer to provide a consistent API across TRL's trainers. It will be removed in v0.12.0. By @qgallouedec in https://github.com/huggingface/trl/pull/2016
  • The RichProgressCallback has been removed from the example scripts as it caused a variety of problems with logging in distributed environments. You can still use it by adding it manually to the trainer callbacks. By @lewtun in https://github.com/huggingface/trl/pull/2053

Bugfixes and improvements

New Contributors

Full Changelog: https://github.com/huggingface/trl/compare/v0.9.6...v0.11.0
