v1.11.0: TE MXFP8, FP16/BF16 with MPS, Python 3.10

TE MXFP8 support

We've added support for MXFP8 in our TransformerEngine integration. To use it, set `use_mxfp8_block_scaling` in `fp8_config`. See the NVIDIA docs [here](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html#MXFP8-and-block-scaling).
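
As a sketch, the option lives under `fp8_config` in the accelerate config file; the surrounding keys shown here are illustrative and may differ in your setup:

```yaml
# Illustrative accelerate config fragment -- the key layout is an assumption,
# only use_mxfp8_block_scaling itself is named by these release notes.
mixed_precision: fp8
fp8_config:
  backend: TE                    # TransformerEngine backend
  use_mxfp8_block_scaling: true  # enable MXFP8 block scaling
```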

FP16/BF16 Training for MPS devices

BF16 and FP16 support for MPS devices is finally here. You can now pass `mixed_precision="fp16"` or `"bf16"` when training on a Mac (fp16 requires torch 2.8 and bf16 requires torch 2.6).
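
As a torch-only illustration of the underlying capability (not the Accelerate API itself), here is a bf16 forward/backward pass that uses the `mps` device when available and falls back to CPU otherwise:

```python
import torch

# Pick the MPS device when available (bf16 on MPS needs torch >= 2.6),
# otherwise fall back to CPU, where bf16 is also supported.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(8, 2).to(device=device, dtype=torch.bfloat16)
x = torch.randn(4, 8, device=device, dtype=torch.bfloat16)

loss = model(x).float().pow(2).mean()  # upcast for a stable reduction
loss.backward()
```

With Accelerate itself, the equivalent is constructing `Accelerator(mixed_precision="bf16")` and running the model through `accelerator.prepare(...)`.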

FSDP updates

The following PRs add support for `ignored_params` and `no_sync()`, respectively, for FSDPv2:

Mixed precision can now be passed as a dtype string via the `accelerate` CLI flag or `fsdp_config` in the accelerate config file:
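
A hypothetical config-file sketch (the exact key name for the FSDP mixed-precision entry is an assumption; check the output of `accelerate config` for your version):

```yaml
# Hypothetical fsdp_config fragment -- key names besides fsdp_version
# are assumptions, not taken from these release notes.
distributed_type: FSDP
fsdp_config:
  fsdp_version: 2
  mixed_precision_policy: bf16  # dtype string instead of a torch.dtype object
```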

Nd-parallel updates

Some minor updates concerning nd-parallelism.

Bump to Python 3.10

We've dropped support for Python 3.9 as it reached EOL in October 2025.

Lots of minor fixes:

New Contributors

Full Changelog: https://github.com/huggingface/accelerate/compare/v1.10.1...v1.11.0
