v0.10.2: Patch release
This patch release removes the hard requirement for `transformers>=4.25.1`, because some external libraries were downgrading `transformers` at startup in a way diffusers could not control.
🚨🚨🚨 Note that xformers is not automatically enabled anymore 🚨🚨🚨
The reasons for this are given here: https://github.com/huggingface/diffusers/pull/1640#discussion_r1044651551

We should not automatically enable xformers, for three reasons:
- It is not a PyTorch-like API; PyTorch does not enable all the fastest options available by default.
- It allocates GPU memory before the user even calls `.to("cuda")`.
- The behavior is not consistent with cases where xformers is not installed.
=> This means: if you previously relied on xformers being enabled automatically, please make sure to enable it explicitly now:
```python
import logging

from diffusers.utils.import_utils import is_xformers_available

logger = logging.getLogger(__name__)

unet = ...  # load unet

if is_xformers_available():
    try:
        unet.enable_xformers_memory_efficient_attention()
    except Exception as e:
        logger.warning(
            "Could not enable memory efficient attention. Make sure xformers is installed"
            f" correctly and a GPU is available: {e}"
        )
```
for the UNet (e.g. in DreamBooth training), or for the pipeline:
```python
import logging

from diffusers.utils.import_utils import is_xformers_available

logger = logging.getLogger(__name__)

pipe = ...  # load pipeline

if is_xformers_available():
    try:
        pipe.enable_xformers_memory_efficient_attention()
    except Exception as e:
        logger.warning(
            "Could not enable memory efficient attention. Make sure xformers is installed"
            f" correctly and a GPU is available: {e}"
        )
```