Release v1.0.13
- Add support to train and validate in pure bfloat16 or float16 (see the sketch after this list)
- wandb project name arg added by https://github.com/caojiaolong, use arg.experiment for name
- Add torch.utils.checkpoint.checkpoint() wrapper in timm.models that defaults to use_reentrant=False, unless TIMM_REENTRANT_CKPT=1 is set in the env (wrapper sketch below)
- convnext_nano 384x384 ImageNet-12k pretrain & fine-tune: https://huggingface.co/models?search=convnext_nano%20r384
- vit_large_patch14_clip_224.dfn2b_s39b weights added
- Fix RmsNorm layer & fn to match the standard formulation, using the PT 2.5 impl when possible. The old impl moved to a SimpleNorm layer; it's LN w/o centering or bias. Only two timm models were using it, and they have been updated (norm sketch below)
- Add cache_dir arg for model creation (usage example below)
- Pass trust_remote_code through to the HF datasets wrapper (example below)
- inception_next_atto model added by its creator
- Weights that previously needed load-time remapping were given their own HF Hub instances so they work with hf-hub: based loading, and thus will work with the new Transformers TimmWrapperModel
- Quickstart doc by @ariG23498 in https://github.com/huggingface/pytorch-image-models/pull/2381

Full Changelog: https://github.com/huggingface/pytorch-image-models/compare/v1.0.12...v1.0.13
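A minimal sketch of what pure (non-AMP) low-precision inference looks like, assuming you cast both the model and its inputs yourself; the train/validate script flags are not shown in these notes, so this is illustrative only and the model name is an arbitrary example:

```python
import torch
import timm

# Pure bfloat16: cast the model and the inputs to the same low-precision
# dtype end to end, rather than autocasting per-op with AMP.
model = timm.create_model('convnext_nano', pretrained=False)
model = model.to(dtype=torch.bfloat16).eval()

x = torch.randn(1, 3, 224, 224, dtype=torch.bfloat16)
with torch.inference_mode():
    out = model(x)
print(out.dtype)  # torch.bfloat16
```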
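A sketch of how the described checkpoint wrapper could behave; the function name and placement are assumptions, only the env-var-controlled default comes from the note:

```python
import os
import torch.utils.checkpoint


def checkpoint(function, *args, **kwargs):
    # Hypothetical wrapper: default to use_reentrant=False (the
    # non-reentrant implementation), unless TIMM_REENTRANT_CKPT=1
    # is set in the environment.
    if 'use_reentrant' not in kwargs:
        kwargs['use_reentrant'] = os.environ.get('TIMM_REENTRANT_CKPT') == '1'
    return torch.utils.checkpoint.checkpoint(function, *args, **kwargs)
```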
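A functional sketch of the distinction the RmsNorm/SimpleNorm item describes (illustrative, not the timm source): standard RMSNorm normalizes by the root mean square, while "LN w/o centering or bias" keeps LayerNorm's mean-centered variance statistic but never centers x or adds a bias.

```python
import torch


def rms_norm(x, weight, eps=1e-6):
    # Standard RMSNorm formulation: x * rsqrt(mean(x^2) + eps) * weight.
    # timm now defers to PyTorch's built-in impl when possible.
    ms = x.pow(2).mean(dim=-1, keepdim=True)
    return x * torch.rsqrt(ms + eps) * weight


def simple_norm(x, weight, eps=1e-6):
    # The old impl, now SimpleNorm: the statistic is LayerNorm's
    # mean-centered variance, but x itself is not centered and no
    # bias is added.
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    return x * torch.rsqrt(var + eps) * weight
```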
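Usage sketches for the cache_dir arg and hf-hub: based loading; the model names and Hub repo tag are examples and may differ from what is actually published:

```python
import timm

# cache_dir: redirect where pretrained weights are downloaded and
# cached for this creation call.
model = timm.create_model(
    'convnext_nano',          # example model name
    pretrained=True,
    cache_dir='/tmp/timm-cache',
)

# hf-hub: prefixed names load config + weights straight from a Hugging
# Face Hub repo; this loading path is what the new Transformers
# TimmWrapperModel builds on.
model = timm.create_model('hf-hub:timm/convnext_nano.in12k_ft_in1k', pretrained=True)
```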
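A sketch of passing trust_remote_code through the HF datasets wrapper, assuming extra kwargs to create_dataset are forwarded to the underlying datasets loader; the dataset name is a placeholder:

```python
from timm.data import create_dataset

# The hfds/ prefix selects the Hugging Face datasets wrapper;
# trust_remote_code is forwarded to the underlying loader.
dataset = create_dataset(
    'hfds/imagenet-1k',      # placeholder dataset name
    root=None,
    split='train',
    trust_remote_code=True,
)
```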