Hugging Face/huggingface_hub

May 13, 2025
[v0.31.2] Hot-fix: make `hf-xet` optional again and bump the min version of the package

Patch release to make hf-xet optional. More context in #3079 and #3078.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.31.1...v0.31.2

May 6, 2025
[v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai") # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)
  • [Inference Providers] LoRAs with Replicate by @hanouticelina in #3054
  • [Inference Providers] Support for LoRAs with fal by @hanouticelina in #3005

⚙️ auto mode for provider selection

You can now automatically select a provider for a model using auto mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto") 

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

⚠️ Note: This is now the default value for the provider argument. Previously, the default was hf-inference, so this change may be a breaking one if you're not specifying the provider name when initializing InferenceClient or AsyncInferenceClient.

  • PoC: provider="auto" by @julien-c in #3011

🧠 Embeddings support with Sambanova (feature-extraction)

We added support for feature extraction (embeddings) inference with sambanova provider.

  • [Inference Providers] sambanova supports feature extraction by @hanouticelina in #3037

⚡ Other Inference features

The HF Inference API provider is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity. Cold-starting arbitrary models from the Hub is no longer supported — if a model isn't already deployed, it won't be available via the HF Inference API.

Miscellaneous improvements and some bug fixes:

  • Fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction' by @Wauplin in #2968
  • fix text generation by @hanouticelina in #2982
  • Fix HfInference conversational by @Wauplin in #2985
  • Fix 'sentence_similarity' on InferenceClient by @tomaarsen in #3004
  • Update inference types (automated commit) by @HuggingFaceInfra in #3015
  • update text to speech input by @hanouticelina in #3025
  • [Inference Providers] fix inference with URL endpoints by @hanouticelina in #3041
  • Update inference types (automated commit) by @HuggingFaceInfra in #3051

✅ Of course, all of those inference changes are available in the AsyncInferenceClient async equivalent 🤗

🚀 Xet

Thanks to @bpronan's PR, Xet now supports uploading byte arrays:

from huggingface_hub import upload_file

file_content = b"my-file-content"
repo_id = "username/model-name" # `hf-xet` should be installed and Xet should be enabled for this repo

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="file.txt",
    repo_id=repo_id,
)
  • Xet Upload with byte array by @bpronan in #3035

Additionally, we’ve added documentation for environment variables used by hf-xet to optimize file download/upload performance — including options for caching (HF_XET_CHUNK_CACHE_SIZE_BYTES), concurrency (HF_XET_NUM_CONCURRENT_RANGE_GETS), high-performance mode (HF_XET_HIGH_PERFORMANCE), and sequential writes (HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY).

  • Docs for xet env variables by @rajatarya in #3024
  • Minor xet changes: HF_HUB_DISABLE_XET flag, suppress logger.info by @rajatarya in #3039
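These are ordinary environment variables; the values below are arbitrary examples for illustration, not recommended defaults:

```shell
# Cap the local chunk cache (value in bytes; example only)
export HF_XET_CHUNK_CACHE_SIZE_BYTES=10000000000

# Enable high-performance mode (maximizes transfer throughput at the
# expense of other system resources)
export HF_XET_HIGH_PERFORMANCE=1

# Write reconstructed files to disk sequentially
export HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY=1
```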

Miscellaneous improvements:

  • Removing workaround for deprecated refresh route headers by @bpronan in #2993

✨ HF API

We added HTTP download support for files larger than 50GB — enabling more reliable handling of large file downloads.

  • Add HTTP Download support for files > 50GB by @rajatarya in #2991

We also added dynamic batching to upload_large_folder, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration — improving performance and reducing the risk of hitting the commits rate limit on large repositories.

  • Fix dynamic commit size by @maximizemaxwell in #3016
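The call itself is unchanged, since batching now adapts internally. A sketch with placeholder ids, guarded so it only attempts an upload when the folder actually exists:

```python
from pathlib import Path

from huggingface_hub import HfApi

api = HfApi()
folder = Path("path/to/local/folder")  # placeholder path

if folder.is_dir():
    # Commit sizes now adapt to commit success and duration automatically.
    api.upload_large_folder(
        repo_id="username/my-large-dataset",  # placeholder repo id
        repo_type="dataset",
        folder_path=folder,
    )
```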

We added support for new arguments when creating or updating Hugging Face Inference Endpoints.

  • add route payload to deploy Inference Endpoints by @Vaibhavs10 in #3013
  • Add the 'env' parameter to creating/updating Inference Endpoints by @tomaarsen in #3045

💔 Breaking changes

  • The default value of the provider argument in InferenceClient and AsyncInferenceClient is now "auto" instead of "hf-inference" (HF Inference API). This means provider selection will now follow your preferred order set in your inference provider settings. If your code relied on the previous default ("hf-inference"), you may need to update it explicitly to avoid unexpected behavior.
  • HF Inference API Routing Update: The inference URL path for feature-extraction and sentence-similarity tasks has changed from https://router.huggingface.co/hf-inference/pipeline/{task}/{model} to https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}.
  • [inference] Necessary breaking change: nest task-specific route inside of model route by @julien-c in #3044
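Spelled out as plain string construction (illustrative only; the client builds this URL internally):

```python
model = "sentence-transformers/all-MiniLM-L6-v2"
task = "feature-extraction"

# Old layout (no longer served):
old_url = f"https://router.huggingface.co/hf-inference/pipeline/{task}/{model}"

# New layout: the task-specific route is nested inside the model route.
new_url = f"https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}"

print(new_url)
```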

🛠️ Small fixes and maintenance

😌 QoL improvements

  • Unlist TPUs from SpaceHardware by @Wauplin in #2973
  • dev(narugo): disable hf_transfer when custom 'Range' header is assigned by @narugo1992 in #2979
  • Improve error handling for invalid eval results in model cards by @hanouticelina in #3000
  • Handle Rate Limits in Pagination with Automatic Retries by @Weyaxi in #2970
  • Add example for downloading files in subdirectories, related to https://github.com/huggingface/huggingface_hub/pull/3014 by @mixer3d in #3023
  • Super-micro-tiny-PR to allow for direct copy-paste :) by @fracapuano in #3030
  • Migrate to logger.warning usage by @emmanuel-ferdman in #3056

🐛 Bug and typo fixes

  • Retry on transient error in download workflow by @Wauplin in #2976
  • fix snapshot download behavior in offline mode when downloading to a local dir by @hanouticelina in #3009
  • fix docstring by @hanouticelina in #3040
  • fix default CACHE_DIR by @albertcthomas in #3050

🏗️ internal

  • fix: fix test_get_hf_file_metadata_from_a_lfs_file as since xet migration by @XciD in #2972
  • A better security-wise style bot GH Action by @hanouticelina in #2914
  • prepare for next release by @hanouticelina in #2983
  • Bump hf_xet min version to 1.0.0 + make it required dep on 64 bits by @hanouticelina in #2971
  • fix permissions for style bot by @hanouticelina in #3012
  • remove (inference only) VCR tests by @hanouticelina in #3021
  • remove test by @hanouticelina in #3028

Community contributions

The following contributors have made significant changes to the library over the last release:

  • @bpronan
    • Removing workaround for deprecated refresh route headers (#2993)
    • Xet Upload with byte array (#3035)
  • @tomaarsen
    • Fix 'sentence_similarity' on InferenceClient (#3004)
    • Add the 'env' parameter to creating/updating Inference Endpoints (#3045)
  • @Weyaxi
    • Handle Rate Limits in Pagination with Automatic Retries (#2970)
  • @rajatarya
    • Add HTTP Download support for files > 50GB (#2991)
    • Docs for xet env variables (#3024)
    • Minor xet changes: HF_HUB_DISABLE_XET flag, suppress logger.info (#3039)
  • @Vaibhavs10
    • add route payload to deploy Inference Endpoints (#3013)
  • @maximizemaxwell
    • Fix dynamic commit size (#3016)
  • @emmanuel-ferdman
    • Migrate to logger.warning usage (#3056)
Apr 8, 2025
v0.30.2: Fix text-generation task in InferenceClient

Fixing some InferenceClient-related bugs:

  • [Inference Providers] Fix text-generation when using an external provider #2982 by @hanouticelina
  • Fix HfInference conversational #2985 by @Wauplin

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.30.1...v0.30.2

Mar 31, 2025
v0.30.1: fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction'
Mar 28, 2025
Xet is here! (+ many cool Inference-related things!)

🚀 Ready. Xet. Go!

This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates at the file level, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by xet-core, a Rust-based package that handles all the low-level details.
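To see why chunk-level deduplication matters, here is a toy sketch with fixed-size chunks (real Xet uses content-defined chunking via xet-core; this is only an illustration):

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4) -> list[str]:
    """Split `data` into fixed-size chunks and hash each one."""
    return [
        hashlib.sha256(data[i : i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

v1 = b"AAAABBBBCCCCDDDD"
v2 = b"AAAABBBBXXXXDDDD"  # one edited region

# File-level dedup (LFS): any change means re-storing the whole file.
# Chunk-level dedup (Xet): only changed chunks need new storage.
shared = set(chunk_hashes(v1)) & set(chunk_hashes(v2))
print(f"{len(shared)} of {len(chunk_hashes(v2))} chunks reused")
```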

You can start using Xet today by installing the optional dependency:

pip install -U huggingface_hub[hf_xet]

With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.

Blog post: Xet on the Hub
Docs: Storage backends → Xet

[!TIP]
Want to store your own files with Xet? We’re gradually rolling out support on the Hugging Face Hub, so hf_xet uploads may need to be enabled for your repo. Join the waitlist to get onboarded soon!

This is the result of collaborative work by @bpronan, @hanouticelina, @rajatarya, @jsulz, @assafvayner, @Wauplin, + many others on the infra/Hub side!

  • Xet download workflow by @hanouticelina in #2875
  • Add ability to enable/disable xet storage on a repo by @hanouticelina in #2893
  • Xet upload workflow by @hanouticelina in #2887
  • Xet Docs for huggingface_hub by @rajatarya in #2899
  • Adding Token Refresh Xet Tests by @rajatarya in #2932
  • Using a two stage download path for xet files. by @bpronan in #2920
  • add xetEnabled as an expand property by @hanouticelina in #2907
  • Xet integration by @Wauplin in #2958

⚡ Enhanced InferenceClient

The InferenceClient has received significant updates and improvements in this release, making it more robust and easy to work with.

We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.

  • Add Cohere as an Inference Provider by @alexrs-cohere in #2888
  • Add Cerebras provider by @Wauplin in #2901
  • remove cohere from testing and fix quality by @hanouticelina in #2902

Novita is now our 3rd provider to support text-to-video task after Fal.ai and Replicate:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)
  • [Inference Providers] Add text-to-video support for Novita by @hanouticelina in #2922

It is now possible to centralize billing on your organization rather than on individual accounts! This helps companies manage their budgets and set limits at the team level. The organization must be subscribed to Enterprise Hub.

from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai", bill_to="openai")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
  • Support bill_to in InferenceClient by @Wauplin in #2940

Handling long-running inference tasks just got easier! To prevent request timeouts, we've introduced asynchronous calls for text-to-video inference. We expect more providers to leverage the same structure soon, ensuring better robustness and developer experience.

  • [Inference Providers] Async calls for fal.ai by @hanouticelina in #2927
  • update polling interval by @hanouticelina in #2937
  • [Inference Providers] Fix status and response URLs when polling text-to-video results with fal-ai by @hanouticelina in #2943

Miscellaneous improvements:

  • [Bot] Update inference types by @HuggingFaceInfra in #2832
  • Update InferenceClient docstring to reflect that token=False is no longer accepted by @abidlabs in #2853
  • [Inference providers] Root-only base URLs by @Wauplin in #2918
  • Add prompt in image_to_image type by @Wauplin in #2956
  • [Inference Providers] fold OpenAI support into provider parameter by @hanouticelina in #2949
  • clean up some inference stuff by @Wauplin in #2941
  • regenerate cassettes by @hanouticelina in #2925
  • Fix payload model name when model id is a URL by @hanouticelina in #2911
  • [InferenceClient] Fix token initialization and add more tests by @hanouticelina in #2921
  • [Inference Providers] check inference provider mapping for HF Inference API by @hanouticelina in #2948

✨ New Features and Improvements

This release also includes several other notable features and improvements.

It's now possible to pass a path with wildcard to the upload command instead of passing --include=... option:

huggingface-cli upload my-cool-model *.safetensors
  • Added support for Wildcards in huggingface-cli upload by @devesh-2002 in #2868

Deploying an Inference Endpoint from the Model Catalog just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.

from huggingface_hub import create_inference_endpoint_from_catalog

endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()

endpoint.client.chat_completion(...)
  • Support deploy Inference Endpoint from model catalog by @Wauplin in #2892

The ModelHubMixin got two small updates:

  • authors can provide a paper URL that will be added to all model cards pushed by the library.
  • dataclasses are now supported for any init arg (previously this was only the case for config)
  • Add paper URL to hub mixin by @NielsRogge in #2917
  • [HubMixin] handle dataclasses in all args, not only 'config' by @Wauplin in #2928

You can now sort by name, size, last updated, and last used when using the delete-cache command:

huggingface-cli delete-cache --sort=size
  • feat: add --sort arg to delete-cache to sort by size by @AlpinDale in #2815

Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see docs). This release makes it possible to do the same programmatically. The goal is to enable users to free up storage space in their private repositories.

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)

[!WARNING] This is a power-user tool to use carefully. Deleting LFS files from a repo is an irreversible action.

  • Support permanently deleting LFS files by @Wauplin in #2954

💔 Breaking Changes

labels has been removed from InferenceClient.zero_shot_classification and InferenceClient.zero_shot_image_classification tasks in favor of candidate_labels. A proper deprecation warning had been in place for this change.

  • Prepare for 0.30 by @Wauplin in #2878
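For reference, a sketch of the surviving argument name (wrapped in a helper so nothing is sent at import time; the labels are arbitrary examples):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()

def classify(text: str):
    # `labels=` is gone; pass `candidate_labels=` instead.
    # Calling this requires a valid Hugging Face token.
    return client.zero_shot_classification(
        text,
        candidate_labels=["positive", "negative", "neutral"],
    )
```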

🛠️ Small Fixes and Maintenance

🐛 Bug and Typo Fixes

  • Fix revision bug in _upload_large_folder.py by @yuantuo666 in #2879
  • bug fix in inference_endpoint wait function for proper waiting on update by @Ajinkya-25 in #2867
  • Update SpaceHardware enum by @Wauplin in #2891
  • Fix: Restore sys.stdout in notebook_login after error by @LEEMINJOO in #2896
  • Remove link to unmaintained model card app Space by @davanstrien in #2897
  • Fixing a typo in chat_completion example by @Wauplin in #2910
  • chore: Link to Authentication by @FL33TW00D in #2905
  • Handle file-like objects in curlify by @hanouticelina in #2912
  • Fix typos by @omahs in #2951
  • Add expanduser and expandvars to path envvars by @FredHaa in #2945

🏗️ Internal

Thanks to the work previously introduced by the diffusers team, we've published a GitHub Action that runs code style tooling on demand on Pull Requests, making the life of contributors and reviewers easier.

  • add style bot GitHub action by @hanouticelina in #2898
  • fix style bot GH action by @hanouticelina in #2906
  • Fix bot style GH action (again) by @hanouticelina in #2909

Other minor updates:

  • Fix prerelease CI by @Wauplin in #2877
  • Update update-inference-types.yaml by @Wauplin in #2926
  • [Internal] Fix check parameters script by @hanouticelina in #2957

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @Ajinkya-25
    • bug fix in inference_endpoint wait function for proper waiting on update (#2867)
  • @abidlabs
    • Update InferenceClient docstring to reflect that token=False is no longer accepted (#2853)
  • @devesh-2002
    • Added support for Wildcards in huggingface-cli upload (#2868)
  • @alexrs-cohere
    • Add Cohere as an Inference Provider (#2888)
  • @NielsRogge
    • Add paper URL to hub mixin (#2917)
  • @AlpinDale
    • feat: add --sort arg to delete-cache to sort by size (#2815)
  • @FredHaa
    • Add expanduser and expandvars to path envvars (#2945)
  • @omahs
    • Fix typos (#2951)
Mar 11, 2025
[v0.29.3]: Adding 2 new Inference Providers: Cerebras and Cohere 🔥

Added client-side support for Cerebras and Cohere providers for upcoming official launch on the Hub.

Cerebras: https://github.com/huggingface/huggingface_hub/pull/2901. Cohere: https://github.com/huggingface/huggingface_hub/pull/2888.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.29.2...v0.29.3

Mar 5, 2025
[v0.29.2] Fix payload model name when model id is a URL & Restore `sys.stdout` in `notebook_login()` after error

This patch release includes two fixes:

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.29.1...v0.29.2

Feb 20, 2025
[v0.29.1] Fix revision URL encoding in `upload_large_folder` & Fix endpoint update state handling in `InferenceEndpoint.wait()`

This patch release includes two fixes:

  • Fix revision bug in _upload_large_folder.py #2879
  • bug fix in inference_endpoint wait function for proper waiting on update #2867

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.29.0...v0.29.1

Feb 18, 2025
[v0.29.0]: Introducing 4 new Inference Providers: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita 🔥

We’re thrilled to announce the addition of four more outstanding serverless Inference Providers to the Hugging Face Hub: Fireworks AI, Hyperbolic, Nebius AI Studio, and Novita. These providers join our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub’s model pages. This release adds official support for these 4 providers, making it super easy to use a wide variety of models with your preferred providers.

See our announcement blog for more details: https://huggingface.co/blog/new-inference-providers.

  • Add Fireworks AI provider + instructions for new provider by @Wauplin in #2848
  • Add Hyperbolic provider by @hanouticelina in #2863
  • Add Novita provider by @hanouticelina in #2865
  • Nebius AI Studio provider added by @Aktsvigun in #2866
  • Add Black Forest Labs provider by @hanouticelina in #2864

Note that Black Forest Labs is not yet supported on the Hub. Once we announce it, huggingface_hub 0.29.0 will automatically support it.

⚡ Other Inference updates

  • Default to base_url if provided by @Wauplin in #2805
  • update supported models by @hanouticelina in #2813
  • [InferenceClient] Better handling of task parameters by @hanouticelina in #2812
  • Add YuE (music gen) from fal.ai by @Wauplin in #2801
  • [InferenceClient] Renaming extra_parameters to extra_body by @hanouticelina in #2821
  • fix automatic-speech-recognition output parsing by @hanouticelina in #2826
  • [Bot] Update inference types by @HuggingFaceInfra in #2791
  • Support inferenceProviderMapping as expand property by @Wauplin in #2841
  • Handle extra fields in inference types by @Wauplin in #2839
  • [InferenceClient] Add dynamic inference providers mapping by @hanouticelina in #2836
  • (misc) Deprecate some hf-inference specific features (wait-for-model header, can't override model's task, get_model_status, list_deployed_models) by @Wauplin in #2851
  • Partial revert #2851: allow task override on sentence-similarity by @Wauplin in #2861
  • Fix Inference Client VCR tests by @hanouticelina in #2858
  • update new provider doc by @hanouticelina in #2870

💔 Breaking changes

None.

🛠️ Small fixes and maintenance

😌 QoL improvements

  • dev(narugo): add resume for ranged headers of http_get function by @narugo1992 in #2823

🐛 Bug and typo fixes

  • [Docs] Fix broken link in CLI guide documentation by @hanouticelina in #2799
  • Replace urljoin for HF_ENDPOINT paths by @anael-l in #2806
  • InferenceClient some minor docstrings thingies by @julien-c in #2810
  • Do not send staging token to production by @Wauplin in #2811
  • Add HF_DEBUG environment variable for debugging/reproducibility by @Wauplin in #2819
  • Fix curlify by @Wauplin in #2828
  • Improve whoami() error messages by specifying token source by @aniketqw in #2814
  • Fix error message if invalid token on file download by @Wauplin in #2847
  • Fix test_dataset_info (missing dummy dataset) by @Wauplin in #2850
  • Fix is_jsonable if integer key in dict by @Wauplin in #2857

🏗️ internal

  • another test by @Wauplin (direct commit on main)
  • feat(ci): ignore unverified trufflehog results by @Wauplin in #2837
  • Add datasets and diffusers to prerelease tests by @Wauplin in #2834
  • Always proxy hf-inference calls + update tests by @Wauplin in #2798
  • Skip list_models(inference=...) tests in CI by @Wauplin in #2852
  • Deterministic test_export_folder (dduf tests) by @Wauplin in #2854
  • [cleanup] Unique constants in tests + env variable for inference tests by @Wauplin in #2855
  • feat: Adds a new environment variable HF_HUB_USER_AGENT_ORIGIN to set origin of calls in user-agent by @Hugoch in #2869

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @narugo1992
    • dev(narugo): add resume for ranged headers of http_get function (#2823)
  • @Aktsvigun
    • Nebius AI Studio provider added (#2866)
Jan 30, 2025
v0.28.1: FIX path in `HF_ENDPOINT` discarded

Release 0.28.0 introduced a bug making it impossible to set an HF_ENDPOINT env variable whose value contains a subpath. This has been fixed in https://github.com/huggingface/huggingface_hub/pull/2807.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.28.0...v0.28.1

Jan 28, 2025
[v0.28.0]: Third-party Inference Providers on the Hub & multiple quality of life improvements and bug fixes

⚡️Unified Inference Across Multiple Inference Providers

The InferenceClient now supports third-party providers, offering a unified interface to run inference across multiple services while leveraging models from the Hugging Face Hub. This update enables developers to:

  • 🌐 Switch providers seamlessly - Transition between inference providers with a single interface.
  • 🔗 Unified model IDs - Always reference Hugging Face Hub model IDs, even when using external providers.
  • 🔑 Simplified billing and access management - You can use your Hugging Face Token for routing to third-party providers (billed through your HF account).

A list of supported third-party providers can be found here.

Example of text-to-image inference with Replicate:

>>> from huggingface_hub import InferenceClient

>>> replicate_client = InferenceClient(
...     provider="replicate",
...     api_key="my_replicate_api_key",  # Using your personal Replicate key
... )
>>> image = replicate_client.text_to_image(
...     "A cyberpunk cat hacking neural networks",
...     model="black-forest-labs/FLUX.1-schnell",
... )
>>> image.save("cybercat.png")

Another example of chat completion with Together AI:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="together",  # Use Together AI provider
...     api_key="<together_api_key>",  # Pass your Together API key directly
... )
>>> client.chat_completion(
...     model="deepseek-ai/DeepSeek-R1",
...     messages=[{"role": "user", "content": "How many r's are there in strawberry?"}],
... )

When using external providers, you can choose between two access modes: either use the provider's native API key, as shown in the examples above, or route calls through Hugging Face infrastructure (billed to your HF account):

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(
...     provider="fal-ai",
...     token="hf_****"  # Your Hugging Face token
... )

⚠️ Parameter availability may vary between providers - check each provider's documentation. 🔜 New providers/models/tasks will be added iteratively in the future. 👉 You can find a list of supported tasks per provider and more details here.

  • [InferenceClient] Add third-party providers support by @hanouticelina in #2757
  • Unified prepare_request method + class-based providers by @Wauplin in #2777
  • [InferenceClient] Support proxy calls for 3rd party providers by @hanouticelina in #2781
  • [InferenceClient] Add text-to-video task and update supported tasks and models by @hanouticelina in #2786
  • Add type hints for providers by @Wauplin in #2788
  • [InferenceClient] Update inference documentation by @hanouticelina in #2776
  • Add text-to-video to supported tasks by @Wauplin in #2790

✨ HfApi

The following change aligns the client with server-side updates by adding new repository properties: usedStorage and resourceGroup.

[HfApi] update list of repository properties following server side updates by @hanouticelina in #2728

Extends empty commit prevention to file copy operations, preserving clean version histories when no changes are made.

[HfApi] prevent empty commits when copying files by @hanouticelina in #2730

🌐 📚 Documentation

Thanks to @WizKnight, the Hindi translation is much better!

Improved Hindi Translation in Documentation📝 by @WizKnight in #2697

💔 Breaking changes

The like endpoint has been removed to prevent misuse. You can still remove existing likes using the unlike endpoint.

[HfApi] remove like endpoint by @hanouticelina in #2739

🛠️ Small fixes and maintenance

😌 QoL improvements

  • [InferenceClient] flag chat_completion()'s logit_bias as UNUSED by @hanouticelina in #2724
  • Remove unused parameters from method's docstring by @hanouticelina in #2738
  • Add optional rejection_reason when rejecting a user access token by @Wauplin in #2758
  • Add py.typed to be compliant with PEP-561 again by @hanouticelina in #2752

🐛 Bug and typo fixes

  • Fix super_squash_history revision not urlencoded by @Wauplin in #2795
  • Replace model repo with repo in docstrings by @albertvillanova in #2715
  • [BUG] Fix 404 NOT FOUND issue caused by endpoint tail slash by @Mingqi2 in #2721
  • Fix typing.get_type_hints call on a ModelHubMixin by @aliberts in #2729
  • fix typo by @qwertyforce in #2762
  • rejection reason docstring by @Wauplin in #2764
  • Add timeout to WeakFileLock by @Wauplin in #2751
  • Fix CardData.get() to respect default values when None by @hanouticelina in #2770
  • Fix RepoCard.load when passing a repo_id that is also a dir path by @Wauplin in #2771
  • Fix filename too long when downloading to local folder by @Wauplin in #2789

🏗️ internal

  • Migrate to new Ruff "2025 style guide" formatter by @hanouticelina in #2749
  • remove org tokens tests by @hanouticelina in #2759
  • Fix RepoCard test on Windows by @hanouticelina in #2774
  • [Bot] Update inference types by @HuggingFaceInfra in #2712
Jan 6, 2025
[v0.27.1]: Fix `typing.get_type_hints` call on a `ModelHubMixin`
Dec 13, 2024
[v0.27.0] DDUF tooling, torch model loading helpers & multiple quality of life improvements and bug fixes

📦 Introducing DDUF tooling

DDUF (DDUF's Diffusion Unified Format) is a single-file format for diffusion models that aims to unify the different model distribution methods and weight-saving formats by packaging all model components into a single file. Detailed documentation will be available soon.

The huggingface_hub library now provides tooling to handle DDUF files in Python. It includes helpers to read and export DDUF files, and built-in rules to validate file integrity.

How to write a DDUF file?

>>> from huggingface_hub import export_folder_as_dduf

# Export "path/to/FLUX.1-dev" folder as a DDUF file
>>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")

How to read a DDUF file?

>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

# Read DDUF metadata (only metadata is loaded, lightweight operation)
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

# Load the `model_index.json` content
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']}

# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)

⚠️ Note that this is a very early version of the parser. The API and implementation can evolve in the near future. 👉 More details about the API in the documentation here.

DDUF parser v0.1 by @Wauplin in #2692

💾 Serialization

Following the introduction of the torch serialization module in 0.22.* and the support of saving torch state dict to disk in 0.24.*, we now provide helpers to load torch state dicts from disk. By centralizing these functionalities in huggingface_hub, we ensure a consistent implementation across the HF ecosystem while allowing external libraries to benefit from standardized weight handling.

>>> from huggingface_hub import load_torch_model, load_state_dict_from_file

# load state dict from a single file
>>> state_dict = load_state_dict_from_file("path/to/weights.safetensors")

# Directly load weights into a PyTorch model
>>> model = ... # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")

More details in the serialization package reference.

[Serialization] support loading torch state dict from disk by @hanouticelina in #2687

We added a flag to the save_torch_state_dict() helper to properly handle model saving in distributed environments, aligning with existing implementations across the Hugging Face ecosystem:

[Serialization] Add is_main_process argument to save_torch_state_dict() by @hanouticelina in #2648

A bug with shared tensor handling reported in transformers#35080 has been fixed:

add argument to pass shared tensors keys to discard by @hanouticelina in #2696

✨ HfApi

The following changes align the client with server-side updates in how security metadata is handled and exposed in the API responses. In particular, the repository security status returned by HfApi().model_info() is now available in the security_repo_status field:

from huggingface_hub import HfApi

api = HfApi()

model = api.model_info("your_model_id", securityStatus=True)

# get security status info of your model
- security_info = model.securityStatus
+ security_info = model.security_repo_status
  • Update how file's security metadata is retrieved following changes in the API response by @hanouticelina in #2621
  • Expose repo security status field in ModelInfo by @hanouticelina in #2639

🌐 📚 Documentation

Thanks to @miaowumiaomiaowu, more documentation is now available in Chinese! And thanks to @13579606 for reviewing these PRs. Check out the result here.

📝 Translating docs to Simplified Chinese by @miaowumiaomiaowu in #2689, #2704 and #2705.

💔 Breaking changes

A few breaking changes have been introduced:

  • RepoCardData serialization now preserves None values in nested structures.
  • InferenceClient.image_to_image() now takes a target_size argument instead of height and width arguments. This has been reflected in the InferenceClient async equivalent as well.
  • InferenceClient.table_question_answering() no longer accepts a parameter argument. This has been reflected in the InferenceClient async equivalent as well.
  • Due to low usage, list_metrics() has been removed from HfApi.
  • Do not remove None values in RepoCardData serialization by @Wauplin in #2626
  • manually update chat completion params by @hanouticelina in #2682
  • [Bot] Update inference types #2688
  • rm list_metrics by @julien-c in #2702

⏳ Deprecations

Some deprecations have been introduced as well:

  • Legacy token permission checks are deprecated as they are no longer relevant with fine-grained tokens. This includes is_write_action in build_hf_headers() and write_permission=True in login methods. get_token_permission has been deprecated as well.
  • The labels argument is deprecated in InferenceClient.zero_shot_classification() and InferenceClient.image_zero_shot_classification(). This has been reflected in the InferenceClient async equivalent as well.
  • Deprecate is_write_action and write_permission=True when login by @Wauplin in #2632
  • Fix and deprecate get_token_permission by @Wauplin in #2631
  • [Inference Client] fix param docstring and deprecate labels param in zero-shot classification tasks by @hanouticelina in #2668

🛠️ Small fixes and maintenance

😌 QoL improvements

  • Add utf8 encoding to read_text to avoid Windows charmap crash by @tomaarsen in #2627
  • Add user CLI unit tests by @hanouticelina in #2628
  • Update consistent error message (we can't do much about it) by @Wauplin in #2641
  • Warn about upload_large_folder if really large folder by @Wauplin in #2656
  • Support context manager in commit scheduler by @Wauplin in #2670
  • Fix autocompletion not working with ModelHubMixin by @Wauplin in #2695
  • Enable tqdm progress in cloud environments by @cbensimon in #2698

🐛 Bug and typo fixes

  • bugfix huggingface-cli command execution in python3.8 by @PineApple777 in #2620
  • Fix documentation link formatting in README_cn by @BrickYo in #2615
  • Update hf_file_system.md by @SwayStar123 in #2616
  • Fix download local dir edge case (remove lru_cache) by @Wauplin in #2629
  • Fix typos by @omahs in #2634
  • Fix ModelCardData's datasets typing by @hanouticelina in #2644
  • Fix HfFileSystem.exists() for deleted repos and update documentation by @hanouticelina in #2643
  • Fix max tokens default value in text generation and chat completion by @hanouticelina in #2653
  • Fix sorting properties by @hanouticelina in #2655
  • Don't write the ref file unless necessary by @d8ahazard in #2657
  • update attribute used in delete_collection_item docstring by @davanstrien in #2659
  • 🐛: Fix bug by ignoring specific files in cache manager by @johnmai-dev in #2660
  • Bug in model_card_consistency_reminder.yml by @deanwampler in #2661
  • [Inference Client] fix zero_shot_image_classification's parameters by @hanouticelina in #2665
  • Use asyncio.sleep in AsyncInferenceClient (not time.sleep) by @Wauplin in #2674
  • Make sure create_repo respect organization privacy settings by @Wauplin in #2679
  • Fix timestamp parsing to always include milliseconds by @hanouticelina in #2683
  • will be used by @julien-c in #2701
  • remove context manager when loading shards and handle mlx weights by @hanouticelina in #2709

🏗️ internal

  • prepare for release v0.27 by @hanouticelina in #2622
  • Support python 3.13 by @hanouticelina in #2636
  • Add CI to auto-generate inference types by @Wauplin in #2600
  • [InferenceClient] Automatically handle outdated task parameters by @hanouticelina in #2633
  • Fix logo in README when dark mode is on by @hanouticelina in #2669
  • Fix lint after ruff update by @Wauplin in #2680
  • Fix test_list_spaces_linked by @Wauplin in #2707
Dec 6, 2024
[v0.26.5]: Serialization: Add argument to pass shared tensors names to drop when saving
Nov 28, 2024
[v0.26.3]: Fix timestamp parsing to always include milliseconds
Oct 28, 2024
[v0.26.2] Fix: Reflect API response changes in file and repo security status fields

This patch release includes updates to align with recent API response changes:

  • Update how file's security metadata is retrieved following changes in the API response (#2621).
  • Expose repo security status field in ModelInfo (#2639).

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.26.1...v0.26.2

Oct 21, 2024
[v0.26.1] Hot-fix: fix Python 3.8 support for `huggingface-cli` commands
Oct 17, 2024
v0.26.0: Multi-tokens support, conversational VLMs and quality of life improvements

🔐 Multiple access tokens support

Managing fine-grained access tokens locally just became much easier and more efficient! Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.

To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:

  • Store multiple tokens on your machine by simply logging in with the login() command with each token:
huggingface-cli login
  • Switch between tokens and choose the one that will be used for all interactions with the Hub:
huggingface-cli auth switch
  • List available access tokens on your machine:
huggingface-cli auth list
  • Delete a specific token from your machine with:
huggingface-cli logout [--token-name TOKEN_NAME]

✅ Nothing changes if you are using the HF_TOKEN environment variable as it takes precedence over the token set via the CLI. More details in the documentation. 🤗
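The precedence rule can be checked directly with get_token(), which resolves the token the same way the rest of the library does (a sketch; the token value below is a dummy placeholder, not a real token):

```python
import os
from huggingface_hub import get_token

# HF_TOKEN takes precedence over any token stored via `huggingface-cli auth switch`.
os.environ["HF_TOKEN"] = "hf_dummy_token"

print(get_token())  # → hf_dummy_token
```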

  • Support multiple tokens locally by @hanouticelina in #2549

⚡️ InferenceClient improvements

🖼️ Conversational VLMs support

Inference with conversational vision-language models is now supported via InferenceClient's chat completion!

from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded image
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)

print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.

🔧 More complete support for inference parameters

You can now pass additional inference parameters to more task methods in the InferenceClient, including: image_classification, text_classification, image_segmentation, object_detection, document_question_answering and more! For more details, visit the InferenceClient reference guide.
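For instance, a task parameter can now be passed directly to the method call (a hedged sketch: the model id, input file, and top_k value are illustrative, and actually running the request requires network access):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()

# Task-specific parameters such as `top_k` can now be passed directly.
# Uncomment to run against the Inference API (requires a network connection):
# results = client.image_classification(
#     "cat.jpg",
#     model="google/vit-base-patch16-224",
#     top_k=3,  # return only the 3 highest-scoring labels
# )
```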

✅ Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗

  • Support VLM in chat completion (+some specs updates) by @Wauplin in #2556
  • [Inference Client] Add task parameters and a maintenance script of these parameters by @hanouticelina in #2561
  • Document vision chat completion with Llama 3.2 11B V by @Wauplin in #2569

✨ HfApi

update_repo_settings can now be used to switch visibility status of a repo. This is a drop-in replacement for update_repo_visibility which is deprecated and will be removed in version v0.29.0.

- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
  • Feature: switch visibility with update_repo_settings by @WizKnight in #2541

📄 Daily papers API is now supported in huggingface_hub, enabling you to search for papers on the Hub and retrieve detailed paper information.

>>> from huggingface_hub import HfApi

>>> api = HfApi()
# List all papers with "attention" in their title
>>> api.list_papers(query="attention")
# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")
  • Daily Papers API by @hlky in #2554

🌐 📚 Documentation

Thanks to efforts from the Tamil-speaking community, guides and package references are now being translated to Tamil! Check out the result here.

  • Translated index.md and installation.md to Tamil by @Raghul-M in #2555

💔 Breaking changes

A few breaking changes have been introduced:

  • cached_download(), url_to_filename(), filename_to_url() methods are now completely removed. From now on, you will have to use hf_hub_download() to benefit from the new cache layout.
  • legacy_cache_layout argument from hf_hub_download() has been removed as well.

These breaking changes have been announced with a regular deprecation cycle.

Also, all templating-related utilities have been removed from huggingface_hub. Client-side templating is no longer necessary now that all conversational text-generation models in InferenceAPI are served with TGI.

  • Prepare for release 0.26 by @hanouticelina in #2579
  • Remove templating utility by @Wauplin in #2611

🛠️ Small fixes and maintenance

😌 QoL improvements

  • docs: move translations to i18n by @SauravMaheshkar in #2566
  • Preserve card metadata format/ordering on load->save by @hlky in #2570
  • Remove raw HTML from error message content and improve request ID capture by @hanouticelina in #2584
  • [Inference Client] Factorize inference payload build by @hanouticelina in #2601
  • Use proper logging in auth module by @hanouticelina in #2604

🐛 fixes

  • Use repo_type in HfApi.grant_access url by @albertvillanova in #2551
  • Raise error if encountered in chat completion SSE stream by @Wauplin in #2558
  • Add 500 HTTP Error to retry list by @farzadab in #2567
  • Add missing documentation by @adiaholic in #2572
  • Serialization: take into account meta tensor when splitting the state_dict by @SunMarc in #2591
  • Fix snapshot download when local_dir is provided. by @hanouticelina in #2592
  • Fix PermissionError while creating '.no_exist/' directory in cache by @Wauplin in #2594
  • Fix 2609 - Import packaging by default by @Wauplin in #2610

🏗️ internal

  • Fix test by @Wauplin in #2582
  • Make SafeTensorsInfo.parameters a Dict instead of List by @adiaholic in #2585
  • Fix tests listing text generation models by @Wauplin in #2593
  • Skip flaky Repository test by @Wauplin in #2595
  • Support python 3.12 by @hanouticelina in #2605

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @SauravMaheshkar
    • docs: move translations to i18n (#2566)
  • @WizKnight
    • Feature: switch visibility with update_repo_settings #2537 (#2541)
  • @hlky
    • Preserve card metadata format/ordering on load->save (#2570)
    • Daily Papers API (#2554)
  • @Raghul-M
    • Translated index.md and installation.md to Tamil (#2555)
Oct 9, 2024
[v0.25.2]: Fix snapshot download when `local_dir` is provided

Full Changelog: v0.25.1...v0.25.2. For more details, refer to the related PR: https://github.com/huggingface/huggingface_hub/pull/2592

Sep 23, 2024
[v0.25.1]: Raise error if encountered in chat completion SSE stream

Full Changelog: v0.25.0...v0.25.1. For more details, refer to the related PR #2558.
