v1.0: Building for the Next Decade
Check out our blog post announcement!
The huggingface_hub library now uses httpx instead of requests for HTTP calls. This change improves performance and supports synchronous and asynchronous requests through the same interface. We therefore dropped both the requests and aiohttp dependencies.
get_session and hf_raise_for_status still exist: get_session now returns an httpx.Client, and hf_raise_for_status processes an httpx.Response object. An additional get_async_client utility has been added for async logic.
The exhaustive list of breaking changes can be found here.
hf_raise_for_status on async stream + tests by @Wauplin in #3442
git_vs_http guide by @Wauplin in #3357

huggingface_hub 1.0 marks a complete transformation of our command-line experience. We've reimagined the CLI from the ground up, creating a tool that feels native to modern ML workflows while maintaining the simplicity the community loves.
huggingface-cli
This release marks the end of an era with the complete removal of the huggingface-cli command. The new hf command (introduced in v0.34.0) takes its place with a cleaner, more intuitive design that follows a logical "resource-action" pattern. This breaking change simplifies the user experience and aligns with modern CLI conventions: no more typing those extra 13 characters!
Remove huggingface-cli entirely in favor of hf by @Wauplin in #3404

hf CLI Revamp
The new CLI introduces a comprehensive set of commands for repository and file management that expose powerful HfApi functionality directly from the terminal:
> hf repo --help
Usage: hf repo [OPTIONS] COMMAND [ARGS]...
Manage repos on the Hub.
Options:
--help Show this message and exit.
Commands:
branch Manage branches for a repo on the Hub.
create Create a new repo on the Hub.
delete Delete a repo from the Hub.
move Move a repository from a namespace to another namespace.
settings Update the settings of a repository.
tag Manage tags for a repo on the Hub.
A dry run mode has been added to hf download, letting you preview exactly what will be downloaded before committing to the transfer. It shows file sizes, what's already cached, and the total bandwidth required in a clean table format:
> hf download gpt2 --dry-run
[dry-run] Fetching 26 files: 100%|██████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 50.66it/s]
[dry-run] Will download 26 files (out of 26) totalling 5.6G.
File Bytes to download
--------------------------------- -----------------
.gitattributes 445.0
64-8bits.tflite 125.2M
64-fp16.tflite 248.3M
64.tflite 495.8M
README.md 8.1K
config.json 665.0
flax_model.msgpack 497.8M
generation_config.json 124.0
merges.txt 456.3K
model.safetensors 548.1M
onnx/config.json 879.0
onnx/decoder_model.onnx 653.7M
onnx/decoder_model_merged.onnx 655.2M
...
The CLI now provides intelligent shell auto-completion that suggests available commands, subcommands, options, and arguments as you type - making command discovery effortless and reducing the need to constantly check --help.

The CLI now also checks for updates in the background, ensuring you never miss important improvements or security fixes. Once every 24 hours, the CLI silently checks PyPI for newer versions and notifies you when an update is available - with personalized upgrade instructions based on your installation method.
The cache management CLI has been completely revamped: hf cache scan and hf cache delete have been removed in favor of Docker-inspired commands that are more intuitive. The new hf cache ls provides rich filtering capabilities, hf cache rm enables targeted deletion, and hf cache prune cleans up detached revisions.
# List cached repos
>>> hf cache ls
ID SIZE LAST_ACCESSED LAST_MODIFIED REFS
--------------------------- -------- ------------- ------------- -----------
dataset/nyu-mll/glue 157.4M 2 days ago 2 days ago main script
model/LiquidAI/LFM2-VL-1.6B 3.2G 4 days ago 4 days ago main
model/microsoft/UserLM-8b 32.1G 4 days ago 4 days ago main
Found 3 repo(s) for a total of 5 revision(s) and 35.5G on disk.
# List cached repos with filters
>>> hf cache ls --filter "type=model" --filter "size>3G" --filter "accessed>7d"
# Output in different format
>>> hf cache ls --format json
>>> hf cache ls --revisions # Replaces the old --verbose flag
# Cache removal
>>> hf cache rm model/meta-llama/Llama-2-70b-hf
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q) # Remove old items
# Clean up detached revisions
>>> hf cache prune # Removes all unreferenced revisions
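For scripting, the same cache information is also available from Python via the long-standing scan_cache_dir utility (a minimal sketch; the repos listed depend entirely on your local cache):

```python
from huggingface_hub import scan_cache_dir
from huggingface_hub.errors import CacheNotFound

try:
    cache_info = scan_cache_dir()
except CacheNotFound:
    print("No cache directory yet.")
else:
    # Largest repos first, similar in spirit to `hf cache ls`
    for repo in sorted(cache_info.repos, key=lambda r: r.size_on_disk, reverse=True):
        print(f"{repo.repo_type}/{repo.repo_id:40} {repo.size_on_disk_str:>8} "
              f"{len(repo.revisions)} revision(s)")
    print(f"Total: {cache_info.size_on_disk} bytes on disk.")
```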
Under the hood, this transformation is powered by Typer, significantly reducing boilerplate and making the CLI easier to maintain and extend with new features.
hf cache by @hanouticelina in #3439

The new cross-platform installers simplify CLI installation by creating isolated sandboxed environments without interfering with your existing Python setup or project dependencies. The installers work seamlessly across macOS, Linux, and Windows, automatically handling dependencies and PATH configuration.
# On macOS and Linux
>>> curl -LsSf https://hf.co/cli/install.sh | sh
# On Windows
>>> powershell -ExecutionPolicy ByPass -c "irm https://hf.co/cli/install.ps1 | iex"
Finally, the [cli] extra has been removed: the CLI now ships with the core huggingface_hub package.
Remove [cli] extra by @hanouticelina in #3451

The v1.0 release is a major milestone for the huggingface_hub library. It marks our commitment to API stability and the maturity of the library. We have made several improvements and breaking changes to make the library more robust and easier to use. A migration guide has been written to reduce friction as much as possible: https://huggingface.co/docs/huggingface_hub/concepts/migration.
We'll list all breaking changes below:
Minimum Python version is now 3.9 (instead of 3.8).
HTTP backend migrated from requests to httpx. Expect some breaking changes on advanced features and error handling. The exhaustive list can be found here.
The deprecated huggingface-cli has been removed; hf (introduced in v0.34) replaces it with a clearer resource-action CLI.
Remove huggingface-cli entirely in favor of hf by @Wauplin in #3404
The [cli] extra has been removed; the CLI now ships with the core huggingface_hub package.
Remove [cli] extra by @hanouticelina in #3451
Long-deprecated classes like HfFolder, InferenceAPI, and Repository have been removed.
Remove HfFolder and InferenceAPI classes by @Wauplin in #3344
Remove Repository class by @Wauplin in #3346
constants.hf_cache_home has been removed. Use constants.HF_HOME instead.
use_auth_token is not supported anymore; use token instead. Previously, using use_auth_token automatically redirected to token with a warning.
Removed get_token_permission, which became obsolete once fine-grained tokens arrived.
Removed update_repo_visibility. Use update_repo_settings instead.
Removed is_write_action from all build_hf_headers methods. Not relevant since fine-grained tokens arrived.
Removed the write_permission argument from login methods. Not relevant anymore.
Renamed login(new_session) to login(skip_if_logged_in) in login methods. This should cause very little friction; only a few notebooks on the Hub need updating.
Removed the resume_download, force_filename, and local_dir_use_symlinks parameters from hf_hub_download and snapshot_download (and their mixins).
Removed the library, language, tags, and task arguments from list_models.
upload_file and upload_folder now return a URL to the commit created on the Hub, like any other commit-creating method (create_commit, delete_file, etc.).
Login methods now require keyword arguments.
Removed all Keras 2.x and TensorFlow-related code.
Removed hf_transfer support. hf_xet is now the default upload/download manager.
Routing for the Chat Completion API in Inference Providers is now done server-side. This saves one HTTP call and lets us centralize the logic that routes requests to the correct provider. In the future, this enables use cases like automatically choosing the fastest or cheapest provider.
There have also been some updates to the docs.
We've added support for TypedDict to our @strict framework, which is our data validation tool for dataclasses. Typed dicts are now converted to dataclasses on-the-fly for validation, without mutating the input data. This logic is currently used by transformers to validate config files but is library-agnostic and can therefore be used by anyone. More details in this guide.
from typing import Annotated, TypedDict

from huggingface_hub.dataclasses import validate_typed_dict

def positive_int(value: int):
    if not value >= 0:
        raise ValueError(f"Value must be positive, got {value}")

class User(TypedDict):
    name: str
    age: Annotated[int, positive_int]

# Valid data
validate_typed_dict(User, {"name": "John", "age": 30})
Added a HfApi.list_organization_followers endpoint to list followers of an organization, similar to the existing one for user's followers.
sentence_similarity docstring by @tolgaakar in #3374
image-to-image by @hanouticelina in #3399
ty quality by @hanouticelina in #3441

The following contributors have made changes to the library over the last release. Thank you!
sentence_similarity docstring (#3374) (#3375)