Hugging Face/huggingface_hub

Releases: 20 · Avg: 6/mo · Versions: v1.3.2 → v1.11.0
Apr 16, 2026
[v1.11.0] Semantic Spaces search, Space logs, and more

🔍 Semantic search for Spaces

Discover Spaces using natural language. The new search_spaces() API and hf spaces search CLI use embedding-based semantic search to find relevant Spaces based on what they do - not just keyword matching on their name.

>>> from huggingface_hub import search_spaces

>>> results = search_spaces("remove background from photo")
>>> for space in results:
...     print(f"{space.id} (score: {space.score:.2f})")
briaai/BRIA-RMBG-1.4 (score: 0.87)

The same capability is available in the CLI:

$ hf spaces search "remove background from photo" --limit 3
ID                           TITLE                 SDK    LIKES STAGE   CATEGORY           SCORE
---------------------------- --------------------- ------ ----- ------- ------------------ -----
not-lain/background-removal  Background Removal    gradio 2794  RUNNING Image Editing      0.85 
briaai/BRIA-RMBG-2.0         BRIA RMBG 2.0         gradio 918   RUNNING Background Removal 0.84 
Xenova/remove-background-web Remove Background Web static 739   RUNNING Background Removal 0.81 
Hint: Use --description to show AI-generated descriptions.

# Filter by SDK, get JSON with descriptions
$ hf spaces search "chatbot" --sdk gradio --description --json --limit 1 | jq
[
  {
    "id": "BarBar288/Chatbot",
    "title": "Chatbot",
    "sdk": "gradio",
    "likes": 4,
    "stage": "RUNNING",
    "category": "Other",
    "score": 0.5,
    "description": "Perform various AI tasks like chat, image generation, and text-to-speech"
  }
]
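Embedding-based search like this typically ranks items by vector similarity rather than keyword overlap. A toy sketch of the idea (illustrative only — not the Hub's actual embedding model or scoring):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product normalized by the two vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank candidate Spaces (with made-up embeddings) against a query embedding.
query = [0.9, 0.1, 0.0]
spaces = {"a/bg-removal": [0.8, 0.2, 0.1], "b/chatbot": [0.1, 0.9, 0.3]}
ranked = sorted(spaces, key=lambda s: cosine_similarity(query, spaces[s]), reverse=True)
```

The key property is that semantically close descriptions end up close in embedding space, so "remove background from photo" can surface Spaces that never mention those exact words.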

📜 Programmatic access to Space logs

When a Space fails to build or crashes at runtime, you can now retrieve the logs programmatically — no need to open the browser. This is particularly useful for agentic workflows that need to debug Space failures autonomously.

>>> from huggingface_hub import fetch_space_logs

# Run logs (default)
>>> for line in fetch_space_logs("username/my-space"):
...     print(line, end="")

# Build logs — for BUILD_ERROR debugging
>>> for line in fetch_space_logs("username/my-space", build=True):
...     print(line, end="")

# Stream in real time
>>> for line in fetch_space_logs("username/my-space", follow=True):
...     print(line, end="")

The CLI equivalent:

$ hf spaces logs username/my-space              # run logs
$ hf spaces logs username/my-space --build      # build logs
$ hf spaces logs username/my-space -f           # stream in real time
$ hf spaces logs username/my-space -n 50        # last 50 lines

🖥️ CLI output standardization continues

This release continues the CLI output migration started in v1.9, bringing 11 more command groups to the unified --format flag. The old --quiet flags on migrated commands are replaced by --format quiet.

$ hf cache ls                          # auto-detect (human or agent)
$ hf cache ls --format json            # structured JSON
$ hf cache ls --format quiet           # minimal output, great for piping
$ hf upload my-model . .               # auto-detect (human or agent)

Confirmation prompts (e.g., hf cache rm, hf repos delete, hf buckets delete) are now mode-aware: they prompt in human mode, and require --yes in agent/json/quiet modes - no more hanging scripts.
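A simplified sketch of what such mode-aware confirmation logic could look like (hypothetical helper, not the actual huggingface_hub implementation):

```python
def confirm_destructive(action: str, mode: str, yes_flag: bool) -> bool:
    # Hypothetical logic: interactive prompt only in human mode; otherwise
    # a destructive command must be pre-approved with --yes.
    if yes_flag:
        return True
    if mode == "human":
        answer = input(f"{action}? [y/N] ")
        return answer.strip().lower() in ("y", "yes")
    # agent / json / quiet modes: fail fast instead of hanging on input()
    raise SystemExit(f"Refusing to {action} without --yes in {mode} mode.")
```

The point of the non-interactive branch is exactly what the note above describes: a script or agent gets an immediate, explicit failure rather than a process blocked on stdin.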

Commands migrated in this release: collections, discussions, extensions, endpoints, webhooks, cache, repos, repo-files, download, upload, and upload-large-folder. Remaining commands (jobs, buckets, auth login/logout) will follow in a future release.

📦 Space volumes management from the CLI

A new hf spaces volumes command group lets you manage volumes mounted in Spaces directly from the command line — list, set, and delete using the familiar -v/--volume syntax.

# List mounted volumes
$ hf spaces volumes ls username/my-space
TYPE    SOURCE                MOUNT_PATH READ_ONLY
------- --------------------- ---------- ---------
model   gpt2                  /data
dataset badlogicgames/pi-mono /data2

# Set volumes
$ hf spaces volumes set username/my-space -v hf://buckets/username/my-bucket:/data
$ hf spaces volumes set username/my-space -v hf://models/username/my-model:/models

# Delete all volumes
$ hf spaces volumes delete username/my-space

🔧 More CLI improvements

hf auth token - Prints the current token to stdout, handy for piping into other commands:

$ hf auth token
hf_xxxx
Hint: Run `hf auth whoami` to see which account this token belongs to.

# Use it in a curl call
$ hf auth token | xargs -I {} curl -H "Authorization: Bearer {}" https://huggingface.co/api/whoami-v2

💔 Breaking change

model_name deprecated in list_models - Use search instead. Both were always equivalent (both map to ?search=... in the API), but now model_name emits a deprecation warning. Removal is planned for 2.0.

# Before
>>> list_models(model_name="gemma")

# After
>>> list_models(search="gemma")

The CLI is not affected - hf models ls already uses --search.

🔧 Other improvements

🐛 Bug fixes

📖 Documentation

🏗️ Internal

Apr 14, 2026
[v1.10.2] Fix reference cycle in hf_raise_for_status
  • Fix reference cycle in hf_raise_for_status causing delayed object destruction by @Wauplin in #4092

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.10.1...v1.10.2

Apr 9, 2026
[v1.10.1] Fix copy file to folder
  • Fix copy file to folder (#4075)
  • [CLI] Slightly improve hf CLI discoverability (#4079)
  • Support kernels in list_liked_repos (#4078)

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.10.0...v1.10.1

[v1.10.0] Instant file copy and new Kernel repo type

📁 Instant file copy between Buckets and Repositories

This release introduces server-side file copy operations that let you move data between Hugging Face storage without downloading and re-uploading. You can now copy files from one Bucket to another, from a repository (model, dataset, or Space) to a Bucket, or between Buckets — all without bandwidth costs. Files tracked with Xet are copied directly by hash (no data transfer), while small text files not tracked with Xet are automatically downloaded and re-uploaded.

>>> from huggingface_hub import copy_files

# Bucket to bucket (same or different bucket)
>>> copy_files(
...     "hf://buckets/username/source-bucket/checkpoints/model.safetensors",
...     "hf://buckets/username/destination-bucket/archive/model.safetensors",
... )

# Repo to bucket
>>> copy_files(
...     "hf://datasets/username/my-dataset/processed/",
...     "hf://buckets/username/my-bucket/datasets/processed/",
... )

The same capability is available in the CLI:

# Bucket to bucket
$ hf buckets cp hf://buckets/username/source-bucket/logs/ hf://buckets/username/archives/logs/

# Repo to bucket
$ hf buckets cp hf://datasets/username/my-dataset/data/train/ hf://buckets/username/my-bucket/datasets/train/

Note that copying files from a Bucket to a Repository is not yet supported.
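The dispatch described above — server-side hash copy for Xet-tracked files, download-and-reupload otherwise — can be sketched roughly like this (hypothetical decision rule, not the library's actual code):

```python
def plan_copy(file_entry: dict) -> str:
    # Hypothetical rule mirroring the behavior described above: Xet-tracked
    # files expose a content hash the server can copy directly, so no bytes
    # move through the client; anything else falls back to a client-side
    # download + re-upload.
    if file_entry.get("xet_hash"):
        return "server-side hash copy (no data transfer)"
    return "download and re-upload"
```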

📚 Documentation: Buckets guide

⚛️ Introducing Kernel repositories

[!TIP] For building, publishing, and using kernel repos, please use the dedicated kernels package.

The Hub now supports a new kernel repository type for hosting compute kernels. This release adds first-class (but explicitly limited) support for interacting with kernel repos via the Python API. Only a subset of methods are officially supported: kernel_info, hf_hub_download, snapshot_download, list_repo_refs, list_repo_files, and list_repo_tree. Creation and deletion are also supported but restricted to a small subset of allowed users and organizations on the Hub.

>>> from huggingface_hub import kernel_info
>>> kernel_info("kernels-community/yoso")
KernelInfo(id='kernels-community/yoso', author='kernels-community', downloads=0, gated=False, last_modified=datetime.datetime(2026, 4, 3, 22, 27, 25, tzinfo=datetime.timezone.utc), likes=0, private=False)

📚 Documentation: Repository guide

📖 Documentation

🐛 Bug and typo fixes

🏗️ Internal

Apr 8, 2026
[v1.9.2] Fix set_space_volume / delete_space_volume return types
  • Fix set_space_volume / delete_space_volume return types #4061 by @abidlabs @Wauplin

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.9.1...v1.9.2

Apr 7, 2026
[v1.9.1] Fix: `set_space_volumes` sending bare array instead of object
  • Fix set_space_volumes sending bare array instead of object #4054 by @davanstrien

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.9.0...v1.9.1

Apr 2, 2026
[v1.9.0] Agent-Aware CLI, Spaces Volumes, and more

🚀 Spaces Volumes: Mount Models, Datasets, and Buckets Directly

Hugging Face Spaces now support mounting volumes, giving your Space direct filesystem access to models, datasets, and storage buckets. This replaces the deprecated persistent storage feature.

from huggingface_hub import HfApi, Volume

api = HfApi()
api.set_space_volumes(
    repo_id="username/my-space",
    volumes=[
        Volume(type="model", source="username/my-model", mount_path="/models", read_only=True),
        Volume(type="bucket", source="username/my-bucket", mount_path="/data"),
    ],
)

Volumes can also be set at creation time via create_repo(space_volumes=...) and duplicate_repo(space_volumes=...), and from the CLI with the --volume / -v flag:

# Create a Space with volumes mounted
hf repos create my-space --type space --space-sdk gradio \
    -v hf://gpt2:/models -v hf://buckets/org/b:/data

# Duplicate a Space with volumes
hf repos duplicate org/my-space my-space --type space \
    -v hf://gpt2:/models -v hf://buckets/org/b:/data
  • Add support for mounted volumes by @Wauplin in #4018
  • Support volumes at repo creation and duplication by @Wauplin in #4035
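The -v flag packs the volume type, source, and mount path into a single hf:// string. A rough sketch of how such a spec could be split apart (hypothetical parser, not the CLI's actual one):

```python
def parse_volume_spec(spec: str) -> tuple[str, str, str]:
    # Hypothetical parsing of "-v hf://<source>:<mount_path>" specs.
    path = spec.removeprefix("hf://")
    source, mount_path = path.rsplit(":", 1)
    for prefix, vtype in (("buckets/", "bucket"), ("datasets/", "dataset"), ("spaces/", "space")):
        if source.startswith(prefix):
            return vtype, source.removeprefix(prefix), mount_path
    return "model", source, mount_path  # bare repo ids are treated as models here
```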

🤖 The hf CLI Now Auto-Detects AI Agents and Adapts Its Output

AI coding agents (Claude Code, Cursor, Codex, Copilot, Gemini, ...) increasingly use the hf CLI to interact with the Hub. Until now, the output was designed for humans - ANSI colors, padded tables, emoji booleans, truncated cells - making it hard for agents to parse reliably.

Starting with v1.9, the CLI automatically detects when it's running inside an agent and adapts its output: no ANSI, no truncation, tab-separated tables, compact JSON, full timestamps. No configuration needed - it just works. This is only a first step toward making the hf CLI the primary entry point to the Hugging Face Hub for AI agents!

Agent mode is auto-detected but you can also force a mode explicitly with --format:

hf models ls --limit 5                  # auto-detect
hf models ls --limit 5 --format agent   # force agent-friendly output
hf models ls --limit 5 --format json    # structured JSON
hf models ls --limit 5 --format quiet   # IDs only, great for piping
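Auto-detection presumably keys off environment clues. A simplified sketch under the assumption that well-known agent environment variables are checked — the variable names below are illustrative examples, not the CLI's actual detection list:

```python
import os

# Hypothetical set of env vars that coding agents are known to set
# (e.g. Claude Code sets CLAUDECODE; the others are placeholders).
AGENT_ENV_VARS = ("CLAUDECODE", "CURSOR_TRACE_ID", "CODEX_SANDBOX")

def detect_output_mode(env=os.environ) -> str:
    if any(env.get(var) for var in AGENT_ENV_VARS):
        return "agent"
    return "human"
```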

Here's what an agent sees compared to a human:

hf auth whoami

# Human
✓ Logged in
  user: Wauplin
  orgs: huggingface, awesome-org

# Agent
user=Wauplin orgs=huggingface,awesome-org

# JSON
{"user": "Wauplin", "orgs": ["huggingface", "awesome-org"]}

hf models ls --author google --limit 3

# Human
ID                         DOWNLOADS TRENDING_SCORE
-------------------------- --------- --------------
google/embeddinggemma-300m 1213145   17            
google/gemma-3-4b-it       1512637   16            
google/gemma-3-27b-it      988618    12   

# Agent (TSV, no truncation, no ANSI)
id      downloads       trending_score
google/embeddinggemma-300m      1213145 17
google/gemma-3-4b-it    1512637 16
google/gemma-3-27b-it   988618  12

hf models info google/gemma-3-27b-it

# Human — pretty-printed JSON (indent=2)
{
  "id": "google/gemma-3-27b-it",
  "author": "google",
  ...
}

# Agent — compact JSON (~40% fewer tokens)
{"id": "google/gemma-3-27b-it", "author": "google", "card_data": ...}

Commands migrated so far: hf models ls|info, hf datasets ls|info|parquet|sql, hf spaces ls|info, hf papers ls|search|info, hf auth whoami. More commands will be migrated soon.

  • Add out output singleton with agent/human mode rendering by @hanouticelina in #4005
  • Migrate models, datasets, spaces, papers to out singleton by @hanouticelina in #4026
  • Add FormatWithAutoOpt with callback to auto-set output mode by @hanouticelina in #4028
  • Add tests for out output singleton by @hanouticelina in #4020
  • Add agent detection helpers by @hanouticelina in #4015
  • Enrich CLI errors with available options and commands by @hanouticelina in #4034

🧩 Install Agent Skills from the Hugging Face Marketplace

The hf skills add command now supports installing skills directly from the Hugging Face skills marketplace (https://github.com/huggingface/skills) - pre-built tools that give AI agents new capabilities.

# Install a marketplace skill
hf skills add gradio

# Install with Claude Code integration
hf skills add huggingface-gradio --claude

# Upgrade all installed skills
hf skills upgrade
  • Support skills from hf skills by @burtenshaw in #3956
  • Improve hf CLI skill description for better agent triggering by @hanouticelina in #3973

🔧 More CLI Improvements

  • Auto-install official HF CLI extensions on first invocation by @hanouticelina in #4007
  • Add summary field to hf papers search CLI output by @Wauplin in #4006
  • Interactive CLI autoupdate prompt by @Wauplin in #3983

🔧 Other Improvements

  • Clarify 404 access guidance in errors by @Pierrci in #4010
  • Add HF_HUB_DISABLE_SYMLINKS env variable to force no-symlink cache by @Wauplin in #4032
  • Add CACHEDIR.TAG to cache directories by @Wauplin in #4030
  • Support None type in strict dataclass by @Wauplin in #3987
  • Reject bool/int cross-type confusion in @strict dataclass validation by @Wauplin in #3992

🐛 Bug Fixes

  • Fix PyTorchModelHubMixin not calling eval() on safetensors load by @joaquinhuigomez in #3997
  • Bump to hf-xet 1.4.3 and add regression test by @Wauplin in #4019
  • Validate shard filenames in sharded checkpoint index files by @Wauplin in #4033
  • Fix test_create_commit_conflict test by @Wauplin in #3986
  • Do not scan CACHEDIR.TAG file in cache by @Wauplin in #4036
  • Deduplicate repo folder name generation logic by @cphlipot in #4024

📖 Documentation

  • Add tip about AI agents skill to CLI guide by @gary149 in #3970
  • Link to Hub local cache docs from manage-cache guide by @Wauplin in #3989
  • Note that environment variables are read at import time by @Wauplin in #3990
  • Add DatasetLeaderboardEntry and EvalResultEntry to docs reference by @pcuenca in #3982
  • Fix typos and outdated references in CONTRIBUTING.md by @GopalGB in #4009
  • no explicit models/ in hf:// protocol by @lhoestq in #3980
  • Add CLAUDE.md symlink pointing to AGENTS.md by @hanouticelina in #4013

🏗️ Internal

  • Bump minimum Python version from 3.9 to 3.10 by @hanouticelina in #4008
  • Use match/case statements where appropriate by @hanouticelina in #4012
  • Fix ty type-checking errors after latest release by @hanouticelina in #3978
  • Prepare for v1.9 release by @Wauplin in #3988
  • Update python-release.yml by @hf-security-analysis[bot] in #4011
  • Pin GitHub Actions to commit SHAs by @paulinebm in #4029
  • Remove claude.yml workflow file by @hf-security-analysis[bot] in #4031
  • Generate slack message for prerelease by @Wauplin in #3976

Mar 25, 2026
[v1.8.0] Mounted volumes on Jobs, complete papers CLI, and more

🚀 Jobs can now mount volumes

Jobs can now access Hugging Face repositories (models, datasets, Spaces) and Storage Buckets directly as mounted volumes in their containers. This enables powerful workflows like running queries directly against datasets, loading models without explicit downloads, and persisting training checkpoints to buckets.

from huggingface_hub import run_job, Volume

job = run_job(
    image="duckdb/duckdb",
    command=["duckdb", "-c", "SELECT * FROM '/data/**/*.parquet' LIMIT 5"],
    volumes=[
        Volume(type="dataset", source="HuggingFaceFW/fineweb", mount_path="/data"),
    ],
)
The CLI equivalent:

hf jobs run -v hf://datasets/HuggingFaceFW/fineweb:/data duckdb/duckdb duckdb -c "SELECT * FROM '/data/**/*.parquet' LIMIT 5"
  • Add volume mounting support for buckets and repos by @XciD in #3936

📖 Papers CLI is now complete

The hf papers command now has full functionality: search papers by keyword, get structured JSON metadata, and read the full paper content as markdown. The ls command is also enhanced with new filters for week, month, and submitter.

# Search papers
hf papers search "vision language"

# Get metadata
hf papers info 2601.15621

# Read as markdown
hf papers read 2601.15621
  • Complete hf papers with search, info, read + ls filters by @mishig25 in #3952

🖥️ CLI repo ID shorthand

You can now use repo ID prefixes like spaces/user/repo, datasets/user/repo, and models/user/repo as a shorthand for passing --type explicitly (e.g., spaces/user/repo instead of user/repo --type space). This works automatically for all CLI commands that accept a --type flag.

# Before
hf download user/my-space --type space
hf discussions list user/my-dataset --type dataset

# After
hf download spaces/user/my-space
hf discussions list datasets/user/my-dataset
  • Accept spaces/user/repo as repo ID prefix shorthand by @Wauplin in #3929
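A sketch of the prefix expansion (hypothetical helper, not the actual implementation):

```python
def expand_repo_id(repo_id: str) -> tuple[str, str]:
    # Hypothetical: map a "type/" prefix onto the equivalent --type value.
    prefixes = {"models/": "model", "datasets/": "dataset", "spaces/": "space"}
    for prefix, repo_type in prefixes.items():
        if repo_id.startswith(prefix):
            return repo_id[len(prefix):], repo_type
    return repo_id, "model"  # models are the default repo type on the Hub
```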

🔧 More repo visibility options

Repositories can now be created or updated with explicit visibility settings (--public, --protected) alongside the existing --private flag. This adds a visibility parameter to HfApi.create_repo, update_repo_settings, and duplicate_repo, with --protected available for Spaces only.

Protected Spaces keep their source code private while the running app remains publicly accessible.

  • Add visibility parameter to HfApi repo create/update/duplicate methods by @hanouticelina in #3951

🖥️ CLI

  • Add space-specific options to hf repos create and hf repos duplicate by @Wauplin in #3888
  • Add --format json to hf auth whoami by @hanouticelina in #3938 — docs
  • Expand nested groups, inline flags & common options glossary in SKILL.md by @hanouticelina in #3941
  • Include common options inline in generated SKILL.md by @hanouticelina in #3955
  • Fix hf extensions install on uv-managed Python by using uv when available by @hanouticelina in #3957
  • Add dataset leaderboard method to HfApi by @davanstrien in #3953
  • More explicit spaces hot-reload docs by @cbensimon in #3964
  • Update hardware flavors with HF Hub (cpu-performance, sprx8, h200, inf2x6) by @cbensimon in #3965 — docs

🔧 Other QoL Improvements

  • Rename .env to .venv in virtual environment instructions by @julien-c in #3939 — docs
  • Fix typo in --every help text by @julien-c in #3950
  • More robust stream to stdout in hf cp command by @Wauplin in #3968

🐛 Bug and typo fixes

  • Use module logger consistently and narrow bare except clauses by @mango766 in #3924
  • Fix HfFileSystem glob in missing subdir by @lhoestq in #3935

🏗️ Internal

  • Remove conda workflow by @Wauplin in #3928
  • Replace huggingface-cli with hf in brew upgrade command by @hanouticelina in #3946
  • Fix version check message leaking into generated SKILL.md by @hanouticelina in #3949
  • Mention hf-mount in CLI skill by @hanouticelina in #3966
  • Use huggingface-hub-bot for post-release PR creation in release.yml by @Wauplin in #3967

Mar 20, 2026
[1.7.2] `hf` CLI skill improvements, `uv` extension installs & bug fixes

🛠️ CLI improvements

The hf CLI skill now fully expands subcommand groups and inlines all flags and options, making the CLI self-describing and easier for agents to discover.

  • Expand nested groups, inline flags & common options glossary in SKILL.md (#3941) by @hanouticelina
  • include common options inline (#3955) by @hanouticelina

hf extensions install now uses uv for Python extension installation when available, making installs faster:

> hyperfine "hf extensions install alvarobartt/hf-mem --force"
# Before
Benchmark 1: hf extensions install alvarobartt/hf-mem --force
  Time (mean ± σ):      3.490 s ±  0.220 s    [User: 1.925 s, System: 0.445 s]
  Range (min … max):    3.348 s …  4.097 s    10 runs

# After
Benchmark 1: hf extensions install alvarobartt/hf-mem --force
  Time (mean ± σ):     519.6 ms ± 119.7 ms    [User: 216.6 ms, System: 95.2 ms]
  Range (min … max):   371.6 ms … 655.2 ms    10 runs

  • Use uv python extension installation when available (#3957) by @hanouticelina
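Preferring uv when it is present can be sketched as follows (hypothetical, not the actual install code):

```python
import shutil

def extension_install_cmd(package: str) -> list[str]:
    # Hypothetical: use uv's pip interface when uv is on PATH, else stdlib pip.
    if shutil.which("uv"):
        return ["uv", "pip", "install", package]
    return ["python", "-m", "pip", "install", package]
```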

Other QoL improvements:

  • Add --format json to hf auth whoami (#3938) by @hanouticelina
  • Replace huggingface-cli with hf in brew upgrade command (#3946) by @hanouticelina

🐛 Bug & Typo fixes

  • Fix HfFileSystem glob in missing subdirectory (#3935) by @lhoestq
  • Fix: use module logger consistently and narrow bare except clauses (#3924) by @mango766
  • Fix typo in --every help text (#3950) by @julien-c

📚 Docs

  • Rename .env to .venv in virtual environment instructions (#3939) by @julien-c

🏗️ Internal

  • Remove conda workflow (#3928) by @Wauplin
  • Fix version check message leaking into generated SKILL.md (#3949) by @hanouticelina

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.7.1...v1.7.2

Mar 12, 2026
[v1.7.0] pip-installable CLI extensions and multiple QoL improvements

This release brings major improvements to the hf CLI with extension discoverability, unified list commands, and multiple QoL improvements in the CLI.

🎉 The Homebrew formula of the Hugging Face CLI has been renamed to hf. Existing users just need to run brew update - Homebrew handles the rename automatically. New users can install with brew install hf.

🧩 CLI Extensions: pip-installable packages and discoverability

The hf CLI extensions system gets a major upgrade in this release. Extensions can now be full Python packages (with a pyproject.toml) installed in isolated virtual environments, in addition to the existing shell script approach. This means extension authors can use Python dependencies without conflicting with the user's system. The install command auto-detects whether a GitHub repo is a script or a Python package and handles both transparently.

A new hf extensions search command lets users discover available extensions directly from the terminal by querying GitHub repositories tagged with the hf-extension topic. Results are sorted by stars and show whether each extension is already installed locally. Additionally, a comprehensive guide on how to build, publish, and make extensions discoverable has been added to the documentation.

# Install a Python-based extension
hf extensions install alvarobartt/hf-mem

# Discover available extensions
hf extensions search
NAME   REPO                    STARS DESCRIPTION                         INSTALLED
------ ----------------------- ----- ----------------------------------- ---------
claude hanouticelina/hf-claude     2 Extension for `hf` CLI to launch... yes
agents hanouticelina/hf-agents       HF extension to run local coding...
  • [CLI] Add pip installable repos support to hf extensions by @Wauplin in #3892
  • [CLI] Add hf extensions search command by @julien-c in #3905
  • [Docs] How to build a CLI extension guide by @Wauplin in #3908
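Auto-detecting script vs. Python package could be as simple as checking for a pyproject.toml — a hypothetical heuristic, not the verified implementation:

```python
from pathlib import Path

def extension_kind(repo_dir: str) -> str:
    # Hypothetical heuristic: a pyproject.toml marks a pip-installable
    # package; everything else is treated as a shell-script extension.
    if (Path(repo_dir) / "pyproject.toml").exists():
        return "python-package"
    return "shell-script"
```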

📚 Documentation: Create a CLI extension

🔐 hf auth login CLI update

A new --force flag lets you explicitly go through the full login flow again when needed, for example to switch tokens.

# Already logged in — returns immediately
hf auth login

# Force re-login to switch tokens
hf auth login --force
  • Default to skipping login if already logged in and add --force flag by @hanouticelina in #3920

📚 Documentation: CLI guide

📦 Xet optimizations and fixes

hf-xet has been bumped to v1.4.2 with some optimizations:

  • Avoid duplicate sha256 computation when uploading to a model/dataset repo
  • Skip sha256 computation when uploading to a bucket

This should greatly improve upload speed of large files.

The hf-xet bump also comes with a fix for deadlocks / stall on large file downloads.

See hf-xet release notes for more details.

  • feat: pass pre-computed SHA-256 to hf_xet upload by @XciD in #3876
  • feat: pass skip_sha256=True to hf_xet for bucket uploads by @Wauplin in #3900

🖥️ CLI QoL Improvements

  • Add num_parameters filtering to hf API and CLI by @evalstate in #3897 — docs
  • [CLI] Normalize all list/ls commands to use list | ls alias by @julien-c in #3901
  • [CLI] Add --format and --quiet options to hf papers ls by @julien-c in #3903
  • [CLI] Add hidden --json shorthand for --format json by @Wauplin in #3919
  • Allow 'hf skills add' default directory by @Wauplin in #3923

🔧 Other QoL Improvements

  • Add used_storage field to ModelInfo, DatasetInfo, and SpaceInfo by @julien-c in #3911
  • Make sure all expand attributes are official ModelInfo/DatasetInfo/SpaceInfo by @Wauplin in #3918

📖 Documentation

  • [Docs] Update some community CLI examples by @Wauplin in #3899
  • [Docs] Update Homebrew install command to hf by @julien-c in #3902

🐛 Bug and typo fixes

  • snapshot_download operation raises the generic exception even when actual error is different. by @pavankumarch470 in #3914

🏗️ Internal

  • [Internal] Don't trigger Skills sync workflow on release candidate by @hanouticelina in #3893
  • [Internal] Fix skills path in Skills sync workflow by @hanouticelina in #3894
  • [CI] All-in-one Github Action for releases by @Wauplin in #3916

Mar 6, 2026
[v1.6.0] New CLI commands, Bucket fsspec support, and more

This release brings significant new CLI commands for managing Spaces, Datasets, Discussions, and Webhooks, along with HfFileSystem support for Buckets and a CLI extension system.

🚀 New CLI commands

We've added several new CLI command groups to make interacting with the Hub even easier from your terminal.

New hf spaces dev-mode command

You can now enable or disable dev mode on Spaces directly from the CLI. When enabling dev mode, the command waits for the Space to be ready and prints connection instructions (web VSCode, SSH, local VSCode/Cursor). This makes iterating on Spaces much faster by allowing you to restart your application without stopping the Space container.

# Enable dev mode
hf spaces dev-mode username/my-space

# Disable dev mode
hf spaces dev-mode username/my-space --stop
  • Add hf spaces dev-mode command by @lhoestq in #3824

New hf discussions command group

You can now manage discussions and pull requests on the Hub directly from the CLI. This includes listing, viewing, creating, commenting on, closing, reopening, renaming, and merging discussions and PRs.

# List open discussions and PRs on a repo
hf discussions list username/my-model

# Create a new discussion
hf discussions create username/my-model --title "Feature request" --body "Description"

# Create a pull request
hf discussions create username/my-model --title "Fix bug" --pull-request

# Merge a pull request
hf discussions merge username/my-model 5 --yes
  • Add hf discussions command group by @Wauplin in #3855
  • Rename hf discussions view to hf discussions info by @Wauplin in #3878

New hf webhooks command group

Full CLI support for managing Hub webhooks is now available. You can list, inspect, create, update, enable/disable, and delete webhooks directly from the terminal.

# List all webhooks
hf webhooks ls

# Create a webhook
hf webhooks create --url https://example.com/hook --watch model:bert-base-uncased

# Enable / disable a webhook
hf webhooks enable webhook_id
hf webhooks disable webhook_id

# Delete a webhook
hf webhooks delete webhook_id
  • Add hf webhooks CLI commands by @omkar-334 in #3866

New hf datasets parquet and hf datasets sql commands

Two new commands make it easy to work with dataset parquet files. Use hf datasets parquet to discover parquet file URLs, then query them with hf datasets sql using DuckDB.

# List parquet URLs for a dataset
hf datasets parquet cfahlgren1/hub-stats
hf datasets parquet cfahlgren1/hub-stats --subset models --split train

# Run SQL queries on dataset parquet
hf datasets sql "SELECT COUNT(*) FROM read_parquet('https://huggingface.co/api/datasets/...')"
  • Add hf datasets parquet and hf datasets sql commands by @cfahlgren1 in #3833

New hf repos duplicate command

You can now duplicate any repository (model, dataset, or Space) using a unified command. This replaces the previous duplicate_space method with a more general solution.

# Duplicate a Space
hf repos duplicate multimodalart/dreambooth-training --type space

# Duplicate a dataset
hf repos duplicate openai/gdpval --type dataset
  • Add duplicate_repo method and hf repos duplicate command by @Wauplin in #3880

🪣 Bucket support in HfFileSystem

The HfFileSystem now supports buckets, providing S3-like object storage on Hugging Face. You can list, glob, download, stream, and upload files in buckets using the familiar fsspec interface.

from huggingface_hub import hffs

# List files in a bucket
hffs.ls("buckets/my-username/my-bucket/data")

# Read a remote file
with hffs.open("buckets/my-username/my-bucket/data/file.txt", "r") as f:
    content = f.read()

# Read file content as string
hffs.read_text("buckets/my-username/my-bucket/data/file.txt")
  • Add bucket API support in HfFileSystem by @lhoestq in #3807
  • Add docs on hf://buckets by @lhoestq in #3875
  • Remove bucket warning in docs by @Wauplin in #3854

📦 Extensions now support pip install

The hf extensions system now supports installing extensions as Python packages in addition to standalone executables. This makes it easier to distribute and install CLI extensions.

# Install an extension
> hf extensions install hanouticelina/hf-claude
> hf extensions install alvarobartt/hf-mem

# List them
> hf extensions list
COMMAND   SOURCE                  TYPE   INSTALLED  DESCRIPTION                        
--------- ----------------------- ------ ---------- -----------------------------------
hf claude hanouticelina/hf-claude binary 2026-03-06 Launch Claude Code with Hugging ...
hf mem    alvarobartt/hf-mem      python 2026-03-06 A CLI to estimate inference memo...

# Run extension
> hf claude --help
Usage: claude [options] [command] [prompt]

Claude Code - starts an interactive session by default, use -p/--print for non-interactive output
  • Add pip installable repos support to hf extensions by @Wauplin in #3892

Show installed extensions in hf --help

The CLI now shows installed extensions under an "Extension commands" section in the help output.

  • Show installed extensions in hf --help by @hanouticelina in #3884

Other QoL improvements

  • Add NVIDIA provider support to InferenceClient by @manojkilaru97 in #3886
  • Bump hf_xet minimal package version to >=1.3.2 for better throughput by @Wauplin in #3873
  • Fix CLI errors formatting to include repo_id, repo_type, bucket_id by @Wauplin in #3889

📚 Documentation updates

  • Fixed sub-headings for hf cache commands in the doc by @mostafatouny in #3877

🐛 Bug and typo fixes

  • Fix: quote uv args in bash -c to prevent shell redirection by @XciD in #3857
  • Fix typo in generated Skill by @hanouticelina in #3890
  • Fix ty diagnostics in upload, filesystem, and repocard helpers by @hanouticelina in #3891

💔 Breaking changes

  • Remove deprecated direction argument in list_models/datasets/spaces by @Wauplin in #3882

🏗️ Internal

  • Release note skill attempt by @Wauplin in #3853
  • Prepare for v1.6 by @Wauplin in #3860
  • Skip git clone test by @Wauplin in #3881
  • Add Sync hf CLI Skill workflow by @hanouticelina in #3885
  • [Release notes] doc diffs, better skill, concurrent fetching by @Wauplin in #3887
  • Propagate filtered headers to xet by @bpronan in #3858

Feb 26, 2026
[v1.5.0]: Buckets API, Agent-first CLI, Spaces Hot-Reload and more

This release introduces major new features including Buckets (xet-based large scale object storage), CLI Extensions, Space Hot-Reload, and significant improvements for AI coding agents. The CLI has been completely overhauled with centralized error handling, better help output, and new commands for collections, papers, and more.

🪣 Buckets: S3-like Object Storage on the Hub

Buckets provide S3-like object storage on Hugging Face, powered by the Xet storage backend. Unlike repositories (which are git-based and track file history), buckets are remote object storage containers designed for large-scale files with content-addressable deduplication. Use them for training checkpoints, logs, intermediate artifacts, or any large collection of files that doesn't need version control.

# Create a bucket
hf buckets create my-bucket --private

# Upload a directory
hf buckets sync ./data hf://buckets/username/my-bucket

# Download from bucket
hf buckets sync hf://buckets/username/my-bucket ./data

# List files
hf buckets list username/my-bucket -R --tree

The Buckets API includes full CLI and Python support for creating, listing, moving, and deleting buckets; uploading, downloading, and syncing files; and managing bucket contents with include/exclude patterns.
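The content-addressable deduplication mentioned above can be illustrated with a toy chunk store: files are split into chunks, each chunk is keyed by its hash, and identical chunks are stored only once. This is a conceptual sketch only; the real Xet backend uses content-defined chunking, not the fixed-size splitting shown here.

```python
import hashlib

class ChunkStore:
    """Toy content-addressable store: identical chunks are stored once."""

    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}  # hash -> chunk bytes

    def put(self, data: bytes) -> list[str]:
        """Split data into fixed-size chunks and return their hash keys."""
        keys = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            self.chunks[key] = chunk  # storing an identical chunk twice is a no-op
            keys.append(key)
        return keys

    def get(self, keys: list[str]) -> bytes:
        """Reassemble a file from its chunk keys."""
        return b"".join(self.chunks[k] for k in keys)

store = ChunkStore()
keys_a = store.put(b"AAAABBBBCCCC")
keys_b = store.put(b"AAAABBBBDDDD")  # shares two chunks with the first file
assert store.get(keys_a) == b"AAAABBBBCCCC"
assert len(store.chunks) == 4  # six chunks written, only four unique ones stored
```

Because identical chunks collapse to a single stored copy, re-uploading a slightly modified checkpoint only transfers and stores the chunks that actually changed.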

  • Buckets API and CLI by @Wauplin in #3673
  • Support bucket rename/move in API + CLI by @Wauplin in #3843
  • Add 'sync_bucket' to HfApi by @Wauplin in #3845
  • hf buckets file deletion by @Wauplin in #3849
  • Update message when no buckets found by @Wauplin in #3850
  • Buckets doc hf install by @julien-c in #3846

📚 Documentation: Buckets guide

🤖 AI Agent Support

This release includes several features designed to improve the experience for AI coding agents (Claude Code, OpenCode, Cursor, etc.):

  • Centralized CLI error handling: Clean user-facing messages without tracebacks (set HF_DEBUG=1 for full traces) by @hanouticelina in #3754
  • Token-efficient skill: The hf skills add command now installs a compact skill (~1.2k tokens vs ~12k before) by @hanouticelina in #3802
  • Agent-friendly hf jobs logs: Prints available logs and exits by default; use -f to stream by @davanstrien in #3783
  • Add AGENTS.md: Dev setup and codebase guide for AI agents by @Wauplin in #3789
# Install the hf-cli skill for Claude
hf skills add --claude

# Install for project-level
hf skills add --project
  • Add hf skills add CLI command by @julien-c in #3741
  • hf skills add installs to central location with symlinks by @hanouticelina in #3755
  • Add Cursor skills support by @NielsRogge in #3810

🔥 Space Hot-Reload (Experimental)

Hot-reload Python files in a Space without a full rebuild and restart. This is useful for rapid iteration on Gradio apps.

# Open an interactive editor to modify a remote file
hf spaces hot-reload username/repo-name app.py

# Take local version and patch remote
hf spaces hot-reload username/repo-name -f app.py
  • feat(spaces): hot-reload by @cbensimon in #3776
  • fix hot reload reference part.2 by @cbensimon in #3820

🖥️ CLI Improvements

New Commands

  • Add hf papers ls to list daily papers on the Hub by @julien-c in #3723
  • Add hf collections commands (ls, info, create, update, delete, add-item, update-item, delete-item) by @Wauplin in #3767

CLI Extensions

This release introduces an extension mechanism for the hf CLI. Extensions are standalone executables hosted in GitHub repositories that users can install, run, and remove with simple commands, inspired by GitHub CLI's gh extension system.

# Install an extension (defaults to huggingface org)
hf extensions install hf-claude

# Install from any GitHub owner
hf extensions install hanouticelina/hf-claude

# Run an extension
hf claude

# List installed extensions
hf extensions list
  • Add hf extension by @hanouticelina in #3805
  • Add hf ext alias by @hanouticelina in #3836
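In the gh-style model the above is inspired by, an unknown subcommand like hf claude is resolved by looking for an executable named after the extension in a dedicated directory. The sketch below illustrates that dispatch pattern; it is hypothetical and not the actual hf implementation.

```python
import os
import stat
import tempfile
from typing import Optional

def find_extension(name: str, ext_dir: str) -> Optional[str]:
    """Resolve `hf <name>` to an executable hf-<name> inside its own
    directory under ext_dir (gh-style layout). Return its path, or None."""
    candidate = os.path.join(ext_dir, f"hf-{name}", f"hf-{name}")
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        return candidate
    return None

with tempfile.TemporaryDirectory() as d:
    ext = os.path.join(d, "hf-claude")
    os.makedirs(ext)
    binpath = os.path.join(ext, "hf-claude")
    with open(binpath, "w") as f:
        f.write("#!/bin/sh\necho claude extension\n")
    os.chmod(binpath, os.stat(binpath).st_mode | stat.S_IXUSR)
    assert find_extension("claude", d) == binpath
    assert find_extension("missing", d) is None
```

Dispatching through the filesystem like this is what lets extensions be written in any language: the host CLI only needs to exec the binary.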

Output Format Options

  • Add --format {table,json} and -q/--quiet to hf models ls, hf datasets ls, hf spaces ls, hf endpoints ls by @hanouticelina in #3735
  • Align hf jobs ps output with standard CLI pattern by @davanstrien in #3799
  • Dynamic table columns based on --expand field by @hanouticelina in #3760

Usability

  • Improve hf CLI help output with examples and documentation links by @hanouticelina in #3743
  • Add -h as short alias for --help by @assafvayner in #3800
  • Add hidden --version flag by @Wauplin in #3784
  • Add --type as alias for --repo-type by @Wauplin in #3835
  • Better handling of aliases in documentation by @Wauplin in #3840
  • Print first example only in group command --help by @Wauplin in #3841
  • Subfolder download: hf download repo_id subfolder/ now works as expected by @Wauplin in #3822

Jobs CLI

List available hardware:

> hf jobs hardware
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR       COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ----------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A               $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A               $0.0005  $0.03     
cpu-performance CPU Performance        32 vCPU  256 GB  N/A               $0.3117  $18.70    
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A               $0.0167  $1.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)     $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)     $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)   $0.0167  $1.00  
...

Also added a ton of fixes and small QoL improvements.

  • Support multi GPU training commands (torchrun, accelerate launch) by @lhoestq in #3674
  • Pass local script and config files to job by @lhoestq in #3724
  • List available hardware with hf jobs hardware by @Wauplin in #3693
  • Better jobs filtering in CLI: labels and negation (!=) by @lhoestq in #3742
  • Accept namespace/job_id format in jobs CLI commands by @davanstrien in #3811
  • Pass namespace parameter to fetch job logs by @Praful932 in #3736
  • Add more error handling output to hf jobs cli commands by @davanstrien in #3744
  • Fix hf jobs commands crashing without a TTY by @davanstrien in #3782

🤖 Inference

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
  • feat: zai-org provider supports text to image by @tomsun28 in #3675
  • Fix fal image urls payload by @hanouticelina in #3746
  • Fix Replicate image-to-image compatibility with different model schemas by @hanouticelina in #3749
  • Accelerator parameter support for inference endpoints by @Wauplin in #3817

🔧 Other QoL Improvements

  • Support setting Label in Jobs API by @Wauplin in #3719
  • Document built-in environment variables in Jobs docs (JOB_ID, ACCELERATOR, CPU_CORES, MEMORY) by @Wauplin in #3834
  • Fix ReadTimeout crash in no-follow job logs by @davanstrien in #3793
  • Add evaluation results module (EvalResultEntry, parse_eval_result_entries) by @hanouticelina in #3633
  • Add source org field to EvalResultEntry by @hanouticelina in #3694
  • Add limit param to list_papers API method by @Wauplin in #3697
  • Add num_papers field to Organization class by @cfahlgren1 in #3695
  • Update MAX_FILE_SIZE_GB from 50 to 200 by @davanstrien in #3696
  • List datasets benchmark alias (benchmark=True → benchmark="official") by @Wauplin in #3734
  • Add notes field to EvalResultEntry by @Wauplin in #3738
  • Make task_id required in EvalResultEntry by @Wauplin in #3718
  • Repo commit count warning for upload_large_folder by @Wauplin in #3698
  • Replace deprecated is_enterprise boolean by plan string in org info by @Wauplin in #3753
  • Update hardware list in SpaceHardware enum by @lhoestq in #3756
  • Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751
  • No timeout by default when using httpx by @Wauplin in #3790
  • Log 'x-amz-cf-id' on http error (if no request id) by @Wauplin in #3759
  • Parse xet hash from tree listing by @seanses in #3780
  • Require filelock>=3.10.0 for mode= parameter support by @Wauplin in #3785
  • Add overload decorators to HfApi.snapshot_download for dry_run typing by @Wauplin in #3788
  • Dataclass doesn't call original __init__ by @zucchini-nlp in #3818
  • Strict dataclass sequence validation by @Wauplin in #3819
  • Check if dataclass.repr=True before wrapping by @zucchini-nlp in #3823

💔 Breaking Changes

  • hf jobs ps no longer supports the old Go-template --format '{{.id}}' syntax. Use -q for IDs or --format json | jq for custom extraction by @davanstrien in #3799
  • Migrate to hf repos instead of hf repo (old command still works but shows deprecation warning) by @Wauplin in #3848
  • Migrate hf repo-files delete to hf repo delete-files (old command hidden from help, shows deprecation warning) by @Wauplin in #3821

🐛 Bug and typo fixes

  • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by @leq6c in #3685
  • Fix endpoint not forwarded in CommitUrl by @Wauplin in #3679
  • Fix HfFileSystem.resolve_path() with special char @ by @lhoestq in #3704
  • Fix cache verify incorrectly reporting folders as missing files by @Mitix-EPI in #3707
  • Fix multi user cache lock permissions by @hanouticelina in #3714
  • Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by @tomaarsen in #3737
  • Filter datasets by benchmark:official by @Wauplin in #3761
  • Fix file corruption when server ignores Range header on download retry by @XciD in #3778
  • Fix Xet token invalid on repo recreation by @Wauplin in #3847
  • Correct typo 'occured' to 'occurred' by @thecaptain789 in #3787
  • Fix typo in CLI error handling by @hanouticelina in #3757

📖 Documentation

  • Add link to Hub Jobs documentation by @gary149 in #3712
  • Update HTTP backend configuration link to main branch by @IliasAarab in #3713
  • Update CLI help output in docs to include new commands by @julien-c in #3722
  • Wildcard pattern documentation by @hanouticelina in #3710
  • Deprecate hf_transfer references in Korean and German translations by @davanstrien in #3804
  • Use SPDX license identifier 'Apache-2.0' by @yesudeep in #3814
  • Correct img tag style in README.md by @sadnesslovefreedom-debug in #3689

🏗️ Internal

  • Change external dependency from typer-slim to typer by @svlandeg in #3797
  • Remove shellingham from the required dependencies by @hanouticelina in #3798
  • Ignore unused-ignore-comment warnings in ty for mypy compatibility by @hanouticelina in #3691
  • Remove new unused-type-ignore-comment warning from ty by @hanouticelina in #3803
  • Fix curlify when debug logging is enabled for streaming requests by @hanouticelina in #3692
  • Remove canonical dataset test case by @hanouticelina in #3740
  • Remove broad exception handling from CLI job commands by @hanouticelina in #3748
  • CI windows permission error by @Wauplin in #3700
  • Upgrade GitHub Actions to latest versions by @salmanmkc in #3729
  • Stabilize lockfile test in file_download tests by @hanouticelina in #3815
  • Fix ty invalid assignment in CollectionItem by @hanouticelina in #3831
  • Use inference_provider instead of inference in tests by @hanouticelina in #3826
  • Fix tqdm windows test failure by @Wauplin in #3844
  • Add test for check if dataclass.repr=True before wrapping by @Wauplin in #3852
  • Prepare for v1.5 by @Wauplin in #3781
Feb 6, 2026
[v0.36.2] Fix file corruption when server ignores Range header on download retry

Fix file corruption when server ignores Range header on download retry. Full details in https://github.com/huggingface/huggingface_hub/pull/3778 by @XciD.
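The bug class here is worth spelling out: a resume request sends Range: bytes=<offset>-, but a server that ignores the header answers 200 OK with the full body; blindly appending that body to the partial file duplicates the already-downloaded prefix. A defensive download loop checks the status code before appending (conceptual sketch, not the library's actual code):

```python
def append_or_restart(partial: bytearray, status: int, body: bytes) -> bytearray:
    """Handle a retry response: 206 means the server honored Range and sent
    only the missing suffix; 200 means it ignored Range and sent the whole
    file, so the partial data must be replaced, never appended to."""
    if status == 206:  # Partial Content: safe to append
        partial.extend(body)
        return partial
    if status == 200:  # full body: restart from scratch
        return bytearray(body)
    raise RuntimeError(f"unexpected status {status}")

full = b"0123456789"
resumed = append_or_restart(bytearray(full[:4]), 206, full[4:])  # compliant server
assert bytes(resumed) == full
restarted = append_or_restart(bytearray(full[:4]), 200, full)  # Range ignored
assert bytes(restarted) == full  # no duplicated prefix
```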

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.36.1...v0.36.2

[v1.4.1] Fix file corruption when server ignores Range header on download retry

Fix file corruption when server ignores Range header on download retry. Full details in https://github.com/huggingface/huggingface_hub/pull/3778 by @XciD.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.4.0...v1.4.1

Feb 3, 2026
[v1.4.0] Building the HF CLI for You and your AI Agents

🧠 hf skills add CLI Command

A new hf skills add command installs the hf-cli skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI Agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.

> hf skills add --help
Usage: hf skills add [OPTIONS]

  Download a skill and install it for an AI assistant.

Options:
  --claude      Install for Claude.
  --codex       Install for Codex.
  --opencode    Install for OpenCode.
  -g, --global  Install globally (user-level) instead of in the current
                project directory.
  --dest PATH   Install into a custom destination (path to skills directory).
  --force       Overwrite existing skills in the destination.
  --help        Show this message and exit.

Examples
  $ hf skills add --claude
  $ hf skills add --claude --global
  $ hf skills add --codex --opencode

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

The skill is composed of two files fetched from the huggingface_hub docs: a CLI guide (SKILL.md) and the full CLI reference (references/cli.md). Files are installed to a central .agents/skills/hf-cli/ directory, and relative symlinks are created from agent-specific directories (e.g., .claude/skills/hf-cli → ../../.agents/skills/hf-cli/). This ensures a single source of truth when installing for multiple agents.

  • Add hf skills add CLI command by @julien-c in #3741
  • [CLI] hf skills add installs hf-cli skill to central location with symlinks by @hanouticelina in #3755
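The central-install-plus-relative-symlink layout described above can be sketched with os.path.relpath and os.symlink. The paths mirror those in the release notes, but the helper itself is illustrative, not the installer's actual code:

```python
import os
import tempfile

def link_skill(project: str, agent_dir: str) -> str:
    """Install the skill once under .agents/skills/hf-cli and expose it to an
    agent via a *relative* symlink, so the project stays relocatable."""
    central = os.path.join(project, ".agents", "skills", "hf-cli")
    os.makedirs(central, exist_ok=True)
    skills_dir = os.path.join(project, agent_dir, "skills")
    os.makedirs(skills_dir, exist_ok=True)
    link = os.path.join(skills_dir, "hf-cli")
    os.symlink(os.path.relpath(central, start=skills_dir), link)
    return link

with tempfile.TemporaryDirectory() as project:
    link = link_skill(project, ".claude")
    # The link target is relative: ../../.agents/skills/hf-cli
    assert os.readlink(link) == os.path.join("..", "..", ".agents", "skills", "hf-cli")
    # A file written to the central copy is visible through the agent's link.
    with open(os.path.join(project, ".agents", "skills", "hf-cli", "SKILL.md"), "w") as f:
        f.write("# hf-cli skill\n")
    assert os.path.isfile(os.path.join(link, "SKILL.md"))
```

Relative (rather than absolute) link targets are what keep the setup working if the whole project directory is moved or checked out elsewhere.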

🖥️ Improved CLI Help Output

The CLI help output has been reorganized to be more informative and agent-friendly:

  • Commands are now grouped into Main commands and Help commands
  • Examples section showing common usage patterns
  • Learn more section with links to documentation
> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...

  Manage local cache directory.

Options:
  --help  Show this message and exit.

Main commands:
  ls      List cached repositories or revisions.
  prune   Remove detached revisions from the cache.
  rm      Remove cached repositories or revisions.
  verify  Verify checksums for a single repo revision from cache or a local
          directory.

Examples
  $ hf cache ls
  $ hf cache ls --revisions
  $ hf cache ls --filter "size>1GB" --limit 20
  $ hf cache ls --format json
  $ hf cache prune
  $ hf cache prune --dry-run
  $ hf cache rm model/gpt2
  $ hf cache rm <revision_hash>
  $ hf cache rm model/gpt2 --dry-run
  $ hf cache rm model/gpt2 --yes
  $ hf cache verify gpt2
  $ hf cache verify gpt2 --revision refs/pr/1
  $ hf cache verify my-dataset --repo-type dataset

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli
  • [CLI] improve hf CLI help output by @hanouticelina in #3743

📊 Evaluation Results Module

The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in .eval_results/*.yaml files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.

We added helpers in huggingface_hub to work with this format:

  • EvalResultEntry dataclass representing evaluation scores
  • eval_result_entries_to_yaml() to serialize entries to YAML format
  • parse_eval_result_entries() to parse YAML data back into EvalResultEntry objects
import yaml
from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file

entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)
  • Add evaluation results module by @hanouticelina in #3633
  • Eval results synchronization by @Wauplin in #3718
  • Eval results notes by @Wauplin in #3738

🖥️ Other CLI Improvements

New hf papers ls command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.

hf papers ls                       # List most recent daily papers
hf papers ls --sort=trending       # List trending papers
hf papers ls --date=2025-01-23     # List papers from a specific date
hf papers ls --date=today          # List today's papers
  • Add hf papers ls CLI command by @julien-c in #3723

New hf collections commands for managing collections from the CLI:

# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending

# Create a collection
hf collections create "My Models" --description "Favorites" --private

# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"

# Get info
hf collections info user/my-coll

# Delete
hf collections delete user/my-coll
  • [CLI] Add hf collections commands by @Wauplin in #3767

Other CLI-related improvements:

  • [CLI] output format option for ls CLIs by @hanouticelina in #3735
  • [CLI] Dynamic table columns based on --expand field by @hanouticelina in #3760
  • [CLI] Adds centralized error handling by @hanouticelina in #3754
  • [CLI] exception handling scope by @hanouticelina in #3748
  • Update CLI help output in docs to include new commands by @julien-c in #3722

📊 Jobs

Multi-GPU training commands are now supported with torchrun and accelerate launch:

> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py

You can also pass local config files alongside your scripts:

> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch torchrun script.py config.yml

New hf jobs hardware command to list available hardware options:

> hf jobs hardware
NAME         PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
------------ ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic    CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade  CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
t4-small     Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium    Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small   Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large   Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2 2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4 4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large   Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4       4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8       8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1         1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4         4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1       1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4       4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8       8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50  

Better filtering with label support and negation:

> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B
  • [Jobs] Support multi gpu training commands by @lhoestq in #3674
  • [Jobs] List available hardware by @Wauplin in #3693
  • [Jobs] Better jobs filtering in CLI: labels and negation by @lhoestq in #3742
  • Pass local script and config files to job by @lhoestq in #3724
  • Support setting Label in Jobs API by @Wauplin in #3719
  • Pass namespace parameter to fetch job logs in jobs CLI by @Praful932 in #3736
  • Add more error handling output to hf jobs cli commands by @davanstrien in #3744
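The filter grammar shown above (key=value, key!=value, and nested label=key=value) splits each expression on its first operator, so values may themselves contain '='. An illustrative parser for that shape (not the CLI's actual implementation):

```python
def parse_filter(expr: str) -> tuple:
    """Split a --filter expression into (key, op, value), where op is
    '=' or '!='. Only the first operator is consumed, so values such as
    'model=Qwen3-06B' survive intact inside label filters."""
    neq = expr.find("!=")
    eq = expr.find("=")
    if neq != -1 and neq <= eq:  # '!=' appears before any plain '='
        return expr[:neq], "!=", expr[neq + 2:]
    if eq == -1:
        raise ValueError(f"no operator in filter: {expr!r}")
    return expr[:eq], "=", expr[eq + 1:]

assert parse_filter("status!=error") == ("status", "!=", "error")
assert parse_filter("label=fine-tuning") == ("label", "=", "fine-tuning")
assert parse_filter("label=model=Qwen3-06B") == ("label", "=", "model=Qwen3-06B")
```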

⚡️ Inference

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
  • feat: zai-org provider supports text to image by @tomsun28 in #3675
  • [Inference Providers] fix fal image urls payload by @hanouticelina in #3746
  • Fix Replicate image-to-image compatibility with different model schemas by @hanouticelina in #3749

🔧 QoL Improvements

  • add source org field by @hanouticelina in #3694
  • add num_papers field to Organization class by @cfahlgren1 in #3695
  • Add limit param to list_papers API method by @Wauplin in #3697
  • Repo commit count warning by @Wauplin in #3698
  • List datasets benchmark alias by @Wauplin in #3734
  • List repo files repoType by @Wauplin in #3753
  • Update hardware list in SpaceHardware enum by @lhoestq in #3756
  • Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751
  • Default _endpoint to None in CommitInfo by @tomaarsen in #3737
  • Update MAX_FILE_SIZE_GB from 50 to 200 to match hub-docs PR #2169 by @davanstrien in #3696
  • Pass kwargs to post init in dataclasses by @zucchini-nlp in #3771
  • Add retry/backoff when fetching Xet connection info to handle 502 errors by @aabhathanki in #3768

📖 Documentation

  • Wildcard pattern documentation by @hanouticelina in #3710
  • Add link to Hub Jobs documentation by @gary149 in #3712
  • Update HTTP backend configuration link to main branch by @IliasAarab in #3713
  • Correct img tag style in README.md by @sadnesslovefreedom-debug in #3689

🐛 Bug and typo fixes

  • Fix endpoint not forwarded in CommitUrl by @Wauplin in #3679
  • fix curlify with streaming request by @hanouticelina in #3692
  • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by @leq6c in #3685
  • fix resolve_path() with special char @ by @lhoestq in #3704
  • Fix cache verify incorrectly reporting folders as missing files by @Mitix-EPI in #3707
  • Fix multi user cache lock permissions by @hanouticelina in #3714
  • [CLI] Fix typo in CLI error handling by @hanouticelina in #3757
  • Log 'x-amz-cf-id' on http error (if no request id) by @Wauplin in #3759
  • [Fix] Filter datasets by benchmark official by @Wauplin in #3761

🏗️ Internal

  • Ignore unused-ignore-comment warnings in ty for mypy compatibility by @hanouticelina in #3691
  • Skip sync test on Windows Python 3.14 by @Wauplin in #3700
  • Upgrade GitHub Actions to latest versions by @salmanmkc in #3729
  • Remove canonical dataset test case from test_access_repositories_lists by @hanouticelina in #3740
  • Fix style issues in CI by @Wauplin in #3773

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @tomsun28
    • feat: zai-org provider supports text to image (#3675)
  • @leq6c
    • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile (#3685)
  • @Mitix-EPI
    • Fix cache verify incorrectly reporting folders as missing files (#3707)
  • @Praful932
    • Pass namespace parameter to fetch job logs in jobs CLI (#3736)
  • @aabhathanki
    • Add retry/backoff when fetching Xet connection info to handle 502 errors (#3768)
Feb 2, 2026
[v1.3.7] Log 'x-amz-cf-id' on http error if no request id
Jan 29, 2026
[v1.3.5] Configurable default timeout for HTTP calls
  • Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751

The default timeout is 10s, which is fine in most use cases but can trigger errors in CI environments that make many requests to the Hub. In those cases, set HF_HUB_DOWNLOAD_TIMEOUT=60 as an environment variable.
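The resolution logic for such an environment-driven default can be sketched as follows. The env var name and the 10-second default come from the note above; the helper itself is hypothetical, not the library's code:

```python
import os

DEFAULT_TIMEOUT = 10.0  # seconds, per the release notes

def resolve_timeout() -> float:
    """Read HF_HUB_DOWNLOAD_TIMEOUT from the environment, falling back
    to the 10-second default when unset or unparseable."""
    raw = os.environ.get("HF_HUB_DOWNLOAD_TIMEOUT")
    if raw is None:
        return DEFAULT_TIMEOUT
    try:
        return float(raw)
    except ValueError:
        return DEFAULT_TIMEOUT

os.environ.pop("HF_HUB_DOWNLOAD_TIMEOUT", None)
assert resolve_timeout() == 10.0
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"  # the CI workaround
assert resolve_timeout() == 60.0
```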

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.4...v1.3.5

Jan 26, 2026
[v1.3.4] Fix `CommitUrl._endpoint` default to None

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.3...v1.3.4

Jan 22, 2026
[v1.3.3] List Jobs Hardware & Bug Fixes

⚙️ List Jobs Hardware

You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.

From the CLI:

➜ hf jobs hardware                           
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
--------------- ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
cpu-performance CPU Performance        8 vCPU   32 GB   N/A              $0.0000  $0.00     
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A              $0.0000  $0.00     
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large      Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2    2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4    4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large      Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4          4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8          8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1            1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4            4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1          1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4          4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50 

Programmatically:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'
  • [Jobs] List available hardware in #3693 by @Wauplin
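The JobHardware cost fields line up with the table: unit_cost_micro_usd is micro-USD per unit (a minute here, per unit_label), so the per-minute and per-hour figures are a straightforward conversion. A quick arithmetic check (not library code; the a100x8 micro value is inferred from its $0.3333/min table entry):

```python
def micro_usd_per_min_to_rates(micro: int) -> tuple:
    """Convert micro-USD per minute into (USD/min, USD/hour)."""
    per_min = micro / 1_000_000
    return per_min, per_min * 60

# cpu-basic: unit_cost_micro_usd=167 -> ~$0.0002/min, ~$0.01/hour (matches the table)
per_min, per_hour = micro_usd_per_min_to_rates(167)
assert round(per_min, 4) == 0.0002
assert round(per_hour, 2) == 0.01

# a100x8: the table lists $0.3333/min and $20.00/hour, i.e. ~333_333 micro-USD/min
per_min, per_hour = micro_usd_per_min_to_rates(333_333)
assert round(per_min, 4) == 0.3333
assert round(per_hour, 0) == 20.0
```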

🐛 Bug Fixes

  • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile in #3685 by @leq6c
  • Fix verify incorrectly reporting folders as missing files in #3707 by @Mitix-EPI
  • Fix resolve_path() with special char @ in #3704 by @lhoestq
  • Fix curlify with streaming request in #3692 by @hanouticelina

✨ Various Improvements

  • Add num_papers field to Organization class in #3695 by @cfahlgren1
  • Add limit param to list_papers API method in #3697 by @Wauplin
  • Add repo commit count warning when exceeding recommended limits in #3698 by @Wauplin
  • Update MAX_FILE_SIZE_GB from 50 to 200 GB in #3696 by @davanstrien

📚 Documentation

  • Wildcard pattern documentation in #3710 by @hanouticelina
Jan 14, 2026
[v1.3.2] Zai provider support for `text-to-image` and fix custom endpoint not forwarded
  • Fix endpoint not forwarded in CommitUrl #3679 by @Wauplin
  • feat: zai-org provider supports text to image #3675 by @tomsun28

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.1...v1.3.2

Latest: v1.11.0 · Tracking since: Dec 20, 2023 · Last fetched: Apr 19, 2026