Discover Spaces using natural language. The new search_spaces() API and hf spaces search CLI use embedding-based semantic search to find relevant Spaces based on what they do - not just keyword matching on their name.
>>> from huggingface_hub import search_spaces
>>> results = search_spaces("remove background from photo")
>>> for space in results:
...     print(f"{space.id} (score: {space.score:.2f})")
briaai/BRIA-RMBG-1.4 (score: 0.87)
The same capability is available in the CLI:
$ hf spaces search "remove background from photo" --limit 3
ID TITLE SDK LIKES STAGE CATEGORY SCORE
---------------------------- --------------------- ------ ----- ------- ------------------ -----
not-lain/background-removal Background Removal gradio 2794 RUNNING Image Editing 0.85
briaai/BRIA-RMBG-2.0 BRIA RMBG 2.0 gradio 918 RUNNING Background Removal 0.84
Xenova/remove-background-web Remove Background Web static 739 RUNNING Background Removal 0.81
Hint: Use --description to show AI-generated descriptions.
# Filter by SDK, get JSON with descriptions
$ hf spaces search "chatbot" --sdk gradio --description --json --limit 1 | jq
[
{
"id": "BarBar288/Chatbot",
"title": "Chatbot",
"sdk": "gradio",
"likes": 4,
"stage": "RUNNING",
"category": "Other",
"score": 0.5,
"description": "Perform various AI tasks like chat, image generation, and text-to-speech"
}
]
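Embedding-based ranking of this kind can be illustrated with a toy sketch. The snippet below is purely illustrative (toy vectors, not the Hub's actual implementation): items and the query are embedded as vectors, and results are ranked by cosine similarity rather than keyword overlap.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: in practice these come from a sentence-embedding model.
spaces = {
    "not-lain/background-removal": [0.9, 0.1, 0.0],
    "some-user/chatbot-demo": [0.1, 0.9, 0.1],
}
query = [0.85, 0.15, 0.05]  # toy embedding of "remove background from photo"

ranked = sorted(
    spaces.items(),
    key=lambda item: cosine_similarity(query, item[1]),
    reverse=True,
)
print(ranked[0][0])  # the background-removal Space ranks first
```

Because the comparison happens in embedding space, a Space can match a query that shares no words with its name.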
- hf spaces command with semantic search by @Wauplin in #4094
When a Space fails to build or crashes at runtime, you can now retrieve the logs programmatically — no need to open the browser. This is particularly useful for agentic workflows that need to debug Space failures autonomously.
>>> from huggingface_hub import fetch_space_logs
# Run logs (default)
>>> for line in fetch_space_logs("username/my-space"):
...     print(line, end="")
# Build logs — for BUILD_ERROR debugging
>>> for line in fetch_space_logs("username/my-space", build=True):
...     print(line, end="")
# Stream in real time
>>> for line in fetch_space_logs("username/my-space", follow=True):
...     print(line, end="")
The CLI equivalent:
$ hf spaces logs username/my-space # run logs
$ hf spaces logs username/my-space --build # build logs
$ hf spaces logs username/my-space -f # stream in real time
$ hf spaces logs username/my-space -n 50 # last 50 lines
- fetch_space_logs + hf spaces logs command by @davanstrien in #4091
This release continues the CLI output migration started in v1.9, bringing 11 more command groups to the unified --format flag. The old --quiet flags on migrated commands are replaced by --format quiet.
$ hf cache ls # auto-detect (human or agent)
$ hf cache ls --format json # structured JSON
$ hf cache ls --format quiet # minimal output, great for piping
$ hf upload my-model . . # auto-detect (human or agent)
Confirmation prompts (e.g., hf cache rm, hf repos delete, hf buckets delete) are now mode-aware: they prompt in human mode, and require --yes in agent/json/quiet modes - no more hanging scripts.
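The prompt-or-require---yes behavior can be sketched in a few lines. This is an illustrative simplification, not huggingface_hub's actual code: interactive prompting is reserved for human mode on a TTY, and non-interactive modes fail fast instead of blocking on stdin.

```python
import sys

def confirm(message: str, mode: str, yes_flag: bool) -> bool:
    """Return True if the destructive action may proceed.

    In human mode, ask interactively; in agent/json/quiet modes,
    never block on stdin -- require an explicit --yes instead.
    """
    if yes_flag:
        return True
    if mode == "human" and sys.stdin.isatty():
        return input(f"{message} [y/N] ").strip().lower() == "y"
    # Non-interactive mode without --yes: refuse rather than hang.
    raise SystemExit(f"Refusing to proceed: pass --yes to confirm '{message}'.")

# An agent-mode call with --yes proceeds without prompting.
print(confirm("Delete cached repo model/gpt2?", mode="agent", yes_flag=True))
```

Failing loudly in agent mode is what prevents the "hanging scripts" the release notes mention: an agent sees an actionable error instead of a prompt it cannot answer.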
Commands migrated in this release: collections, discussions, extensions, endpoints, webhooks, cache, repos, repo-files, download, upload, and upload-large-folder. Remaining commands (jobs, buckets, auth login/logout) will follow in a future release.
- collections, discussions, extensions, endpoints and webhooks to out singleton by @hanouticelina in #4057
- hf cache to out singleton by @hanouticelina in #4070
- out.confirm() and migrate all confirmation prompts by @hanouticelina in #4083
- repos and repo-files to out singleton + add confirmation to hf repos delete by @hanouticelina in #4097
- download, upload, upload-large-folder to out singleton by @hanouticelina in #4100
A new hf spaces volumes command group lets you manage volumes mounted in Spaces directly from the command line — list, set, and delete using the familiar -v/--volume syntax.
# List mounted volumes
$ hf spaces volumes ls username/my-space
TYPE SOURCE MOUNT_PATH READ_ONLY
------- --------------------- ---------- ---------
model gpt2 /data ✔
dataset badlogicgames/pi-mono /data2 ✔
# Set volumes
$ hf spaces volumes set username/my-space -v hf://buckets/username/my-bucket:/data
$ hf spaces volumes set username/my-space -v hf://models/username/my-model:/models
# Delete all volumes
$ hf spaces volumes delete username/my-space
- hf spaces volumes commands by @Wauplin in #4109
hf auth token - Prints the current token to stdout, handy for piping into other commands:
$ hf auth token
hf_xxxx
Hint: Run `hf auth whoami` to see which account this token belongs to.
# Use it in a curl call
$ hf auth token | xargs -I {} curl -H "Authorization: Bearer {}" https://huggingface.co/api/whoami-v2
- hf auth token command by @Wauplin in #4104
model_name deprecated in list_models - Use search instead. Both were always equivalent (both map to ?search=... in the API), but now model_name emits a deprecation warning. Removal is planned for 2.0.
# Before
>>> list_models(model_name="gemma")
# After
>>> list_models(search="gemma")
The CLI is not affected - hf models ls already uses --search.
- model_name in favor of search in list_models by @Wauplin in #4112
- list_liked_repos by @Wauplin in #4078
- cp -r nesting semantics in copy_files by @Wauplin in #4081
- .gitattributes when copying repo files to a bucket by @Wauplin in #4082
- hf_raise_for_status causing delayed object destruction by @Wauplin in #4092
- repo delete tests missing --yes flag by @hanouticelina in #4101
- -v/--volume accepts multiple volumes by @davanstrien in #4113
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.10.1...v1.10.2
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.10.0...v1.10.1
This release introduces server-side file copy operations that let you move data between Hugging Face storage without downloading and re-uploading. You can now copy files from one Bucket to another, from a repository (model, dataset, or Space) to a Bucket, or between Buckets — all without bandwidth costs. Files tracked with Xet are copied directly by hash (no data transfer), while small text files not tracked with Xet are automatically downloaded and re-uploaded.
>>> from huggingface_hub import copy_files
# Bucket to bucket (same or different bucket)
>>> copy_files(
... "hf://buckets/username/source-bucket/checkpoints/model.safetensors",
... "hf://buckets/username/destination-bucket/archive/model.safetensors",
... )
# Repo to bucket
>>> copy_files(
... "hf://datasets/username/my-dataset/processed/",
... "hf://buckets/username/my-bucket/datasets/processed/",
... )
The same capability is available in the CLI:
# Bucket to bucket
$ hf buckets cp hf://buckets/username/source-bucket/logs/ hf://buckets/username/archives/logs/
# Repo to bucket
$ hf buckets cp hf://datasets/username/my-dataset/data/train/ hf://buckets/username/my-bucket/datasets/train/
Note that copying files from a Bucket to a Repository is not yet supported.
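The "copied directly by hash" behavior can be illustrated with a toy content-addressable store. This is a sketch of the principle only (not the Xet protocol): when the destination already holds a blob with the same hash, only metadata is written and no bytes move.

```python
import hashlib

class ToyStore:
    """A minimal content-addressable store: blobs are keyed by their hash."""
    def __init__(self):
        self.blobs = {}   # hash -> bytes
        self.paths = {}   # path -> hash

    def put(self, path: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blobs[digest] = data
        self.paths[path] = digest
        return digest

def copy(src: ToyStore, dst: ToyStore, src_path: str, dst_path: str) -> bool:
    """Copy a file between stores. Returns True if bytes were transferred."""
    digest = src.paths[src_path]
    if digest in dst.blobs:
        dst.paths[dst_path] = digest  # dedup hit: only metadata is written
        return False
    dst.put(dst_path, src.blobs[digest])  # dedup miss: transfer the bytes
    return True

a, b = ToyStore(), ToyStore()
a.put("checkpoints/model.safetensors", b"weights...")
print(copy(a, b, "checkpoints/model.safetensors", "archive/model.safetensors"))  # True: bytes move
print(copy(a, b, "checkpoints/model.safetensors", "backup/model.safetensors"))   # False: hash known
```

This is why the release notes can promise "no bandwidth costs" for Xet-tracked files: the server already knows the content by hash, so a copy is a metadata operation.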
📚 Documentation: Buckets guide
- HfApi.copy_files method to copy files remotely and update 'hf buckets cp' by @Wauplin in #3874
[!TIP] For building, publishing, and using kernel repos, please use the dedicated kernels package.
The Hub now supports a new kernel repository type for hosting compute kernels. This release adds first-class (but explicitly limited) support for interacting with kernel repos via the Python API. Only a subset of methods are officially supported: kernel_info, hf_hub_download, snapshot_download, list_repo_refs, list_repo_files, and list_repo_tree. Creation and deletion are also supported but restricted to a small subset of allowed users and organizations on the Hub.
>>> from huggingface_hub import kernel_info
>>> kernel_info("kernels-community/yoso")
KernelInfo(id='kernels-community/yoso', author='kernels-community', downloads=0, gated=False, last_modified=datetime.datetime(2026, 4, 3, 22, 27, 25, tzinfo=datetime.timezone.utc), likes=0, private=False)
📚 Documentation: Repository guide
- tqdm_class silently broken in non-TTY environments by @hanouticelina in #4056
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.9.1...v1.9.2
- set_space_volumes sending bare array instead of object #4054 by @davanstrien
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.9.0...v1.9.1
Hugging Face Spaces now support mounting volumes, giving your Space direct filesystem access to models, datasets, and storage buckets. This replaces the deprecated persistent storage feature.
from huggingface_hub import HfApi, Volume
api = HfApi()
api.set_space_volumes(
repo_id="username/my-space",
volumes=[
Volume(type="model", source="username/my-model", mount_path="/models", read_only=True),
Volume(type="bucket", source="username/my-bucket", mount_path="/data"),
],
)
Volumes can also be set at creation time via create_repo(space_volumes=...) and duplicate_repo(space_volumes=...), and from the CLI with the --volume / -v flag:
# Create a Space with volumes mounted
hf repos create my-space --type space --space-sdk gradio \
-v hf://gpt2:/models -v hf://buckets/org/b:/data
# Duplicate a Space with volumes
hf repos duplicate org/my-space my-space --type space \
-v hf://gpt2:/models -v hf://buckets/org/b:/data
hf CLI Now Auto-Detects AI Agents and Adapts Its Output
AI coding agents (Claude Code, Cursor, Codex, Copilot, Gemini, ...) increasingly use the hf CLI to interact with the Hub. Until now, the output was designed for humans - ANSI colors, padded tables, emoji booleans, truncated cells - making it hard for agents to parse reliably.
Starting with v1.9, the CLI automatically detects when it's running inside an agent and adapts its output: no ANSI, no truncation, tab-separated tables, compact JSON, full timestamps. No configuration needed - it just works. This is only a first step toward making the hf CLI the primary entry point to the Hugging Face Hub for AI agents!
Agent mode is auto-detected but you can also force a mode explicitly with --format:
hf models ls --limit 5 # auto-detect
hf models ls --limit 5 --format agent # force agent-friendly output
hf models ls --limit 5 --format json # structured JSON
hf models ls --limit 5 --format quiet # IDs only, great for piping
Here's what an agent sees compared to a human:
hf auth whoami
# Human
✓ Logged in
user: Wauplin
orgs: huggingface, awesome-org
# Agent
user=Wauplin orgs=huggingface,awesome-org
# JSON
{"user": "Wauplin", "orgs": ["huggingface", "awesome-org"]}
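The three renderings above all derive from the same underlying record. A minimal renderer in that spirit might look like this (an illustrative sketch, not the CLI's actual code):

```python
import json

def render(record: dict, mode: str) -> str:
    """Render one record for human, agent, or json output modes."""
    if mode == "json":
        return json.dumps(record)
    if mode == "agent":
        # key=value pairs, lists joined with commas -- compact and parseable
        parts = []
        for key, value in record.items():
            if isinstance(value, list):
                value = ",".join(value)
            parts.append(f"{key}={value}")
        return " ".join(parts)
    # human mode: one labelled line per field
    lines = ["✓ Logged in"]
    for key, value in record.items():
        if isinstance(value, list):
            value = ", ".join(value)
        lines.append(f"  {key}: {value}")
    return "\n".join(lines)

record = {"user": "Wauplin", "orgs": ["huggingface", "awesome-org"]}
print(render(record, "agent"))  # user=Wauplin orgs=huggingface,awesome-org
```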
hf models ls --author google --limit 3
# Human
ID DOWNLOADS TRENDING_SCORE
-------------------------- --------- --------------
google/embeddinggemma-300m 1213145 17
google/gemma-3-4b-it 1512637 16
google/gemma-3-27b-it 988618 12
# Agent (TSV, no truncation, no ANSI)
id downloads trending_score
google/embeddinggemma-300m 1213145 17
google/gemma-3-4b-it 1512637 16
google/gemma-3-27b-it 988618 12
hf models info google/gemma-3-27b-it
# Human — pretty-printed JSON (indent=2)
{
"id": "google/gemma-3-27b-it",
"author": "google",
...
}
# Agent — compact JSON (~40% fewer tokens)
{"id": "google/gemma-3-27b-it", "author": "google", "card_data": ...}
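The size difference between the two JSON styles is easy to reproduce with the standard library: dropping indentation and the whitespace around separators shrinks the payload substantially (the exact savings depend on the record; the record below is a made-up example).

```python
import json

record = {"id": "google/gemma-3-27b-it", "author": "google", "likes": 1234}

pretty = json.dumps(record, indent=2)                # human mode: readable
compact = json.dumps(record, separators=(",", ":"))  # agent mode: no extra whitespace

print(len(pretty), len(compact))
print(f"compact is {100 * (1 - len(compact) / len(pretty)):.0f}% smaller")
```

Both strings parse back to the same object, so nothing is lost by serving agents the compact form.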
Commands migrated so far: hf models ls|info, hf datasets ls|info|parquet|sql, hf spaces ls|info, hf papers ls|search|info, hf auth whoami. More commands will be migrated soon.
- out output singleton with agent/human mode rendering by @hanouticelina in #4005
- models, datasets, spaces, papers to out singleton by @hanouticelina in #4026
- FormatWithAutoOpt with callback to auto-set output mode by @hanouticelina in #4028
- out output singleton by @hanouticelina in #4020
The hf skills add command now supports installing skills directly from the Hugging Face skills marketplace (https://github.com/huggingface/skills) - pre-built tools that give AI agents new capabilities.
# Install a marketplace skill
hf skills add gradio
# Install with Claude Code integration
hf skills add huggingface-gradio --claude
# Upgrade all installed skills
hf skills upgrade
- hf CLI skill description for better agent triggering by @hanouticelina in #3973
- summary field to hf papers search CLI output by @Wauplin in #4006
- HF_HUB_DISABLE_SYMLINKS env variable to force no-symlink cache by @Wauplin in #4032
- bool/int cross-type confusion in @strict dataclass validation by @Wauplin in #3992
- CLAUDE.md symlink pointing to AGENTS.md by @hanouticelina in #4013
- match/case statements where appropriate by @hanouticelina in #4012
- ty type-checking errors after latest release by @hanouticelina in #3978
Jobs can now access Hugging Face repositories (models, datasets, Spaces) and Storage Buckets directly as mounted volumes in their containers. This enables powerful workflows like running queries directly against datasets, loading models without explicit downloads, and persisting training checkpoints to buckets.
from huggingface_hub import run_job, Volume
job = run_job(
image="duckdb/duckdb",
command=["duckdb", "-c", "SELECT * FROM '/data/**/*.parquet' LIMIT 5"],
volumes=[
Volume(type="dataset", source="HuggingFaceFW/fineweb", mount_path="/data"),
],
)
hf jobs run -v hf://datasets/HuggingFaceFW/fineweb:/data duckdb/duckdb duckdb -c "SELECT * FROM '/data/**/*.parquet' LIMIT 5"
The hf papers command now has full functionality: search papers by keyword, get structured JSON metadata, and read the full paper content as markdown. The ls command is also enhanced with new filters for week, month, and submitter.
# Search papers
hf papers search "vision language"
# Get metadata
hf papers info 2601.15621
# Read as markdown
hf papers read 2601.15621
- hf papers with search, info, read + ls filters by @mishig25 in #3952
You can now use repo ID prefixes like spaces/user/repo, datasets/user/repo, and models/user/repo as a shorthand for user/repo --type space. This works automatically for all CLI commands that accept a --type flag.
# Before
hf download user/my-space --type space
hf discussions list user/my-dataset --type dataset
# After
hf download spaces/user/my-space
hf discussions list datasets/user/my-dataset
- spaces/user/repo as repo ID prefix shorthand by @Wauplin in #3929
Repositories can now be created or updated with explicit visibility settings (--public, --protected) alongside the existing --private flag. This adds a visibility parameter to HfApi.create_repo, update_repo_settings, and duplicate_repo, with --protected available for Spaces only.
Protected Spaces keep their code private while the Space itself remains publicly accessible.
- visibility parameter to HfApi repo create/update/duplicate methods by @hanouticelina in #3951
- hf repos create and hf repos duplicate by @Wauplin in #3888
- --format json to hf auth whoami by @hanouticelina in #3938 — docs
- SKILL.md by @hanouticelina in #3941
- SKILL.md by @hanouticelina in #3955
- hf extensions install on uv-managed Python by using uv when available by @hanouticelina in #3957
- .env to .venv in virtual environment instructions by @julien-c in #3939 — docs
- --every help text by @julien-c in #3950
- hf cp command by @Wauplin in #3968
- huggingface-cli with hf in brew upgrade command by @hanouticelina in #3946
- SKILL.md by @hanouticelina in #3949
- hf-mount in CLI skill by @hanouticelina in #3966
- huggingface-hub-bot for post-release PR creation in release.yml by @Wauplin in #3967
hf CLI skill now fully expands subcommand groups and inlines all flags and options, making the CLI self-describing and easier for agents to discover.
hf extensions install now uses uv for Python extension installation when available, making extension installation faster:
> hyperfine "hf extensions install alvarobartt/hf-mem --force"
# Before
Benchmark 1: hf extensions install alvarobartt/hf-mem --force
Time (mean ± σ): 3.490 s ± 0.220 s [User: 1.925 s, System: 0.445 s]
Range (min … max): 3.348 s … 4.097 s 10 runs
# After
Benchmark 1: hf extensions install alvarobartt/hf-mem --force
Time (mean ± σ): 519.6 ms ± 119.7 ms [User: 216.6 ms, System: 95.2 ms]
Range (min … max): 371.6 ms … 655.2 ms 10 runs
Other QoL improvements:
- --format json to hf auth whoami (#3938) by @hanouticelina
- huggingface-cli with hf in brew upgrade command (#3946) by @hanouticelina
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.7.1...v1.7.2
This release brings major improvements to the hf CLI with extension discoverability, unified list commands, and multiple QoL improvements in the CLI.
🎉 The Homebrew formula of the Hugging Face CLI has been renamed to hf. Existing users just need to run brew update - Homebrew handles the rename automatically. New users can install with brew install hf.
The hf CLI extensions system gets a major upgrade in this release. Extensions can now be full Python packages (with a pyproject.toml) installed in isolated virtual environments, in addition to the existing shell script approach. This means extension authors can use Python dependencies without conflicting with the user's system. The install command auto-detects whether a GitHub repo is a script or a Python package and handles both transparently.
A new hf extensions search command lets users discover available extensions directly from the terminal by querying GitHub repositories tagged with the hf-extension topic. Results are sorted by stars and show whether each extension is already installed locally. Additionally, a comprehensive guide on how to build, publish, and make extensions discoverable has been added to the documentation.
# Install a Python-based extension
hf extensions install alvarobartt/hf-mem
# Discover available extensions
hf extensions search
NAME REPO STARS DESCRIPTION INSTALLED
------ ----------------------- ----- ----------------------------------- ---------
claude hanouticelina/hf-claude 2 Extension for `hf` CLI to launch... yes
agents hanouticelina/hf-agents HF extension to run local coding...
- hf extensions search command by @julien-c in #3905
📚 Documentation: Create a CLI extension
hf auth login CLI update
A new --force flag lets you explicitly go through the full login flow again when needed, for example to switch tokens.
# Already logged in — returns immediately
hf auth login
# Force re-login to switch tokens
hf auth login --force
- --force flag by @hanouticelina in #3920
📚 Documentation: CLI guide
hf-xet has been bumped to v1.4.2 with some optimizations:
The hf-xet bump also comes with a fix for deadlocks and stalls on large file downloads.
See hf-xet release notes for more details.
- list | ls alias by @julien-c in #3901
- hf papers ls by @julien-c in #3903
- --json shorthand for --format json by @Wauplin in #3919
- used_storage field to ModelInfo, DatasetInfo, and SpaceInfo by @julien-c in #3911
- hf by @julien-c in #3902
This release brings significant new CLI commands for managing Spaces, Datasets, Discussions, and Webhooks, along with HfFileSystem support for Buckets and a CLI extension system.
We've added several new CLI command groups to make interacting with the Hub even easier from your terminal.
hf spaces dev-mode command
You can now enable or disable dev mode on Spaces directly from the CLI. When enabling dev mode, the command waits for the Space to be ready and prints connection instructions (web VSCode, SSH, local VSCode/Cursor). This makes iterating on Spaces much faster by allowing you to restart your application without stopping the Space container.
# Enable dev mode
hf spaces dev-mode username/my-space
# Disable dev mode
hf spaces dev-mode username/my-space --stop
- hf spaces dev-mode command by @lhoestq in #3824
hf discussions command group
You can now manage discussions and pull requests on the Hub directly from the CLI. This includes listing, viewing, creating, commenting on, closing, reopening, renaming, and merging discussions and PRs.
# List open discussions and PRs on a repo
hf discussions list username/my-model
# Create a new discussion
hf discussions create username/my-model --title "Feature request" --body "Description"
# Create a pull request
hf discussions create username/my-model --title "Fix bug" --pull-request
# Merge a pull request
hf discussions merge username/my-model 5 --yes
- hf discussions command group by @Wauplin in #3855
- hf discussions view to hf discussions info by @Wauplin in #3878
hf webhooks command group
Full CLI support for managing Hub webhooks is now available. You can list, inspect, create, update, enable/disable, and delete webhooks directly from the terminal.
# List all webhooks
hf webhooks ls
# Create a webhook
hf webhooks create --url https://example.com/hook --watch model:bert-base-uncased
# Enable / disable a webhook
hf webhooks enable webhook_id
hf webhooks disable webhook_id
# Delete a webhook
hf webhooks delete webhook_id
- hf webhooks CLI commands by @omkar-334 in #3866
hf datasets parquet and hf datasets sql commands
Two new commands make it easy to work with dataset parquet files. Use hf datasets parquet to discover parquet file URLs, then query them with hf datasets sql using DuckDB.
# List parquet URLs for a dataset
hf datasets parquet cfahlgren1/hub-stats
hf datasets parquet cfahlgren1/hub-stats --subset models --split train
# Run SQL queries on dataset parquet
hf datasets sql "SELECT COUNT(*) FROM read_parquet('https://huggingface.co/api/datasets/...')"
- hf datasets parquet and hf datasets sql commands by @cfahlgren1 in #3833
hf repos duplicate command
You can now duplicate any repository (model, dataset, or Space) using a unified command. This replaces the previous duplicate_space method with a more general solution.
# Duplicate a Space
hf repos duplicate multimodalart/dreambooth-training --type space
# Duplicate a dataset
hf repos duplicate openai/gdpval --type dataset
- duplicate_repo method and hf repos duplicate command by @Wauplin in #3880
The HfFileSystem now supports buckets, providing S3-like object storage on Hugging Face. You can list, glob, download, stream, and upload files in buckets using the familiar fsspec interface.
from huggingface_hub import hffs
# List files in a bucket
hffs.ls("buckets/my-username/my-bucket/data")
# Read a remote file
with hffs.open("buckets/my-username/my-bucket/data/file.txt", "r") as f:
content = f.read()
# Read file content as string
hffs.read_text("buckets/my-username/my-bucket/data/file.txt")
- hf://buckets by @lhoestq in #3875
The hf extensions system now supports installing extensions as Python packages in addition to standalone executables. This makes it easier to distribute and install CLI extensions.
# Install an extension
> hf extensions install hanouticelina/hf-claude
> hf extensions install alvarobartt/hf-mem
# List them
> hf extensions list
COMMAND SOURCE TYPE INSTALLED DESCRIPTION
--------- ----------------------- ------ ---------- -----------------------------------
hf claude hanouticelina/hf-claude binary 2026-03-06 Launch Claude Code with Hugging ...
hf mem alvarobartt/hf-mem python 2026-03-06 A CLI to estimate inference memo...
# Run extension
> hf claude --help
Usage: claude [options] [command] [prompt]
Claude Code - starts an interactive session by default, use -p/--print for non-interactive output
hf --help
The CLI now shows installed extensions under an "Extension commands" section in the help output.
- hf --help by @hanouticelina in #3884
- hf_xet minimal package version to >=1.3.2 for better throughput by @Wauplin in #3873
- direction argument in list_models/datasets/spaces by @Wauplin in #3882
- hf CLI Skill workflow by @hanouticelina in #3885
This release introduces major new features including Buckets (xet-based large scale object storage), CLI Extensions, Space Hot-Reload, and significant improvements for AI coding agents. The CLI has been completely overhauled with centralized error handling, better help output, and new commands for collections, papers, and more.
Buckets provide S3-like object storage on Hugging Face, powered by the Xet storage backend. Unlike repositories (which are git-based and track file history), buckets are remote object storage containers designed for large-scale files with content-addressable deduplication. Use them for training checkpoints, logs, intermediate artifacts, or any large collection of files that doesn't need version control.
# Create a bucket
hf buckets create my-bucket --private
# Upload a directory
hf buckets sync ./data hf://buckets/username/my-bucket
# Download from bucket
hf buckets sync hf://buckets/username/my-bucket ./data
# List files
hf buckets list username/my-bucket -R --tree
The Buckets API includes full CLI and Python support for creating, listing, moving, and deleting buckets; uploading, downloading, and syncing files; and managing bucket contents with include/exclude patterns.
- hf install by @julien-c in #3846
📚 Documentation: Buckets guide
This release includes several features designed to improve the experience for AI coding agents (Claude Code, OpenCode, Cursor, etc.):
- HF_DEBUG=1 for full traces) by @hanouticelina in #3754
- hf skills add command now installs a compact skill (~1.2k tokens vs ~12k before) by @hanouticelina in #3802
- hf jobs logs: Prints available logs and exits by default; use -f to stream by @davanstrien in #3783
# Install the hf-cli skill for Claude
hf skills add --claude
# Install for project-level
hf skills add --project
- hf skills add CLI command by @julien-c in #3741
- hf skills add installs to central location with symlinks by @hanouticelina in #3755
Hot-reload Python files in a Space without a full rebuild and restart. This is useful for rapid iteration on Gradio apps.
# Open an interactive editor to modify a remote file
hf spaces hot-reload username/repo-name app.py
# Take local version and patch remote
hf spaces hot-reload username/repo-name -f app.py
- hf papers ls to list daily papers on the Hub by @julien-c in #3723
- hf collections commands (ls, info, create, update, delete, add-item, update-item, delete-item) by @Wauplin in #3767
Introduce an extension mechanism to the hf CLI. Extensions are standalone executables hosted in GitHub repositories that users can install, run, and remove with simple commands. Inspired by gh extension.
# Install an extension (defaults to huggingface org)
hf extensions install hf-claude
# Install from any GitHub owner
hf extensions install hanouticelina/hf-claude
# Run an extension
hf claude
# List installed extensions
hf extensions list
- hf extension by @hanouticelina in #3805
- hf ext alias by @hanouticelina in #3836
- --format {table,json} and -q/--quiet to hf models ls, hf datasets ls, hf spaces ls, hf endpoints ls by @hanouticelina in #3735
- hf jobs ps output with standard CLI pattern by @davanstrien in #3799
- --expand field by @hanouticelina in #3760
- hf CLI help output with examples and documentation links by @hanouticelina in #3743
- -h as short alias for --help by @assafvayner in #3800
- --version flag by @Wauplin in #3784
- --type as alias for --repo-type by @Wauplin in #3835
- hf download repo_id subfolder/ now works as expected by @Wauplin in #3822
List available hardware:
$ hf jobs hardware
NAME PRETTY NAME CPU RAM ACCELERATOR COST/MIN COST/HOUR
--------------- ---------------------- -------- ------- ----------------- -------- ---------
cpu-basic CPU Basic 2 vCPU 16 GB N/A $0.0002 $0.01
cpu-upgrade CPU Upgrade 8 vCPU 32 GB N/A $0.0005 $0.03
cpu-performance CPU Performance 32 vCPU 256 GB N/A $0.3117 $18.70
cpu-xl CPU XL 16 vCPU 124 GB N/A $0.0167 $1.00
t4-small Nvidia T4 - small 4 vCPU 15 GB 1x T4 (16 GB) $0.0067 $0.40
t4-medium Nvidia T4 - medium 8 vCPU 30 GB 1x T4 (16 GB) $0.0100 $0.60
a10g-small Nvidia A10G - small 4 vCPU 15 GB 1x A10G (24 GB) $0.0167 $1.00
...
Also added a ton of fixes and small QoL improvements.
- torchrun, accelerate launch) by @lhoestq in #3674
- hf jobs hardware by @Wauplin in #3693
- !=) by @lhoestq in #3742
- hf jobs commands crashing without a TTY by @davanstrien in #3782
- dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
- image-to-image compatibility with different model schemas by @hanouticelina in #3749
- EvalResultEntry, parse_eval_result_entries) by @hanouticelina in #3633
- EvalResultEntry by @hanouticelina in #3694
- num_papers field to Organization class by @cfahlgren1 in #3695
- benchmark=True → benchmark="official") by @Wauplin in #3734
- EvalResultEntry by @Wauplin in #3738
- task_id required in EvalResultEntry by @Wauplin in #3718
- upload_large_folder by @Wauplin in #3698
- plan string in org info by @Wauplin in #3753
- mode= parameter support by @Wauplin in #3785
- HfApi.snapshot_download for dry_run typing by @Wauplin in #3788
- __init__ by @zucchini-nlp in #3818
- dataclass.repr=True before wrapping by @zucchini-nlp in #3823
- hf jobs ps removes old Go-template --format '{{.id}}' syntax. Use -q for IDs or --format json | jq for custom extraction by @davanstrien in #3799
- hf repos instead of hf repo (old command still works but shows deprecation warning) by @Wauplin in #3848
- hf repo-files delete to hf repo delete-files (old command hidden from help, shows deprecation warning) by @Wauplin in #3821
- HfFileSystem.resolve_path() with special char @ by @lhoestq in #3704
- hf_transfer references in Korean and German translations by @davanstrien in #3804
- typer-slim to typer by @svlandeg in #3797
- shellingham from the required dependencies by @hanouticelina in #3798
- unused-ignore-comment warnings in ty for mypy compatibility by @hanouticelina in #3691
- unused-type-ignore-comment warning from ty by @hanouticelina in #3803
- file_download tests by @hanouticelina in #3815
- CollectionItem by @hanouticelina in #3831
- inference_provider instead of inference in tests by @hanouticelina in #3826
Fix file corruption when server ignores Range header on download retry.
Full details in https://github.com/huggingface/huggingface_hub/pull/3778 by @XciD.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.36.1...v0.36.2
Fix file corruption when server ignores Range header on download retry. Full details in https://github.com/huggingface/huggingface_hub/pull/3778 by @XciD.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.4.0...v1.4.1
hf skills add CLI Command
A new hf skills add command installs the hf-cli skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI Agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.
> hf skills add --help
Usage: hf skills add [OPTIONS]
Download a skill and install it for an AI assistant.
Options:
--claude Install for Claude.
--codex Install for Codex.
--opencode Install for OpenCode.
-g, --global Install globally (user-level) instead of in the current
project directory.
--dest PATH Install into a custom destination (path to skills directory).
--force Overwrite existing skills in the destination.
--help Show this message and exit.
Examples
$ hf skills add --claude
$ hf skills add --claude --global
$ hf skills add --codex --opencode
Learn more
Use `hf <command> --help` for more information about a command.
Read the documentation at
https://huggingface.co/docs/huggingface_hub/en/guides/cli
The skill is composed of two files fetched from the huggingface_hub docs: a CLI guide (SKILL.md) and the full CLI reference (references/cli.md). Files are installed to a central .agents/skills/hf-cli/ directory, and relative symlinks are created from agent-specific directories (e.g., .claude/skills/hf-cli/ → ../../.agents/skills/hf-cli/). This ensures a single source of truth when installing for multiple agents.
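The central-copy-plus-relative-symlinks layout can be reproduced with the standard library. This is a sketch of the scheme described above (directory names from the release notes; os.path.relpath keeps the links relative so the tree stays relocatable):

```python
import os
import tempfile

root = tempfile.mkdtemp()

# Central copy: single source of truth for the skill files.
central = os.path.join(root, ".agents", "skills", "hf-cli")
os.makedirs(central)
with open(os.path.join(central, "SKILL.md"), "w") as f:
    f.write("# hf CLI skill\n")

# Agent-specific directory points at the central copy via a relative symlink.
claude_dir = os.path.join(root, ".claude", "skills")
os.makedirs(claude_dir)
link = os.path.join(claude_dir, "hf-cli")
os.symlink(os.path.relpath(central, start=claude_dir), link)

print(os.readlink(link))  # ../../.agents/skills/hf-cli
with open(os.path.join(link, "SKILL.md")) as f:
    print(f.read().strip())  # readable through the link
```

Installing for a second agent would add one more symlink, never a second copy of the files.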
- hf skills add CLI command by @julien-c in #3741
- hf skills add installs hf-cli skill to central location with symlinks by @hanouticelina in #3755
The CLI help output has been reorganized to be more informative and agent-friendly:
> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...
Manage local cache directory.
Options:
--help Show this message and exit.
Main commands:
ls List cached repositories or revisions.
prune Remove detached revisions from the cache.
rm Remove cached repositories or revisions.
verify Verify checksums for a single repo revision from cache or a local
directory.
Examples
$ hf cache ls
$ hf cache ls --revisions
$ hf cache ls --filter "size>1GB" --limit 20
$ hf cache ls --format json
$ hf cache prune
$ hf cache prune --dry-run
$ hf cache rm model/gpt2
$ hf cache rm <revision_hash>
$ hf cache rm model/gpt2 --dry-run
$ hf cache rm model/gpt2 --yes
$ hf cache verify gpt2
$ hf cache verify gpt2 --revision refs/pr/1
$ hf cache verify my-dataset --repo-type dataset
Learn more
Use `hf <command> --help` for more information about a command.
Read the documentation at
https://huggingface.co/docs/huggingface_hub/en/guides/cli
hf CLI help output by @hanouticelina in #3743
The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in .eval_results/*.yaml files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.
We added helpers in huggingface_hub to work with this format:
EvalResultEntry dataclass representing evaluation scores
eval_result_entries_to_yaml() to serialize entries to YAML format
parse_eval_result_entries() to parse YAML data back into EvalResultEntry objects
import yaml
from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file
entries = [
EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
path_or_fileobj=yaml_content.encode(),
path_in_repo=".eval_results/results.yaml",
repo_id="your-username/your-model",
)
New hf papers ls command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.
hf papers ls # List most recent daily papers
hf papers ls --sort=trending # List trending papers
hf papers ls --date=2025-01-23 # List papers from a specific date
hf papers ls --date=today # List today's papers
hf papers ls CLI command by @julien-c in #3723
New hf collections commands for managing collections from the CLI:
# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending
# Create a collection
hf collections create "My Models" --description "Favorites" --private
# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"
# Get info
hf collections info user/my-coll
# Delete
hf collections delete user/my-coll
hf collections commands by @Wauplin in #3767
Other CLI-related improvements:
--expand field by @hanouticelina in #3760
Multi-GPU training commands are now supported with torchrun and accelerate launch:
> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py
You can also pass local config files alongside your scripts:
> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch torchrun script.py config.yml
New hf jobs hardware command to list available hardware options:
> hf jobs hardware
NAME PRETTY NAME CPU RAM ACCELERATOR COST/MIN COST/HOUR
------------ ---------------------- -------- ------- ---------------- -------- ---------
cpu-basic CPU Basic 2 vCPU 16 GB N/A $0.0002 $0.01
cpu-upgrade CPU Upgrade 8 vCPU 32 GB N/A $0.0005 $0.03
t4-small Nvidia T4 - small 4 vCPU 15 GB 1x T4 (16 GB) $0.0067 $0.40
t4-medium Nvidia T4 - medium 8 vCPU 30 GB 1x T4 (16 GB) $0.0100 $0.60
a10g-small Nvidia A10G - small 4 vCPU 15 GB 1x A10G (24 GB) $0.0167 $1.00
a10g-large Nvidia A10G - large 12 vCPU 46 GB 1x A10G (24 GB) $0.0250 $1.50
a10g-largex2 2x Nvidia A10G - large 24 vCPU 92 GB 2x A10G (48 GB) $0.0500 $3.00
a10g-largex4 4x Nvidia A10G - large 48 vCPU 184 GB 4x A10G (96 GB) $0.0833 $5.00
a100-large Nvidia A100 - large 12 vCPU 142 GB 1x A100 (80 GB) $0.0417 $2.50
a100x4 4x Nvidia A100 48 vCPU 568 GB 4x A100 (320 GB) $0.1667 $10.00
a100x8 8x Nvidia A100 96 vCPU 1136 GB 8x A100 (640 GB) $0.3333 $20.00
l4x1 1x Nvidia L4 8 vCPU 30 GB 1x L4 (24 GB) $0.0133 $0.80
l4x4 4x Nvidia L4 48 vCPU 186 GB 4x L4 (96 GB) $0.0633 $3.80
l40sx1 1x Nvidia L40S 8 vCPU 62 GB 1x L40S (48 GB) $0.0300 $1.80
l40sx4 4x Nvidia L40S 48 vCPU 382 GB 4x L40S (192 GB) $0.1383 $8.30
l40sx8 8x Nvidia L40S 192 vCPU 1534 GB 8x L40S (384 GB) $0.3917 $23.50
Better filtering with label support and negation:
> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B
The following contributors have made significant changes to the library over the last release:
Log 'x-amz-cf-id' on http error (if no request id) (https://github.com/huggingface/huggingface_hub/pull/3759)
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.5...v1.3.7
The default timeout is 10s. This is fine in most use cases but can trigger errors in CIs that make many requests to the Hub. In these cases, the solution is to set HF_HUB_DOWNLOAD_TIMEOUT=60 as an environment variable.
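A minimal sketch of raising the timeout from Python. In CI you would typically export the variable in the job configuration instead; note it must be set before huggingface_hub reads its configuration at import time:

```python
import os

# Raise the Hub download timeout from the 10s default to 60s.
# Must be set before importing huggingface_hub, which reads the
# environment once at import time.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"
```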
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.4...v1.3.5
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.3...v1.3.4
You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.
From the CLI:
➜ hf jobs hardware
NAME PRETTY NAME CPU RAM ACCELERATOR COST/MIN COST/HOUR
--------------- ---------------------- -------- ------- ---------------- -------- ---------
cpu-basic CPU Basic 2 vCPU 16 GB N/A $0.0002 $0.01
cpu-upgrade CPU Upgrade 8 vCPU 32 GB N/A $0.0005 $0.03
cpu-performance CPU Performance 8 vCPU 32 GB N/A $0.0000 $0.00
cpu-xl CPU XL 16 vCPU 124 GB N/A $0.0000 $0.00
t4-small Nvidia T4 - small 4 vCPU 15 GB 1x T4 (16 GB) $0.0067 $0.40
t4-medium Nvidia T4 - medium 8 vCPU 30 GB 1x T4 (16 GB) $0.0100 $0.60
a10g-small Nvidia A10G - small 4 vCPU 15 GB 1x A10G (24 GB) $0.0167 $1.00
a10g-large Nvidia A10G - large 12 vCPU 46 GB 1x A10G (24 GB) $0.0250 $1.50
a10g-largex2 2x Nvidia A10G - large 24 vCPU 92 GB 2x A10G (48 GB) $0.0500 $3.00
a10g-largex4 4x Nvidia A10G - large 48 vCPU 184 GB 4x A10G (96 GB) $0.0833 $5.00
a100-large Nvidia A100 - large 12 vCPU 142 GB 1x A100 (80 GB) $0.0417 $2.50
a100x4 4x Nvidia A100 48 vCPU 568 GB 4x A100 (320 GB) $0.1667 $10.00
a100x8 8x Nvidia A100 96 vCPU 1136 GB 8x A100 (640 GB) $0.3333 $20.00
l4x1 1x Nvidia L4 8 vCPU 30 GB 1x L4 (24 GB) $0.0133 $0.80
l4x4 4x Nvidia L4 48 vCPU 186 GB 4x L4 (96 GB) $0.0633 $3.80
l40sx1 1x Nvidia L40S 8 vCPU 62 GB 1x L40S (48 GB) $0.0300 $1.80
l40sx4 4x Nvidia L40S 48 vCPU 382 GB 4x L40S (192 GB) $0.1383 $8.30
l40sx8 8x Nvidia L40S 192 vCPU 1534 GB 8x L40S (384 GB) $0.3917 $23.50
Programmatically:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'
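Given the `unit_cost_micro_usd` and `unit_label` fields shown in the repr above, a rough per-job cost estimate can be sketched like this. The conversion is an assumption inferred from that repr, where cpu-basic's 167 micro-USD per minute corresponds to the $0.01/hour listed in the table:

```python
# Sketch: estimate a job's cost from the JobHardware fields shown above.
# Assumes unit_cost_micro_usd is micro-USD per unit_label (here, per minute).

def estimated_cost_usd(unit_cost_micro_usd: int, minutes: float) -> float:
    """Convert a micro-USD-per-minute rate into a total USD cost."""
    return unit_cost_micro_usd * minutes / 1_000_000

# cpu-basic at 167 micro-USD/min for one hour: roughly $0.01, matching the table.
print(f"${estimated_cost_usd(167, 60):.2f}/hour")
```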
HfFileSystemStreamFile in #3685 by @leq6c
resolve_path() with special char @ in #3704 by @lhoestq
num_papers field to Organization class in #3695 by @cfahlgren1
limit param to list_papers API method in #3697 by @Wauplin
MAX_FILE_SIZE_GB from 50 to 200 GB in #3696 by @davanstrien
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.1...v1.3.2