# [v1.4.0] Building the HF CLI for You and Your AI Agents
## `hf skills add` CLI command

A new `hf skills add` command installs the `hf-cli` skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.
```
> hf skills add --help

Usage: hf skills add [OPTIONS]

  Download a skill and install it for an AI assistant.

Options:
  --claude      Install for Claude.
  --codex       Install for Codex.
  --opencode    Install for OpenCode.
  -g, --global  Install globally (user-level) instead of in the current
                project directory.
  --dest PATH   Install into a custom destination (path to skills directory).
  --force       Overwrite existing skills in the destination.
  --help        Show this message and exit.

Examples
  $ hf skills add --claude
  $ hf skills add --claude --global
  $ hf skills add --codex --opencode

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli
```
The skill is composed of two files fetched from the `huggingface_hub` docs: a CLI guide (`SKILL.md`) and the full CLI reference (`references/cli.md`). Files are installed to a central `.agents/skills/hf-cli/` directory, and relative symlinks are created from agent-specific directories (e.g., `.claude/skills/hf-cli/ → ../../.agents/skills/hf-cli/`). This ensures a single source of truth when installing for multiple agents.
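The relative-symlink layout described above can be reproduced with the standard library alone. A minimal sketch, assuming the directory names from the release notes; `link_skill` is an illustrative helper, not part of `huggingface_hub`:

```python
import os
from pathlib import Path


def link_skill(project: Path, agent_dir: str, skill: str = "hf-cli") -> Path:
    """Create agent_dir/skills/<skill> as a relative symlink pointing at the
    central .agents/skills/<skill> directory (illustrative helper)."""
    central = project / ".agents" / "skills" / skill
    central.mkdir(parents=True, exist_ok=True)
    link = project / agent_dir / "skills" / skill
    link.parent.mkdir(parents=True, exist_ok=True)
    # Use a relative target (e.g. ../../.agents/skills/hf-cli) so the whole
    # project directory can be moved without breaking the link.
    target = os.path.relpath(central, link.parent)
    link.symlink_to(target)
    return link
```

Calling `link_skill(Path("."), ".claude")` yields `.claude/skills/hf-cli → ../../.agents/skills/hf-cli`, matching the layout above.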
* `hf skills add` CLI command by @julien-c in #3741
* `hf skills add` installs `hf-cli` skill to central location with symlinks by @hanouticelina in #3755

The CLI help output has been reorganized to be more informative and agent-friendly:
```
> hf cache --help

Usage: hf cache [OPTIONS] COMMAND [ARGS]...

  Manage local cache directory.

Options:
  --help  Show this message and exit.

Main commands:
  ls      List cached repositories or revisions.
  prune   Remove detached revisions from the cache.
  rm      Remove cached repositories or revisions.
  verify  Verify checksums for a single repo revision from cache or a local
          directory.

Examples
  $ hf cache ls
  $ hf cache ls --revisions
  $ hf cache ls --filter "size>1GB" --limit 20
  $ hf cache ls --format json
  $ hf cache prune
  $ hf cache prune --dry-run
  $ hf cache rm model/gpt2
  $ hf cache rm <revision_hash>
  $ hf cache rm model/gpt2 --dry-run
  $ hf cache rm model/gpt2 --yes
  $ hf cache verify gpt2
  $ hf cache verify gpt2 --revision refs/pr/1
  $ hf cache verify my-dataset --repo-type dataset

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli
```
* `hf` CLI help output by @hanouticelina in #3743

## Evaluation results

The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in `.eval_results/*.yaml` files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.
We added helpers in `huggingface_hub` to work with this format:

* `EvalResultEntry`: a dataclass representing evaluation scores
* `eval_result_entries_to_yaml()`: serialize entries to YAML format
* `parse_eval_result_entries()`: parse YAML data back into `EvalResultEntry` objects

```python
import yaml

from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file

entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]

yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)
```
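Assuming the helpers serialize each entry's fields directly (the exact key layout is an assumption here, not taken from the library), the uploaded `.eval_results/results.yaml` would look roughly like:

```yaml
- dataset_id: cais/hle
  task_id: default
  value: 20.9
- dataset_id: Idavidrein/gpqa
  task_id: gpqa_diamond
  value: 0.412
```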
## `hf papers ls`

New `hf papers ls` command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.
```
hf papers ls                    # List most recent daily papers
hf papers ls --sort=trending    # List trending papers
hf papers ls --date=2025-01-23  # List papers from a specific date
hf papers ls --date=today       # List today's papers
```
* `hf papers ls` CLI command by @julien-c in #3723

## `hf collections`

New `hf collections` commands for managing collections from the CLI:
```
# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending

# Create a collection
hf collections create "My Models" --description "Favorites" --private

# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"

# Get info
hf collections info user/my-coll

# Delete
hf collections delete user/my-coll
```
* `hf collections` commands by @Wauplin in #3767

Other CLI-related improvements:
* `--expand` field by @hanouticelina in #3760

## Jobs

Multi-GPU training commands are now supported with `torchrun` and `accelerate launch`:
```
> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py
```
You can also pass local config files alongside your scripts:
```
> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch torchrun script.py config.yml
```
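Scripts launched with `hf jobs uv run` can also declare their own dependencies inline via PEP 723 script metadata, which uv resolves before running. A hypothetical `train.py` sketch; the config keys and helper names are illustrative, and JSON stands in for YAML to keep the sketch dependency-free:

```python
# /// script
# dependencies = []
# ///
"""Hypothetical entry point for `hf jobs uv run train.py config.json`."""
import json


def load_config(path: str) -> dict:
    """Read a config file uploaded alongside the script."""
    with open(path) as f:
        return json.load(f)


def describe_run(cfg: dict) -> str:
    """Summarize the run; a real script would build the model and train here."""
    return f"training {cfg.get('model', 'model')} for {cfg.get('epochs', 1)} epochs"
```

The inline `dependencies` block plays the same role as the `--with` flag shown above, but travels with the script itself.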
New `hf jobs hardware` command to list available hardware options:
```
> hf jobs hardware

NAME          PRETTY NAME             CPU       RAM      ACCELERATOR       COST/MIN  COST/HOUR
------------  ----------------------  --------  -------  ----------------  --------  ---------
cpu-basic     CPU Basic               2 vCPU    16 GB    N/A               $0.0002   $0.01
cpu-upgrade   CPU Upgrade             8 vCPU    32 GB    N/A               $0.0005   $0.03
t4-small      Nvidia T4 - small       4 vCPU    15 GB    1x T4 (16 GB)     $0.0067   $0.40
t4-medium     Nvidia T4 - medium      8 vCPU    30 GB    1x T4 (16 GB)     $0.0100   $0.60
a10g-small    Nvidia A10G - small     4 vCPU    15 GB    1x A10G (24 GB)   $0.0167   $1.00
a10g-large    Nvidia A10G - large     12 vCPU   46 GB    1x A10G (24 GB)   $0.0250   $1.50
a10g-largex2  2x Nvidia A10G - large  24 vCPU   92 GB    2x A10G (48 GB)   $0.0500   $3.00
a10g-largex4  4x Nvidia A10G - large  48 vCPU   184 GB   4x A10G (96 GB)   $0.0833   $5.00
a100-large    Nvidia A100 - large     12 vCPU   142 GB   1x A100 (80 GB)   $0.0417   $2.50
a100x4        4x Nvidia A100          48 vCPU   568 GB   4x A100 (320 GB)  $0.1667   $10.00
a100x8        8x Nvidia A100          96 vCPU   1136 GB  8x A100 (640 GB)  $0.3333   $20.00
l4x1          1x Nvidia L4            8 vCPU    30 GB    1x L4 (24 GB)     $0.0133   $0.80
l4x4          4x Nvidia L4            48 vCPU   186 GB   4x L4 (96 GB)     $0.0633   $3.80
l40sx1        1x Nvidia L40S          8 vCPU    62 GB    1x L40S (48 GB)   $0.0300   $1.80
l40sx4        4x Nvidia L40S          48 vCPU   382 GB   4x L40S (192 GB)  $0.1383   $8.30
l40sx8        8x Nvidia L40S          192 vCPU  1534 GB  8x L40S (384 GB)  $0.3917   $23.50
```
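Since the COST/MIN column is a flat per-minute rate, a run's total cost is just rate times runtime. A quick illustrative sketch using a few rates from the table above (`estimate_cost` is not an official helper):

```python
# Per-minute USD rates copied from the `hf jobs hardware` table above.
RATES_PER_MIN = {
    "cpu-basic": 0.0002,
    "t4-small": 0.0067,
    "a10g-small": 0.0167,
    "a100-large": 0.0417,
}


def estimate_cost(flavor: str, minutes: float) -> float:
    """Estimated cost in USD of running a job for `minutes` on `flavor`."""
    return round(RATES_PER_MIN[flavor] * minutes, 4)


# A 90-minute fine-tune on a single A10G small:
print(estimate_cost("a10g-small", 90))  # → 1.503
```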
Better filtering with label support and negation:
```
> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B
```
The following contributors have made significant changes to the library over the last release: