[v1.4.0] Building the HF CLI for You and your AI Agents

🧠 hf skills add CLI Command

A new hf skills add command installs the hf-cli skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI agent then knows how to search the Hub, download models, run Jobs, manage repos, and more.

> hf skills add --help
Usage: hf skills add [OPTIONS]

  Download a skill and install it for an AI assistant.

Options:
  --claude      Install for Claude.
  --codex       Install for Codex.
  --opencode    Install for OpenCode.
  -g, --global  Install globally (user-level) instead of in the current
                project directory.
  --dest PATH   Install into a custom destination (path to skills directory).
  --force       Overwrite existing skills in the destination.
  --help        Show this message and exit.

Examples
  $ hf skills add --claude
  $ hf skills add --claude --global
  $ hf skills add --codex --opencode

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

The skill is composed of two files fetched from the huggingface_hub docs: a CLI guide (SKILL.md) and the full CLI reference (references/cli.md). Files are installed to a central .agents/skills/hf-cli/ directory, and relative symlinks are created from the agent-specific directories (e.g., .claude/skills/hf-cli → ../../.agents/skills/hf-cli/). This ensures a single source of truth when installing for multiple agents.
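
The central-directory-plus-relative-symlink layout can be sketched in a few lines of Python. This is an illustration of the idea, not the CLI's actual code; the helper name and exact paths are assumptions:

```python
# Sketch: install skill files once under .agents/skills/<skill>/ and give
# each agent directory a *relative* symlink back to that central copy.
import os
from pathlib import Path

def link_skill(project: Path, agent_dir: str, skill: str = "hf-cli") -> Path:
    central = project / ".agents" / "skills" / skill
    central.mkdir(parents=True, exist_ok=True)
    link = project / agent_dir / "skills" / skill
    link.parent.mkdir(parents=True, exist_ok=True)
    # A relative target (e.g. ../../.agents/skills/hf-cli) keeps the link
    # valid even if the whole project directory is moved or cloned.
    target = os.path.relpath(central, link.parent)
    if not link.exists():
        link.symlink_to(target, target_is_directory=True)
    return link
```

Because the target is relative, installing for a second agent (say .codex/) just adds another cheap symlink to the same files.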

  • Add hf skills add CLI command by @julien-c in #3741
  • [CLI] hf skills add installs hf-cli skill to central location with symlinks by @hanouticelina in #3755

🖥️ Improved CLI Help Output

The CLI help output has been reorganized to be more informative and agent-friendly:

  • Commands are now grouped into Main commands and Help commands
  • An Examples section shows common usage patterns
  • A Learn more section links to the documentation

> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...

  Manage local cache directory.

Options:
  --help  Show this message and exit.

Main commands:
  ls      List cached repositories or revisions.
  prune   Remove detached revisions from the cache.
  rm      Remove cached repositories or revisions.
  verify  Verify checksums for a single repo revision from cache or a local
          directory.

Examples
  $ hf cache ls
  $ hf cache ls --revisions
  $ hf cache ls --filter "size>1GB" --limit 20
  $ hf cache ls --format json
  $ hf cache prune
  $ hf cache prune --dry-run
  $ hf cache rm model/gpt2
  $ hf cache rm <revision_hash>
  $ hf cache rm model/gpt2 --dry-run
  $ hf cache rm model/gpt2 --yes
  $ hf cache verify gpt2
  $ hf cache verify gpt2 --revision refs/pr/1
  $ hf cache verify my-dataset --repo-type dataset

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli

  • [CLI] improve hf CLI help output by @hanouticelina in #3743

📊 Evaluation Results Module

The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like MMLU-Pro, HLE, GPQA) host leaderboards, and model repos store evaluation scores in .eval_results/*.yaml files. These results automatically appear on both the model page and the benchmark's leaderboard. See the Evaluation Results documentation for more details.

We added helpers in huggingface_hub to work with this format:

  • EvalResultEntry dataclass representing evaluation scores
  • eval_result_entries_to_yaml() to serialize entries to YAML format
  • parse_eval_result_entries() to parse YAML data back into EvalResultEntry objects

import yaml
from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file

entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)

  • Add evaluation results module by @hanouticelina in #3633
  • Eval results synchronization by @Wauplin in #3718
  • Eval results notes by @Wauplin in #3738
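
For reading results back, the same list-of-entries shape round-trips. Here is a minimal, self-contained sketch of that structure using a stand-in dataclass (the real EvalResultEntry and parse_eval_result_entries() ship with huggingface_hub; names below are illustrative):

```python
from dataclasses import dataclass, asdict

# Stand-in for huggingface_hub's EvalResultEntry, for illustration only.
@dataclass
class EvalEntry:
    dataset_id: str
    task_id: str
    value: float

def to_payload(entries: list[EvalEntry]) -> list[dict]:
    # The list-of-dicts shape that gets dumped into .eval_results/*.yaml.
    return [asdict(e) for e in entries]

def from_payload(payload: list[dict]) -> list[EvalEntry]:
    # Inverse: rebuild entry objects from parsed YAML data.
    return [EvalEntry(**d) for d in payload]
```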

🖥️ Other CLI Improvements

New hf papers ls command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.

hf papers ls                       # List most recent daily papers
hf papers ls --sort=trending       # List trending papers
hf papers ls --date=2025-01-23     # List papers from a specific date
hf papers ls --date=today          # List today's papers

  • Add hf papers ls CLI command by @julien-c in #3723

New hf collections commands for managing collections from the CLI:

# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending

# Create a collection
hf collections create "My Models" --description "Favorites" --private

# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"

# Get info
hf collections info user/my-coll

# Delete
hf collections delete user/my-coll

  • [CLI] Add hf collections commands by @Wauplin in #3767

Other CLI-related improvements:

  • [CLI] output format option for ls CLIs by @hanouticelina in #3735
  • [CLI] Dynamic table columns based on --expand field by @hanouticelina in #3760
  • [CLI] Adds centralized error handling by @hanouticelina in #3754
  • [CLI] exception handling scope by @hanouticelina in #3748
  • Update CLI help output in docs to include new commands by @julien-c in #3722

📊 Jobs

Multi-GPU training commands are now supported with torchrun and accelerate launch:

> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py

You can also pass local config files alongside your scripts:

> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch -- torchrun script.py config.yml

New hf jobs hardware command to list available hardware options:

> hf jobs hardware
NAME         PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR 
------------ ---------------------- -------- ------- ---------------- -------- --------- 
cpu-basic    CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01     
cpu-upgrade  CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03     
t4-small     Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40     
t4-medium    Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60     
a10g-small   Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00     
a10g-large   Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50     
a10g-largex2 2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00     
a10g-largex4 4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00     
a100-large   Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50     
a100x4       4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00    
a100x8       8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00    
l4x1         1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80     
l4x4         4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80     
l40sx1       1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80     
l40sx4       4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30     
l40sx8       8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50  
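
As a sanity check on the table, COST/HOUR is simply COST/MIN × 60 (both columns are rounded). A throwaway helper for estimating a job's charge from the per-minute rate (illustrative only, not part of the CLI):

```python
def job_cost(cost_per_min: float, minutes: float) -> float:
    # Estimated charge: per-minute rate times runtime, rounded to cents.
    # The table's COST/HOUR column is this helper with minutes=60.
    return round(cost_per_min * minutes, 2)
```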

Better filtering with label support and negation:

> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-0.6B
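
The filter syntax above (key=value, key!=value for negation, and label=<name> or label=<name>=<value>) can be sketched with a tiny parser. This is an illustration of the syntax only, not the CLI's actual implementation:

```python
# Illustrative parser for `--filter` expressions; not hf's real code.
def parse_filter(expr: str) -> tuple[str, str, bool]:
    """Split a filter expression into (key, value, negated)."""
    if "!=" in expr:
        key, value = expr.split("!=", 1)
        return key, value, True
    # Split only on the first '=' so label values like "model=..." survive.
    key, value = expr.split("=", 1)
    return key, value, False
```
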

  • [Jobs] Support multi gpu training commands by @lhoestq in #3674
  • [Jobs] List available hardware by @Wauplin in #3693
  • [Jobs] Better jobs filtering in CLI: labels and negation by @lhoestq in #3742
  • Pass local script and config files to job by @lhoestq in #3724
  • Support setting Label in Jobs API by @Wauplin in #3719
  • Pass namespace parameter to fetch job logs in jobs CLI by @Praful932 in #3736
  • Add more error handling output to hf jobs cli commands by @davanstrien in #3744

⚡️ Inference

  • Add dimensions & encoding_format parameter to InferenceClient for output embedding size by @mishig25 in #3671
  • feat: zai-org provider supports text to image by @tomsun28 in #3675
  • [Inference Providers] fix fal image urls payload by @hanouticelina in #3746
  • Fix Replicate image-to-image compatibility with different model schemas by @hanouticelina in #3749

🔧 QoL Improvements

  • add source org field by @hanouticelina in #3694
  • add num_papers field to Organization class by @cfahlgren1 in #3695
  • Add limit param to list_papers API method by @Wauplin in #3697
  • Repo commit count warning by @Wauplin in #3698
  • List datasets benchmark alias by @Wauplin in #3734
  • List repo files repoType by @Wauplin in #3753
  • Update hardware list in SpaceHardware enum by @lhoestq in #3756
  • Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by @Wauplin in #3751
  • Default _endpoint to None in CommitInfo by @tomaarsen in #3737
  • Update MAX_FILE_SIZE_GB from 50 to 200 to match hub-docs PR #2169 by @davanstrien in #3696
  • Pass kwargs to post init in dataclasses by @zucchini-nlp in #3771
  • Add retry/backoff when fetching Xet connection info to handle 502 errors by @aabhathanki in #3768

📖 Documentation

  • Wildcard pattern documentation by @hanouticelina in #3710
  • Add link to Hub Jobs documentation by @gary149 in #3712
  • Update HTTP backend configuration link to main branch by @IliasAarab in #3713
  • Correct img tag style in README.md by @sadnesslovefreedom-debug in #3689

🐛 Bug and typo fixes

  • Fix endpoint not forwarded in CommitUrl by @Wauplin in #3679
  • fix curlify with streaming request by @hanouticelina in #3692
  • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by @leq6c in #3685
  • fix resolve_path() with special char @ by @lhoestq in #3704
  • Fix cache verify incorrectly reporting folders as missing files by @Mitix-EPI in #3707
  • Fix multi user cache lock permissions by @hanouticelina in #3714
  • [CLI] Fix typo in CLI error handling by @hanouticelina in #3757
  • Log 'x-amz-cf-id' on http error (if no request id) by @Wauplin in #3759
  • [Fix] Filter datasets by benchmark official by @Wauplin in #3761

🏗️ Internal

  • Ignore unused-ignore-comment warnings in ty for mypy compatibility by @hanouticelina in #3691
  • Skip sync test on Windows Python 3.14 by @Wauplin in #3700
  • Upgrade GitHub Actions to latest versions by @salmanmkc in #3729
  • Remove canonical dataset test case from test_access_repositories_lists by @hanouticelina in #3740
  • Fix style issues in CI by @Wauplin in #3773

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @tomsun28
    • feat: zai-org provider supports text to image (#3675)
  • @leq6c
    • Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile (#3685)
  • @Mitix-EPI
    • Fix cache verify incorrectly reporting folders as missing files (#3707)
  • @Praful932
    • Pass namespace parameter to fetch job logs in jobs CLI (#3736)
  • @aabhathanki
    • Add retry/backoff when fetching Xet connection info to handle 502 errors (#3768)
