[v1.13.0] New CLI commands, output formatting, and HF URI parsing
This release adds three new CLI capabilities for exploring Hub content. hf models card, hf datasets card, and hf spaces card fetch the README of any repo and print it to stdout, with --metadata (YAML frontmatter as JSON) and --text (prose only) flags for splitting the card into its structured and unstructured parts. Calling hf models ls <repo_id>, hf datasets ls <repo_id>, or hf spaces ls <repo_id> now switches from listing repos to listing the files inside that repo, with --tree, -R, -h, and --revision options mirroring the existing hf buckets ls behavior. Finally, hf datasets leaderboard <dataset_id> surfaces model scores submitted to a benchmark dataset, making it easy to compare models from the terminal.
# Get model card metadata as JSON
hf models card google/gemma-4-31B-it --metadata --format json
# List files in a model repo (tree view with sizes)
hf models ls meta-llama/Llama-3.2-1B-Instruct --tree -h
# Show top 5 models on SWE-bench
hf datasets leaderboard SWE-bench/SWE-bench_Verified --limit 5
Documentation: CLI guide
hf datasets leaderboard by @hanouticelina in #4154

Three new hf spaces subcommands bring full lifecycle control to the terminal. hf spaces pause and hf spaces restart stop or rebuild a Space (with --factory-reboot for a clean rebuild), and hf spaces settings lets you configure sleep time and hardware in one call. A companion hf spaces hardware command lists all available hardware flavors with pricing, so you can discover options before changing settings. Pause and restart include a confirmation prompt (-y to skip) since they tear down the running container.
# Pause a Space when not in use (not billed while paused)
hf spaces pause username/my-space
# Assign GPU hardware and a 1-hour sleep timeout
hf spaces settings username/my-space --hardware t4-medium --sleep-time 3600
# List available hardware options
hf spaces hardware
Documentation: CLI guide → Spaces
hf spaces hardware command by @Wauplin in #4169
--hardware flag to hf spaces settings by @davanstrien in #4163

hf update replaces the auto-update prompt

The blocking interactive Y/n auto-update prompt at CLI startup is gone. It was catching too many non-interactive contexts (CI runners, Homebrew post-install hooks, Jupyter notebooks) and hanging automation. In its place, a single yellow stderr warning suggests running hf update, a new command that detects how hf was installed (Homebrew, standalone installer, or pip) and runs the right upgrade command. Set HF_HUB_DISABLE_UPDATE_CHECK=1 to silence the startup check entirely, for example in offline CI.
hf update
Documentation: CLI guide → Updating
hf update + drop interactive update prompt by @Wauplin in #4131

The --format, --json, and -q / --quiet flags are now handled globally by the CLI framework instead of being declared individually on each command. This means every hf command automatically accepts them: no more per-command --format boilerplate, and the flags are properly documented in a dedicated "Formatting options" section on every --help page. --format auto (the default) picks human for interactive terminals and agent when invoked by an AI agent, making CLI output automatically suitable for both people and tools.
# JSON output for scripting
hf models ls --search bert --limit 2 --json | jq '.[].id'
# IDs only, one per line
hf collections ls --owner nvidia -q
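The auto default essentially comes down to a terminal check plus agent detection. A minimal sketch of the selection logic (illustrative only, not the CLI's actual code; the function name is hypothetical):

```python
import sys

def pick_format(requested: str = "auto") -> str:
    # Explicit formats (json, human, ...) pass through unchanged;
    # "auto" resolves to a concrete one. The real CLI additionally
    # detects invocation by an AI agent.
    if requested != "auto":
        return requested
    return "human" if sys.stdout.isatty() else "agent"
```

In scripts and pipes stdout is not a TTY, which is why piping to jq (as above) yields machine-readable output without any extra flags.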
Documentation: CLI guide → Output formatting
hf:// URI parsing

A new parse_hf_uri function and HfUri dataclass provide a single source of truth for parsing hf://... strings across the library. Whether you reference a model, dataset, space, bucket, or file inside a repo, the parser handles all valid URI shapes (type prefixes, revisions, and paths) and rejects invalid ones with clear error messages. A companion parse_hf_mount / HfMount handles volume mount specifications (hf://...:/mnt:ro). Both are pure string parsers (no network calls) and round-trippable via .to_uri().
from huggingface_hub import parse_hf_uri, parse_hf_mount
parse_hf_uri("hf://datasets/namespace/my-dataset@refs/pr/3/train.json")
# HfUri(type='dataset', id='namespace/my-dataset', revision='refs/pr/3', path_in_repo='train.json')
parse_hf_mount("hf://buckets/my-org/my-bucket/sub/dir:/mnt:ro")
# HfMount(source=HfUri(type='bucket', id='my-org/my-bucket', ...), mount_path='/mnt', read_only=True)
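The mount grammar is simple enough to sketch in plain Python. This is an illustrative sketch only, not the library's parse_hf_mount (which also validates the hf:// source and builds an HfUri):

```python
def parse_mount_sketch(spec: str):
    """Split 'hf://...:/mnt:ro' into (source_uri, mount_path, read_only).

    Illustrative sketch of the grammar above, not the library code.
    """
    read_only = False
    if spec.endswith(":ro"):
        read_only, spec = True, spec[:-3]
    elif spec.endswith(":rw"):
        spec = spec[:-3]
    # The mount path is the last ':'-separated component; rpartition
    # splits on the rightmost ':' so the ':' inside 'hf://' is untouched.
    source, _, mount = spec.rpartition(":")
    return source, mount, read_only

parse_mount_sketch("hf://buckets/my-org/my-bucket/sub/dir:/mnt:ro")
# → ('hf://buckets/my-org/my-bucket/sub/dir', '/mnt', True)
```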
Documentation: HF URIs reference
Local scripts uploaded by hf jobs uv run are now stored in a {namespace}/jobs-artifacts bucket and mounted into the job container at /data instead of being base64-encoded into an environment variable. The old bash -c + xargs + base64 -d pipeline was fragile and required manual shell quoting. Bucket transport is simpler, easier to debug, and supports write-back: jobs can persist output artifacts to /data/ since the mount is read-write. The base64 transport path has been fully removed with no fallback.
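Inside the job container, write-back is plain file I/O against the mount point. A minimal sketch of a job script's output step (the /data mount path comes from the release notes; the fallback to a temp dir is only so the sketch also runs outside a job, and the metric values are illustrative):

```python
import json
import os
import tempfile

# In a real job the read-write bucket mount is /data; outside a job
# container we fall back to a temp dir so the sketch still runs.
data_dir = "/data" if os.path.isdir("/data") else tempfile.mkdtemp()

metrics = {"loss": 0.12, "step": 1000}  # illustrative values
out_path = os.path.join(data_dir, "metrics.json")
with open(out_path, "w") as f:
    json.dump(metrics, f)
```

Anything written under the mount persists to the {namespace}/jobs-artifacts bucket after the job finishes, which is what the base64-in-an-env-var transport could never do.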