Hugging Face / huggingface_hub — Releases
20 releases tracked (avg. 6/mo), v1.3.2 → v1.11.0
Sep 15, 2025
[v0.34.5]: Welcoming Scaleway as Inference Providers!

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.4...v0.34.5

⚡ New provider: Scaleway

[!Tip] All supported Scaleway models can be found here. For more details, check out its documentation page.

Scaleway is a European cloud provider, serving the latest LLM models through its Generative APIs alongside a complete cloud ecosystem.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
  • feat: add scaleway inference provider by @Gnoale in #1925
Aug 8, 2025
[v0.34.4] Support Image to Video inference + QoL in jobs API, auth and utilities

The biggest update is support for the Image-To-Video task with the Fal AI inference provider.

  • [Inference] Support image to video task #3289 by @hanouticelina
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)

And some quality of life improvements:

  • Add type to job owner #3291 by @drbh
  • Include HF_HUB_DISABLE_XET in the environment dump #3290 by @hanouticelina
  • Whoami: custom message only on unauthorized #3288 by @Wauplin
  • Add validation warnings for repository limits in upload_large_folder #3280 by @davanstrien
  • Add timeout info to Jobs guide docs #3281 by @davanstrien
  • [Jobs] Use current or stored token in a Job secrets #3272 by @lhoestq
  • Fix bash history expansion in hf jobs example #3277 by @nyuuzyou

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.3...v0.34.4

Jul 29, 2025
[v0.34.3] Jobs improvements and `whoami` user prefix
  • [Jobs] Update uv image #3270 by @lhoestq
  • [Update] HF Jobs Documentation #3268 by @ariG23498
  • Add 'user:' prefix to whoami command output #3267 by @gary149

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.2...v0.34.3

Jul 28, 2025
[v0.34.2] Bug fixes: Windows path handling & resume download size fix
  • bug fix: only extend path on window sys in #3265 by @vealocia
  • Fix bad total size after resuming download in #3234 by @DKingAlpha

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.1...v0.34.2

Jul 25, 2025
[v0.34.1] [CLI] print help if no command provided
Jul 24, 2025
[v0.34.0] Announcing Jobs: a new way to run compute on Hugging Face!

🔥🔥🔥 Announcing Jobs: a new way to run compute on Hugging Face!

We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new hf jobs command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs - all with simple, intuitive commands.

Key features:

  • 🐳 Docker-like CLI: Familiar commands (run, ps, logs, inspect, cancel) to run and manage jobs
  • 🔥 Any Hardware: Instantly access CPUs, T4/A10G/A100 GPUs, and TPUs with a simple flag
  • 📦 Run Anything: Use Docker images, HF Spaces, or custom containers
  • 📊 Live Monitoring: Stream logs in real-time, just like running locally
  • 💰 Pay-as-you-go: Only pay for the seconds you use
  • 🧬 UV Runner: Run Python scripts with inline dependencies using uv (experimental)

All features are available both from Python (run_job, list_jobs, etc.) and the CLI (hf jobs).

Example usage:

# Run a Python script on the cloud
hf jobs run python:3.12 python -c "print('Hello from the cloud!')"

# Use a GPU
hf jobs run --flavor=t4-small --namespace=huggingface ubuntu nvidia-smi

# List your jobs
hf jobs ps

# Stream logs from a job
hf jobs logs <job-id>

# Inspect job details
hf jobs inspect <job-id>

# Cancel a running job
hf jobs cancel <job-id>

# Run a UV script (experimental)
hf jobs uv run my_script.py --flavor=a10g-small --with=trl

You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental uv runner for Python scripts with inline dependencies.
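The uv runner relies on inline script metadata (PEP 723): dependencies are declared in a comment block at the top of the script and installed by uv before the script runs. A minimal illustrative script (the dependency list here is only an example; the body uses just the standard library so it also runs with plain Python):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["huggingface_hub"]  # uv installs these before running
# ///

# The comment block above is PEP 723 inline metadata, read by `uv run`.
import json

result = {"message": "Hello from a uv script!", "ok": True}
print(json.dumps(result))
```

Running this with `hf jobs uv run` (or locally with `uv run`) installs the declared dependencies into an ephemeral environment first.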

Check out the Jobs guide for more examples and details.

  • [Jobs] Add huggingface-cli jobs commands by @lhoestq #3211
  • Rename huggingface-cli jobs to hf jobs by @Wauplin #3250
  • Docs: link to jobs cli docs by @lhoestq #3253
  • [Jobs] Mention PRO is required by @Wauplin #3257

🚀 The CLI is now hf! (formerly huggingface-cli)

Glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from huggingface-cli to hf! The legacy huggingface-cli remains available without any breaking change, but is officially deprecated. We took the opportunity to update the syntax to a more modern command format: hf <resource> <action> [options] (e.g. hf auth login, hf repo create, hf jobs run).

Run hf --help to learn more about the CLI options.

 hf --help
usage: hf <command> [<args>]

positional arguments:
  {auth,cache,download,jobs,repo,repo-files,upload,upload-large-folder,env,version,lfs-enable-largefiles,lfs-multipart-upload}
                        hf command helpers
    auth                Manage authentication (login, logout, etc.).
    cache               Manage local cache directory.
    download            Download files from the Hub
    jobs                Run and manage Jobs on the Hub.
    repo                Manage repos on the Hub.
    repo-files          Manage files in a repo on the Hub.
    upload              Upload a file or a folder to the Hub. Recommended for single-commit uploads.
    upload-large-folder
                        Upload a large folder to the Hub. Recommended for resumable uploads.
    env                 Print information about the environment.
    version             Print information about the hf version.

options:
  -h, --help            show this help message and exit
  • Rename CLI to 'hf' + reorganize syntax by @Wauplin in #3229
  • Rename huggingface-cli jobs to hf jobs by @Wauplin in #3250

⚡ Inference

🖼️ Image-to-image

Added support for the image-to-image task in the InferenceClient for the Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")
# or: client = InferenceClient(provider="replicate")

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)
  • [Inference Providers] add image-to-image support for Replicate provider by @hanouticelina in #3188
  • [Inference Providers] add image-to-image support for fal.ai provider by @hanouticelina in #3187

In addition to this, it is now possible to directly pass a PIL.Image as input to the InferenceClient.

  • Add PIL Image support to InferenceClient by @NielsRogge in #3199

🤖 Tiny-Agents

tiny-agents got a nice update to deal with environment variables and secrets. We've also changed its input format to follow the VSCode MCP config format more closely. Here is an up-to-date config to run the GitHub MCP Server with a token:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "github-personal-access-token",
      "description": "Github Personal Access Token (read-only)",
      "password": true
    }
  ],
  "servers": [
    {
     "type": "stdio",
     "command": "docker",
     "args": [
       "run",
       "-i",
       "--rm",
       "-e",
       "GITHUB_PERSONAL_ACCESS_TOKEN",
       "-e",
       "GITHUB_TOOLSETS=repos,issues,pull_requests",
       "ghcr.io/github/github-mcp-server"
     ],
     "env": {
       "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github-personal-access-token}"
     }
    }
  ]
}
  • [Tiny-Agent] Fix headers handling + secrets management by @Wauplin in #3166
  • [tiny-agents] Configure inference API key from inputs + keep empty dicts in chat completion payload by @hanouticelina in #3226

🐛 Bug fixes

InferenceClient and tiny-agents got a few quality of life improvements and bug fixes:

  • Recursive filter_none in Inference Providers by @Wauplin in #3178
  • [Inference] Remove default params values for text generation by @hanouticelina in #3192
  • [Inference] Correctly build chat completion URL with query parameters by @hanouticelina in #3200
  • Update tiny-agents example by @Wauplin in #3205
  • Fix "failed to parse tools" due to mcp EXIT_LOOP_TOOLS not following the ChatCompletionInputFunctionDefinition model by @nicoloddo in #3219
  • [Tiny agents] Add tool call to messages by @NielsRogge in #3159
  • omit parameters for default tools in tiny-agent by @hanouticelina in #3214

📤 Xet

The Xet integration is now stable and production-ready. The majority of file transfers on new repos are now handled using this protocol. A few improvements have been shipped to improve the developer experience during uploads:

  • Improved progress reporting for Xet uploads by @hoytak in #3096
  • upload large folder operations uses batches of files for preupload-lfs jobs for xet-enabled repositories by @assafvayner in #3228
  • Override xet refresh route's base URL with HF Endpoint by @hanouticelina in #3180

Documentation has been written to better explain the protocol and its options:

  • Updates to Xet upload/download docs by @jsulz in #3174
  • Updating Xet caching docs by @jsulz in #3190
  • Suppress xet install WARN if HF_HUB_DISABLE_XET by @rajatarya in #3206

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

  • fix: update payload preparation to merge parameters into the output dictionary by @mishig25 in #3160
  • fix(inference_endpoints): use GET healthRoute instead of GET / to check status by @mfuntowicz in #3165
  • Update hf_api.py by @andimarafioti in #3194
  • [Docs] Remove Inference API references in docs by @hanouticelina in #3197
  • Align HfFileSystem and HfApi for the expand argument when listing files in repos by @lhoestq in #3195
  • Solve encoding issue of repocard.py by @WilliamRabuel in #3235
  • Fix pagination test by @Wauplin in #3246
  • Fix Incomplete File Not found on windows systems by @JorgeMIng in #3247
  • [Internal] Fix docstring param spacing check and libcst incompatibility with Python 3.13 by @hanouticelina in #3251
  • [Bot] Update inference types by @HuggingFaceInfra in #3104
  • Fix snapshot_download when unreliable number of files by @Wauplin in #3241
  • fix typo by @Wauplin (direct commit on main)
  • fix sessions closing warning with AsyncInferenceClient by @hanouticelina in #3252
  • Deprecate missing_mfa, missing_sso, adding security_restrictions by @Kakulukian in #3254

🏗️ internal

  • swap gh style bot action token by @hanouticelina in #3171
  • improve style bot comment (notify earlier and update later) by @ydshieh in #3179
  • Update tests following server-side changes by @hanouticelina in #3181
  • [FIX DOCSTRING] Update hf_api.py by @cakiki in #3182
  • Bump to 0.34.0.dev0 by @Wauplin in #3222
  • Do not generate Chat Completion types anymore by @Wauplin in #3231
[v0.33.5] [Inference] Fix a `UserWarning` when streaming with `AsyncInferenceClient`

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.4...v0.33.5

Jul 11, 2025
[v0.33.4] [Tiny-Agent]: Fix schema validation error for default MCP tools
[v0.33.3] [Tiny-Agent]: Update tiny-agents example
  • Update tiny-agents example #3205

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.2...v0.33.3

Jul 2, 2025
[v0.33.2] [Tiny-Agent]: Switch to VSCode MCP format

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.1...v0.33.2

  • [Tiny-Agent] Switch to VSCode MCP format + fix headers handling #3166 by @Wauplin

Breaking changes:

  • no more config nested mapping => everything at root level
  • headers at root level instead of inside options.requestInit
  • updated the way values are pulled from ENV (based on input id)

Example of agent.json:

{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "hf-token",
      "description": "Token for Hugging Face API access",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer ${input:hf-token}"
      }
    }
  ]
}

Find more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents
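Conceptually, the new resolution works like this: each `${input:<id>}` placeholder in the config is matched against a declared input and filled in from the environment. A rough illustrative sketch of that substitution (not the actual tiny-agents code, which also prompts interactively for `promptString` inputs and masks `password` values):

```python
import re


def resolve_inputs(text: str, env: dict) -> str:
    """Replace ${input:<id>} placeholders with values looked up by input id."""

    def lookup(match: re.Match) -> str:
        input_id = match.group(1)
        # Derive an env var name from the input id, e.g. hf-token -> HF_TOKEN
        env_key = input_id.replace("-", "_").upper()
        if env_key not in env:
            raise KeyError(f"No value provided for input '{input_id}'")
        return env[env_key]

    return re.sub(r"\$\{input:([a-zA-Z0-9_-]+)\}", lookup, text)


header = resolve_inputs("Bearer ${input:hf-token}", {"HF_TOKEN": "hf_xxx"})
print(header)  # Bearer hf_xxx
```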

Jun 25, 2025
[v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message handling Improvement, and Inference Endpoints Health Check Update

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.0...v0.33.1

This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoint health check:

  • [Tiny agents] Add tool call to messages #3159 by @NielsRogge
  • fix: update payload preparation to merge parameters into the output dictionary #3160 by @mishig25
  • fix(inference_endpoints): use GET healthRoute instead of GET / to check status #3165 by @mfuntowicz
  • Recursive filter_none in Inference Providers #3178 by @Wauplin
Jun 11, 2025
[v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!

⚡ New provider: Featherless.AI

Featherless AI is a serverless AI inference provider with unique model loading and GPU orchestration abilities that makes an exceptionally large catalog of models available for users. Providers often offer either a low cost of access to a limited set of models, or an unlimited range of models with users managing servers and the associated costs of operation. Featherless provides the best of both worlds offering unmatched model range and variety but with serverless pricing. Find the full list of supported models on the models page.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528", 
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ], 
)

print(completion.choices[0].message)
  • ✨ Support for Featherless.ai as inference provider by @pohnean in #3081

⚡ New provider: Groq

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.

Groq offers fast AI inference for openly-available models. They provide an API that allows developers to easily integrate these models into their applications. It offers an on-demand, pay-as-you-go model for accessing a wide range of openly-available LLMs.

from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)
  • ADd Groq provider by @Wauplin in #3157

🤖 MCP and Tiny-agents

It is now possible to run tiny-agents against a local server, e.g. llama.cpp. 100% local agents are right around the corner!

  • [MCP] Add local/remote endpoint inference support by @hanouticelina in #3121

We also fixed some DX (developer experience) issues in the tiny-agents CLI:

  • Fix tiny-agents cli exit issues by @Wauplin in #3125
  • [MCP] reinject JSON parse & runtime tool errors back into the chat history by @hanouticelina in #3137

📚 Documentation

New translation from the Hindi-speaking community, for the community!

  • Added Hindi translation for git_vs_http.md in concepts section by @february-king in #3156

🛠️ Small fixes and maintenance

😌 QoL improvements

  • Make hf-xet more silent by @Wauplin in #3124
  • [HfApi] Collections in collections by @hanouticelina in #3120
  • Fix inference search by @Wauplin in #3022
  • [Inference Providers] Raise warning if provider's status is in error mode by @hanouticelina in #3141

🐛 Bug and typo fixes

  • Fix snapshot_download on very large repo (>50k files) by @Wauplin in #3122
  • fix tqdm_class argument of subclass of tqdm by @andyxning in #3111
  • fix quality by @hanouticelina in #3128
  • second example in oauth documentation by @thanosKivertzikidis in #3136
  • fix table question answering by @hanouticelina in #3154

🏗️ internal

  • Create claude.yml by @julien-c in #3118
  • [Internal] prepare for 0.33.0 release by @hanouticelina in #3138

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @pohnean
    • ✨ Support for Featherless.ai as inference provider (#3081)
  • @february-king
    • Added Hindi translation for git_vs_http.md in concepts section (#3156)
[v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid
  • Fix for wrongly saved upload_mode/remote_oid #3113

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.5...v0.32.6

Jun 10, 2025
[v0.32.5] [Tiny-Agents] inject environment variables in headers
  • Inject env var in headers + better type annotations #3142

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.4...v0.32.5

Jun 3, 2025
[v0.32.4]: Bug fixes in `tiny-agents`, and fix input handling for question-answering task.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.3...v0.32.4

This release introduces bug fixes to tiny-agents and InferenceClient.question_answering:

  • [MCP] asyncio.wait() does not accept bare coroutines #3135 by @hanouticelina
  • [MCP] Fix vestigial token yield on early exit #3132 by @danielholanda
  • Fix question_answering #3134 by @eugenos-programos
May 30, 2025
[v0.32.3]: Handle env variables in `tiny-agents`, better CLI exit and handling of MCP tool calls arguments

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.2...v0.32.3

This release introduces some improvements and bug fixes to tiny-agents:

  • [tiny-agents] Handle env variables in tiny-agents (Python client) #3129
  • [Fix] tiny-agents cli exit issues #3125
  • Improve Handling of MCP Tool Call Arguments #3127
May 27, 2025
[v0.32.2]: Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.1...v0.32.2

May 26, 2025
[v0.32.1]: hot-fix: Fix tiny agents on Windows
May 22, 2025
[v0.32.0]: MCP Client, Tiny Agents CLI and more!

🤖 Powering LLMs with Tools: MCP Client & Tiny Agents CLI

✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external Tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!

pip install -U huggingface_hub[mcp]

In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:

import os

from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient

async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")
        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]
        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")

            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

For even simpler development, we now also offer a higher-level Agent class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around MCPClient. It's designed to be a simple while loop built right on top of an MCPClient.
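To make that "simple while loop" concrete, here is a stripped-down sketch of what such an agent loop does, using a stubbed model function in place of MCPClient (illustrative only, not the real Agent implementation):

```python
# Minimal agent chat loop sketch: ask the model, execute any tool call it
# requests, feed the result back, and stop once it answers with plain text.
# `fake_model` stands in for MCPClient / an LLM; everything here is illustrative.

def fake_model(messages, tools):
    # Pretend the model first asks for a tool, then answers using its result.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "tool_call": {"name": "add", "args": (2, 3)}}
    tool_result = next(m for m in messages if m["role"] == "tool")["content"]
    return {"role": "assistant", "content": f"The answer is {tool_result}"}


def run_agent(user_prompt, tools):
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages, tools)
        messages.append(reply)
        if "tool_call" not in reply:
            return messages  # plain text answer -> turn is over
        call = reply["tool_call"]
        result = tools[call["name"]](*call["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})


history = run_agent("What is 2 + 3?", tools={"add": lambda a, b: a + b})
print(history[-1]["content"])  # The answer is 5
```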

You can run these Agents directly from the command line:

> tiny-agents run --help

 Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...

 Run the Agent in the CLI

╭─ Arguments ─────────────────────────────────────────────────────────────────╮
│ path  [PATH]  Path to a local folder containing an agent.json file or a     │
│               built-in agent stored in the 'tiny-agents/tiny-agents'        │
│               Hugging Face dataset                                          │
│               (https://huggingface.co/datasets/tiny-agents/tiny-agents)     │
╰─────────────────────────────────────────────────────────────────────────────╯
╭─ Options ───────────────────────────────────────────────────────────────────╮
│ --help  Show this message and exit.                                         │
╰─────────────────────────────────────────────────────────────────────────────╯

You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.

This is an early version of the MCPClient, and community contributions are welcome 🤗

  • [MCP] Add documentation by @hanouticelina in #3102
  • [MCP] add support for SSE + HTTP by @Wauplin in #3099
  • [MCP] Tiny Agents in Python by @hanouticelina in #3098
  • PoC: InferenceClient is also a MCPClient by @julien-c in #2986

⚡ Inference Providers

Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!

  • [Inference Providers] Add feature extraction task for Nebius by @diadorer in #3057

We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥

  • 🗿 adding support for Nscale inference provider by @nbarr07 in #3068

We also fixed compatibility issues with structured outputs across providers by ensuring the InferenceClient follows the OpenAI API spec for structured outputs.

  • [Inference Providers] Fix structured output schema in chat completion by @hanouticelina in #3082
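Following the OpenAI spec, a structured-output request attaches a `response_format` payload carrying a JSON schema. An illustrative payload shape (field names follow the OpenAI API spec; the schema itself is a made-up example):

```python
import json

# Illustrative response_format payload in the OpenAI structured-output shape.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",  # hypothetical schema name
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["city", "temperature_c"],
        },
        "strict": True,  # ask the provider to enforce the schema
    },
}

# Plain JSON-serializable data, ready to send with a chat-completion request.
print(json.dumps(response_format, indent=2))
```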

💾 Serialization

We've introduced a new @strict decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:

from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")


@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")

config = Config(model_type="bert", hidden_size=24)   # Valid
config = Config(model_type="bert", hidden_size=-1)   # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)   # Raises StrictDataclassClassValidationError

This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.

  • New @strict decorator for dataclass validation by @Wauplin in #2895
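The "automatic validation based on type hints" mentioned above can be emulated in a few lines. An illustrative sketch of the idea (not the huggingface_hub implementation):

```python
# Sketch of type-hint-based validation: compare each field's value against its
# annotation at init time. Illustrative only, not the @strict implementation.
from dataclasses import dataclass
from typing import get_type_hints


def check_types(obj):
    hints = get_type_hints(type(obj))  # resolves annotations to real types
    for name, expected in hints.items():
        value = getattr(obj, name)
        # Only handle plain classes (int, str, ...) in this sketch.
        if isinstance(expected, type) and not isinstance(value, expected):
            raise TypeError(
                f"{name} must be {expected.__name__}, got {type(value).__name__}"
            )


@dataclass
class Config:
    model_type: str
    hidden_size: int = 16

    def __post_init__(self):
        check_types(self)


Config(model_type="bert", hidden_size=24)  # valid
try:
    Config(model_type="bert", hidden_size="big")  # wrong type
except TypeError as e:
    print(e)  # hidden_size must be int, got str
```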

This release also brings support for DTensor in the _get_unique_id / get_torch_storage_size helpers, allowing transformers to seamlessly use save_pretrained with DTensor.

  • Feat: support DTensor when saving by @S1ro1 in #3042

✨ HF API

When creating an Endpoint, the default for scale_to_zero_timeout is now None, meaning endpoints will no longer scale to zero by default unless explicitly configured.

  • Dont set scale to zero as default when creating an Endpoint by @tomaarsen in #3062

We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.

  • Add helpers to handle OAuth in a FastAPI app by @Wauplin in #2684

📚 Documentation

We now have much more detailed documentation for Inference! This includes more detailed explanations and examples clarifying that the InferenceClient can also be used effectively with local endpoints (llama.cpp, vLLM, MLX, etc.).

  • [Inference] Mention local endpoints inference + remove separate HF Inference API mentions by @hanouticelina in #3085

🛠️ Small fixes and maintenance

😌 QoL improvements

  • bump hf-xet min version by @hanouticelina in #3078
  • Add api.endpoint to arguments for _get_upload_mode by @matthewgrossman in #3077
  • surface 401 unauthorized errors more directly in snapshot_download by @hanouticelina in #3092

🐛 Bug and typo fixes

  • [HfFileSystem] Fix end-of-file read() by @lhoestq in #3080
  • [Inference Endpoints] fix inference endpoint creation with custom image by @hanouticelina in #3076
  • Expand file lock scope to resolve concurrency issues during downloads by @humengyu2012 in #3063
  • Documentation Issue by @thanosKivertzikidis in #3091
  • Do not fetch /preupload if already done in upload-large-folder by @Wauplin in #3100

🏗️ internal

  • [Internal] make hf-xet (again) a required dependency #3103
  • fix conda by @hanouticelina in #3058
  • fix create branch failure test by @hanouticelina in #3074
  • [Internal] make hf-xet optional by @hanouticelina in #3079

Community contributions

  • Refactor huggingface-cli repo create command by @Wauplin in #3094
  • Update mypy to 1.15.0 (current latest) by @Wauplin in #3095

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @diadorer
    • [Inference Providers] Add feature extraction task for Nebius (#3057)
  • @tomaarsen
    • Dont set scale to zero as default when creating an Endpoint (#3062)
  • @nbarr07
    • 🗿 adding support for Nscale inference provider (#3068)
  • @S1ro1
    • Feat: support DTensor when saving (#3042)
  • @humengyu2012
    • Expand file lock scope to resolve concurrency issues during downloads (#3063)
May 19, 2025
[v0.31.4]: strict dataclasses, support `DTensor` saving & some bug fixes

This release includes some new features and bug fixes:

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.31.2...v0.31.4

Latest: v1.11.0 · Tracking since Dec 20, 2023 · Last fetched Apr 19, 2026