Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.4...v0.34.5
Scaleway is a European cloud provider serving the latest LLMs through its Generative APIs, alongside a complete cloud ecosystem.

> [!TIP]
> All supported Scaleway models can be found here. For more details, check out its documentation page.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
The biggest update is support for the image-to-video task with the Fal AI inference provider:
```python
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> video = client.image_to_video("cat.jpg", model="Wan-AI/Wan2.2-I2V-A14B", prompt="turn the cat into a tiger")
>>> with open("tiger.mp4", "wb") as f:
...     f.write(video)
```
And some quality of life improvements:
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.3...v0.34.4
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.2...v0.34.3
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.34.1...v0.34.2
We're thrilled to introduce a powerful new command-line interface for running and managing compute jobs on Hugging Face infrastructure! With the new hf jobs command, you can now seamlessly launch, monitor, and manage jobs using a familiar Docker-like experience. Run any command in Docker images (from Docker Hub, Hugging Face Spaces, or your own custom images) on a variety of hardware including CPUs, GPUs, and TPUs - all with simple, intuitive commands.
Key features:
- Docker-like commands (`run`, `ps`, `logs`, `inspect`, `cancel`) to run and manage jobs
- Run UV scripts with inline dependencies (experimental)
- All features are available both from Python (`run_job`, `list_jobs`, etc.) and the CLI (`hf jobs`)
Example usage:
```sh
# Run a Python script on the cloud
hf jobs run python:3.12 python -c "print('Hello from the cloud!')"

# Use a GPU
hf jobs run --flavor=t4-small --namespace=huggingface ubuntu nvidia-smi

# List your jobs
hf jobs ps

# Stream logs from a job
hf jobs logs <job-id>

# Inspect job details
hf jobs inspect <job-id>

# Cancel a running job
hf jobs cancel <job-id>

# Run a UV script (experimental)
hf jobs uv run my_script.py --flavor=a10g-small --with=trl
```
You can also pass environment variables and secrets, select hardware flavors, run jobs in organizations, and use the experimental uv runner for Python scripts with inline dependencies.
Check out the Jobs guide for more examples and details.
`hf`! (formerly `huggingface-cli`)

We're glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from `huggingface-cli` to `hf`! The legacy `huggingface-cli` remains available without any breaking change, but is officially deprecated. We took the opportunity to update the syntax to a more modern command format: `hf <resource> <action> [options]` (e.g. `hf auth login`, `hf repo create`, `hf jobs run`).
Run `hf --help` to learn more about the CLI options.
```
$ hf --help
usage: hf <command> [<args>]

positional arguments:
  {auth,cache,download,jobs,repo,repo-files,upload,upload-large-folder,env,version,lfs-enable-largefiles,lfs-multipart-upload}
                        hf command helpers
    auth                Manage authentication (login, logout, etc.).
    cache               Manage local cache directory.
    download            Download files from the Hub
    jobs                Run and manage Jobs on the Hub.
    repo                Manage repos on the Hub.
    repo-files          Manage files in a repo on the Hub.
    upload              Upload a file or a folder to the Hub. Recommended for single-commit uploads.
    upload-large-folder
                        Upload a large folder to the Hub. Recommended for resumable uploads.
    env                 Print information about the environment.
    version             Print information about the hf version.

options:
  -h, --help            show this help message and exit
```
Added support for the image-to-image task in the InferenceClient for the Replicate and fal.ai providers, allowing quick image generation using FLUX.1-Kontext-dev:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")
# or: client = InferenceClient(provider="replicate")

with open("cat.png", "rb") as image_file:
    input_image = image_file.read()

# output is a PIL.Image object
image = client.image_to_image(
    input_image,
    prompt="Turn the cat into a tiger.",
    model="black-forest-labs/FLUX.1-Kontext-dev",
)
```
- image-to-image support for Replicate provider by @hanouticelina in #3188
- image-to-image support for fal.ai provider by @hanouticelina in #3187

In addition, it is now possible to directly pass a `PIL.Image` as input to the `InferenceClient`.
tiny-agents got a nice update for handling environment variables and secrets. We've also changed its input format to follow the VS Code config format more closely. Here is an up-to-date config to run the GitHub MCP Server with a token:
```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "github-personal-access-token",
      "description": "Github Personal Access Token (read-only)",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "stdio",
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "-e",
        "GITHUB_TOOLSETS=repos,issues,pull_requests",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github-personal-access-token}"
      }
    }
  ]
}
```
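At startup, tiny-agents prompts for each declared input and substitutes its value wherever the `${input:...}` placeholder appears. The substitution step can be sketched with the standard library (the function name and regex below are illustrative, not the library's actual code):

```python
import re

def resolve_inputs(text: str, inputs: dict[str, str]) -> str:
    """Replace ${input:<id>} placeholders with collected input values.

    Illustrative sketch only; tiny-agents' real resolution logic may differ.
    """
    return re.sub(
        r"\$\{input:([A-Za-z0-9_-]+)\}",
        lambda m: inputs[m.group(1)],
        text,
    )

env_value = resolve_inputs(
    "${input:github-personal-access-token}",
    {"github-personal-access-token": "ghp_example123"},
)
print(env_value)  # ghp_example123
```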
InferenceClient and tiny-agents got a few quality of life improvements and bug fixes:
The Xet integration is now stable and production-ready. The majority of file transfers on new repos are now handled by this protocol. A few improvements have been shipped to ease the developer experience during uploads:
Documentation has been written to better explain the protocol and its options:
- `healthRoute` instead of `GET /` to check status by @mfuntowicz in #3165
- `expand` argument when listing files in repos by @lhoestq in #3195
- `libcst` incompatibility with Python 3.13 by @hanouticelina in #3251
- `AsyncInferenceClient` https://github.com/huggingface/huggingface_hub/pull/3252

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.4...v0.33.5
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.3...v0.33.4
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.2...v0.33.3
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.1...v0.33.2
Breaking changes:
Example of agent.json:
```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "hf-token",
      "description": "Token for Hugging Face API access",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer ${input:hf-token}"
      }
    }
  ]
}
```
Find more examples in https://huggingface.co/datasets/tiny-agents/tiny-agents
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.33.0...v0.33.1
This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and updated inference endpoint health check:
Featherless AI is a serverless AI inference provider with unique model-loading and GPU-orchestration abilities that make an exceptionally large catalog of models available to users. Providers typically offer either low-cost access to a limited set of models, or an unlimited range of models where users manage the servers and the associated costs of operation. Featherless provides the best of both worlds: unmatched model range and variety, with serverless pricing. Find the full list of supported models on the models page.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
Groq offers fast AI inference for openly available models. They provide an API that allows developers to easily integrate these models into their applications, with an on-demand, pay-as-you-go model for accessing a wide range of openly available LLMs.

At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)
```
It is now possible to run tiny-agents against a local server, e.g. llama.cpp. 100% local agents are just around the corner!
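For instance, an agent config pointing at a local llama.cpp server could look like the following sketch. The `endpointUrl` field name mirrors the tiny-agents config format and the URL assumes llama.cpp's default OpenAI-compatible server address; both are assumptions, not verified against the library:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "endpointUrl": "http://localhost:8080/v1",
  "servers": []
}
```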
Fixing some DX issues in the tiny-agents CLI.
- tiny-agents cli exit issues by @Wauplin in #3125

New translation from the Hindi-speaking community, for the community!
The following contributors have made significant changes to the library over the last release:
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.5...v0.32.6
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.4...v0.32.5
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.3...v0.32.4
This release introduces bug fixes to tiny-agents and InferenceClient.question_answering:
- `asyncio.wait()` does not accept bare coroutines #3135 by @hanouticelina

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.2...v0.32.3
This release introduces some improvements and bug fixes to tiny-agents:
- tiny-agents cli exit issues #3125

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.1...v0.32.2
Patch release to fix #3116
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.32.0...v0.32.1
✨ The huggingface_hub library now includes an MCP Client, designed to empower Large Language Models (LLMs) with the ability to interact with external tools via the Model Context Protocol (MCP). This client extends the InferenceClient and provides a seamless way to connect LLMs to both local and remote tool servers!
```sh
pip install -U "huggingface_hub[mcp]"
```
In the following example, we use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider. We then add a remote MCP server, in this case, an SSE server which makes the Flux image generation tool available to the LLM:
```python
import os
from huggingface_hub import ChatCompletionInputMessage, ChatCompletionStreamOutput, MCPClient

async def main():
    async with MCPClient(
        provider="nebius",
        model="Qwen/Qwen2.5-72B-Instruct",
        api_key=os.environ["HF_TOKEN"],
    ) as client:
        await client.add_mcp_server(type="sse", url="https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse")

        messages = [
            {
                "role": "user",
                "content": "Generate a picture of a cat on the moon",
            }
        ]

        async for chunk in client.process_single_turn_with_tools(messages):
            # Log messages
            if isinstance(chunk, ChatCompletionStreamOutput):
                delta = chunk.choices[0].delta
                if delta.content:
                    print(delta.content, end="")
            # Or tool calls
            elif isinstance(chunk, ChatCompletionInputMessage):
                print(
                    f"\nCalled tool '{chunk.name}'. Result: '{chunk.content if len(chunk.content) < 1000 else chunk.content[:1000] + '...'}'"
                )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```
For even simpler development, we now also offer a higher-level Agent class. These 'Tiny Agents' simplify creating conversational Agents by managing the chat loop and state, essentially acting as a user-friendly wrapper around MCPClient. It's designed to be a simple while loop built right on top of an MCPClient.
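That "simple while loop" can be pictured with a stdlib-only stub (all names below are stand-ins for illustration, not huggingface_hub's actual implementation):

```python
def run_agent_loop(process_turn, get_user_input, max_turns=10):
    """Minimal chat loop: read user input, process one turn, repeat.

    Illustrative stub of the pattern Agent wraps around MCPClient;
    `process_turn` stands in for the model + tool-calling step.
    """
    history = []
    for _ in range(max_turns):
        user_message = get_user_input()
        if user_message is None:  # e.g. EOF or "exit"
            break
        history.append({"role": "user", "content": user_message})
        reply = process_turn(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Usage with stubbed I/O and a stubbed model:
inputs = iter(["hello", None])
history = run_agent_loop(
    process_turn=lambda h: f"echo: {h[-1]['content']}",
    get_user_input=lambda: next(inputs),
)
print(len(history))  # 2
```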
You can run these Agents directly from the command line:
```
> tiny-agents run --help

Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...

Run the Agent in the CLI

╭─ Arguments ──────────────────────────────────────────────────────────────────────────────╮
│ path [PATH] Path to a local folder containing an agent.json file or a built-in agent     │
│             stored in the 'tiny-agents/tiny-agents' Hugging Face dataset                 │
│             (https://huggingface.co/datasets/tiny-agents/tiny-agents)                    │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────────╮
│ --help    Show this message and exit.                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
```
You can run these Agents using your own local configs or load them directly from the Hugging Face dataset tiny-agents.
This is an early version of the MCPClient, and community contributions are welcome 🤗
- InferenceClient is also an MCPClient by @julien-c in #2986

Thanks to @diadorer, feature extraction (embeddings) inference is now supported with the Nebius provider!
We’re thrilled to introduce Nscale as an official inference provider! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models 🔥
We also fixed compatibility issues with structured outputs across providers by ensuring the InferenceClient follows the OpenAI API specification for structured outputs.
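For reference, a structured-output request uses an OpenAI-style `response_format` payload. Here is a sketch of building one with a made-up schema (the payload shape follows the OpenAI spec; the schema itself is an illustrative example, not from the library):

```python
# OpenAI-style structured-output request payload (illustrative schema).
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "capital",
        "schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# This dict would be passed as `response_format=` to
# client.chat.completions.create(...) on a provider that supports it.
print(response_format["type"])  # json_schema
```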
We've introduced a new @strict decorator for dataclasses, providing robust validation capabilities to ensure data integrity both at initialization and during assignment. Here is a basic example:
```python
from dataclasses import dataclass
from huggingface_hub.dataclasses import strict, as_validated_field

# Custom validator to ensure a value is positive
@as_validated_field
def positive_int(value: int):
    if not value > 0:
        raise ValueError(f"Value must be positive, got {value}")

@strict
@dataclass
class Config:
    model_type: str
    hidden_size: int = positive_int(default=16)
    vocab_size: int = 32  # Default value

    # Methods named `validate_xxx` are treated as class-wise validators
    def validate_big_enough_vocab(self):
        if self.vocab_size < self.hidden_size:
            raise ValueError(f"vocab_size ({self.vocab_size}) must be greater than hidden_size ({self.hidden_size})")

config = Config(model_type="bert", hidden_size=24)  # Valid
config = Config(model_type="bert", hidden_size=-1)  # Raises StrictDataclassFieldValidationError

# `vocab_size` too small compared to `hidden_size`
config = Config(model_type="bert", hidden_size=32, vocab_size=16)  # Raises StrictDataclassClassValidationError
```
This feature also includes support for custom validators, class-wise validation logic, handling of additional keyword arguments, and automatic validation based on type hints. Documentation can be found here.
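To illustrate what validation "during assignment" means, here is a rough stdlib-only approximation of type checking on attribute assignment (this is not huggingface_hub's implementation; the real `@strict` also handles custom validators, class-wise checks, and extra kwargs):

```python
from dataclasses import dataclass, fields

def strict_like(cls):
    """Minimal sketch: a dataclass whose fields are type-checked on assignment.

    Purely illustrative; huggingface_hub's @strict is far more complete.
    """
    cls = dataclass(cls)
    hints = {f.name: f.type for f in fields(cls)}
    original_setattr = cls.__setattr__

    def checked_setattr(self, name, value):
        expected = hints.get(name)
        # Only check plain type annotations; skip anything else.
        if isinstance(expected, type) and not isinstance(value, expected):
            raise TypeError(
                f"{name} must be {expected.__name__}, got {type(value).__name__}"
            )
        original_setattr(self, name, value)

    cls.__setattr__ = checked_setattr
    return cls

@strict_like
class Config:
    hidden_size: int = 16

cfg = Config()
cfg.hidden_size = 32         # OK: validated on assignment
try:
    cfg.hidden_size = "big"  # rejected: not an int
except TypeError as e:
    print(e)
```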
- @strict decorator for dataclass validation by @Wauplin in #2895

This release also brings support for DTensor in the `_get_unique_id` / `get_torch_storage_size` helpers, allowing transformers to seamlessly use `save_pretrained` with DTensor.
When creating an Endpoint, the default for scale_to_zero_timeout is now None, meaning endpoints will no longer scale to zero by default unless explicitly configured.
We've also introduced experimental helpers to manage OAuth within FastAPI applications, bringing functionality previously used in Gradio to a wider range of frameworks for easier integration.
We now have much more detailed documentation for Inference! It includes more detailed explanations and examples to clarify that the InferenceClient can also be effectively used with local endpoints (llama.cpp, vLLM, MLX, etc.).
- `api.endpoint` to arguments for `_get_upload_mode` by @matthewgrossman in #3077
- `read()` by @lhoestq in #3080
- `hf-xet` optional by @hanouticelina in #3079
- `huggingface-cli repo create` command by @Wauplin in #3094

The following contributors have made significant changes to the library over the last release:
This release includes some new features and bug fixes:
- strict decorators for runtime dataclass validation with custom and type-based checks, by @Wauplin in https://github.com/huggingface/huggingface_hub/pull/2895
- DTensor support in the `_get_unique_id` / `get_torch_storage_size` helpers, enabling transformers to use `save_pretrained` with DTensor, by @S1ro1 in https://github.com/huggingface/huggingface_hub/pull/3042

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.31.2...v0.31.4