v0.23.0: LLMs with tools, seamless downloads, and much more!
The 0.23.0 release comes with a big revamp of the download process, especially for downloads to a local directory. Previously, the process still involved the cache directory and symlinks, which led to misconceptions and a suboptimal user experience. The new workflow relies on a .cache/huggingface/ folder inside the local directory, similar to the .git/ one, that keeps track of download progress.
Example: download the q4 GGUF file from microsoft/Phi-3-mini-4k-instruct-gguf:
# Download the q4 GGUF file straight to a local directory
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir=data/phi3
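The same download can be done from Python with hf_hub_download; a minimal equivalent of the command above (return value shown for illustration):
>>> from huggingface_hub import hf_hub_download

>>> hf_hub_download(
...     repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",
...     filename="Phi-3-mini-4k-instruct-q4.gguf",
...     local_dir="data/phi3",
... )
'data/phi3/Phi-3-mini-4k-instruct-q4.gguf'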
With this addition, interrupted downloads are now resumable! This applies to downloads in both local and cache directories and should greatly improve the experience for users with slow or unreliable connections. Accordingly, the resume_download parameter is now deprecated (it is no longer relevant).
- Rename .huggingface/ folder to .cache/huggingface/ by @Wauplin in #2262

InferenceClient

It is now possible to provide a list of tools when chatting with a model using the InferenceClient! This major improvement has been made possible thanks to TGI, which handles tools natively.
>>> from huggingface_hub import InferenceClient

# Ask for the weather in the next days using tools
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."},
...     {"role": "user", "content": "What's the weather like the next 3 days in San Francisco, CA?"},
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the user's location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     ...
... ]
>>> response = client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={'location': 'San Francisco, CA', 'format': 'fahrenheit', 'num_days': 3},
    name='get_n_day_weather_forecast',
    description=None
)
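From there, executing the tool is up to the caller. A minimal dispatch sketch (the function body and registry below are hypothetical, not part of huggingface_hub):
# Hypothetical implementation of the tool requested by the model
def get_n_day_weather_forecast(location: str, format: str, num_days: int) -> str:
    return f"{num_days}-day forecast for {location} (in {format})..."  # replace with a real lookup

AVAILABLE_TOOLS = {"get_n_day_weather_forecast": get_n_day_weather_forecast}

call = response.choices[0].message.tool_calls[0].function
# `arguments` is a dict (see above), so it can be unpacked directly
result = AVAILABLE_TOOLS[call.name](**call.arguments)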
It is also possible to provide grammar rules to the text_generation task. This ensures that the output follows a precise JSON Schema specification or matches a regular expression. For more details, check out the Guidance guide in the Text-Generation-Inference docs.
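For instance, a regex grammar can constrain the model to output a date; a minimal sketch (model, pattern, and output are illustrative):
>>> from huggingface_hub import InferenceClient

>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> client.text_generation(
...     "When did the French Revolution start? Answer with a date:",
...     max_new_tokens=12,
...     grammar={"type": "regex", "value": r"\d{4}-\d{2}-\d{2}"},
... )
'1789-07-14'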
The documentation now refers to the chat-completion task rather than conversational:
- Document chat_completion and remove conversational from Inference guide by @Wauplin in #2215

chat_completion now relies on server-side rendering in all cases, including when the model is transformers-backed. Previously, templates were rendered server-side only for TGI-backed models and client-side otherwise.
Improved logic to determine whether a model is served via TGI or transformers.
- Raise error in chat completion when unprocessable by @Wauplin in #2257
- Document more chat_completion by @Wauplin in #2260
The PseudoLab team is a non-profit dedicated to making AI more accessible to the Korean-speaking community. In the past few weeks, their team of contributors translated (almost) the entire huggingface_hub documentation. Huge shout-out for the coordination on this task! Documentation can be accessed here.
- guides/webhooks_server.md to Korean by @nuatmochoi in #2145
- reference/login.md to Korean by @SeungAhSon in #2151
- package_reference/tensorboard.md to Korean by @fabxoe in #2173
- package_reference/inference_client.md to Korean by @cjfghk5697 in #2178
- reference/inference_endpoints.md to Korean by @harheem in #2180
- package_reference/file_download.md to Korean by @seoyoung-3060 in #2184
- package_reference/cache.md to Korean by @nuatmochoi in #2191
- package_reference/collections.md to Korean by @boyunJang in #2214
- package_reference/inference_types.md to Korean by @fabxoe in #2171
- guides/upload.md to Korean by @junejae in #2139
- reference/repository.md to Korean by @junejae in #2189
- package_reference/space_runtime.md to Korean by @boyunJang in #2213
- guides/repository.md to Korean by @cjfghk5697 in #2124
- guides/model_cards.md to Korean by @SeungAhSon in #2128
- guides/community.md to Korean by @seoulsky-field in #2126
- guides/cli.md to Korean by @harheem in #2131
- guides/search.md to Korean by @seoyoung-3060 in #2134
- guides/inference.md to Korean by @boyunJang in #2130
- guides/manage-spaces.md to Korean by @boyunJang in #2220
- guides/hf_file_system.md to Korean by @heuristicwave in #2146
- package_reference/hf_api.md to Korean by @fabxoe in #2165
- package_reference/mixins.md to Korean by @fabxoe in #2166
- guides/inference_endpoints.md to Korean by @usr-bin-ksh in #2164
- package_reference/utilities.md to Korean by @cjfghk5697 in #2196

@bilgehanertan added support for 2 new routes:
- get_user_overview to retrieve high-level information about a user: username, avatar, number of models/datasets/Spaces, number of likes and upvotes, number of interactions in discussions, etc.
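A quick look at the new route from Python (a sketch; attribute values are illustrative):
>>> from huggingface_hub import HfApi

>>> user = HfApi().get_user_overview("Wauplin")
>>> user.username, user.num_models, user.num_likes  # a few of the returned fields
('Wauplin', ..., ...)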
@bilgehanertan also added a new command to the CLI to handle tags. It is now possible to create, list, and delete tags:

>>> huggingface-cli tag Wauplin/my-cool-model v1.0
You are about to create tag v1.0 on model Wauplin/my-cool-model
Tag v1.0 created on Wauplin/my-cool-model
>>> huggingface-cli tag Wauplin/gradio-space-ci -l --repo-type space
Tags for space Wauplin/gradio-space-ci:
0.2.2
0.2.1
0.2.0
0.1.2
0.0.2
0.0.1
>>> huggingface-cli tag -d Wauplin/my-cool-model v1.0
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
For more details, check out the CLI guide.
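The same operations are also available from Python via HfApi; a sketch mirroring the CLI examples above:
>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.create_tag("Wauplin/my-cool-model", tag="v1.0")
>>> refs = api.list_repo_refs("Wauplin/gradio-space-ci", repo_type="space")
>>> [tag.name for tag in refs.tags]
['0.2.2', '0.2.1', '0.2.0', '0.1.2', '0.0.2', '0.0.1']
>>> api.delete_tag("Wauplin/my-cool-model", tag="v1.0")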
The ModelHubMixin got a set of nice improvements to generate model cards and to handle custom data types in the config.json file. More info in the integration guide.
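As a sketch of what this enables with the PyTorch flavor of the mixin (names and values below are illustrative):
import torch
from torch import nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(
    nn.Module,
    PyTorchModelHubMixin,
    # metadata passed at class level ends up in the generated model card
    library_name="my-cool-lib",      # illustrative
    tags=["text-classification"],    # illustrative
):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

# init args (here: hidden_size) are serialized into config.json automatically
model = MyModel(hidden_size=128)
# model.push_to_hub("username/my-model")  # from_pretrained() restores the config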
- ModelHubMixin: more metadata + arbitrary config types + proper guide by @Wauplin in #2230

In a shared environment, it is now possible to set a custom HF_TOKEN_PATH environment variable so that each user of the cluster has their own access token.
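For example, on a shared cluster each user could point the library at a personal token file before importing it (a sketch; the path is illustrative):
import os

# Must be set before huggingface_hub reads its environment
os.environ["HF_TOKEN_PATH"] = os.path.expanduser("~/my-tokens/hf_token")

from huggingface_hub import login
login()  # the token entered here is stored at the custom path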
- HF_TOKEN_PATH as environment variable by @Wauplin in #2185

Thanks to @Y4suyuki and @lappemic, most custom errors defined in huggingface_hub are now aggregated in the same module. This makes it very easy to import them with from huggingface_hub.errors import ....
Fixed HFSummaryWriter (a class to seamlessly log tensorboard events to the Hub) to work with either the tensorboardX or the torch.utils.tensorboard implementation, depending on the user's setup.
The speed of listing files with HfFileSystem has been drastically improved, thanks to @awgr. Values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. If you want to modify values returned by HfFileSystem, you need to copy them beforehand. This is expected to be a very limited drawback.
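Concretely, copy a returned value before mutating it (a sketch; the path is illustrative):
import copy
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
info = fs.info("gpt2/config.json")
# Results may be shared with the internal cache: deep-copy before mutating
my_info = copy.deepcopy(info)
my_info["note"] = "safe to edit the copy"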
Progress bars in huggingface_hub got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to logging.getLogger) and to enable/disable only some progress bars. More details in this guide.
>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bar for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But a progress bar for `peft`
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]
--local-dir-use-symlink and --resume-download

As part of the download process revamp, some breaking changes have been introduced. However, we believe that the benefits outweigh the cost of these changes. Breaking changes include:
- A .cache/huggingface/ folder is now present at the root of the local dir. It only contains file locks, metadata, and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should expect a longer recovery time if you re-run your download command.
- --local-dir-use-symlink is no longer used and will be ignored. It is no longer possible to symlink your local dir with the cache directory. Thanks to the .cache/huggingface/ folder, it shouldn't be needed anyway.
- --resume-download has been deprecated and will be ignored. Resuming failed downloads is now activated by default all the time. If you need to force a new download, use --force-download (see the Python sketch below).

As part of #2237 (Grammar and Tools support), we've updated the return values of InferenceClient.chat_completion and InferenceClient.text_generation to match the TGI output exactly. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you previously had from huggingface_hub import TextGenerationOutput in your code. This is however not a common usage, since those objects are instantiated by huggingface_hub directly.
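On the Python side, the same logic applies to downloads: resume_download is deprecated and ignored, and force_download=True forces a fresh download. A minimal sketch:
>>> from huggingface_hub import hf_hub_download

# Re-downloads the file from scratch instead of resuming or reusing it
>>> hf_hub_download(
...     repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",
...     filename="Phi-3-mini-4k-instruct-q4.gguf",
...     local_dir="data/phi3",
...     force_download=True,
... )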
Some other breaking changes were expected (and announced since 0.19.x):
- list_files_info is definitively removed in favor of get_paths_info and list_repo_tree
- WebhookServer.run is definitively removed in favor of WebhookServer.launch
- api_endpoint in ModelHubMixin's push_to_hub method is definitively removed in favor of the HF_ENDPOINT environment variable

Check #2156 for more details.
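A short migration sketch for the removed helper:
>>> from huggingface_hub import HfApi

>>> api = HfApi()
# Instead of the removed list_files_info:
>>> api.get_paths_info("gpt2", ["config.json"])
>>> files = list(api.list_repo_tree("gpt2"))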
- hf_file_system by @Wauplin in #2253
- updatedRefs in WebhookPayload by @Wauplin in #2169
- TestHfHubDownloadRelativePaths + implicit delete folder is ok by @Wauplin in #2259

The following contributors have made significant changes to the library over the last release:
- guides/community.md to Korean (#2126)
- guides/hf_file_system.md to Korean (#2146)
- guides/inference_endpoints.md to Korean (#2164)