v0.26.0: Multi-tokens support, conversational VLMs and quality of life improvements
Managing fine-grained access tokens locally just became much easier and more efficient! Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.
To make managing these tokens easier, we've added a ✨ new set of CLI commands ✨ that allow you to handle them programmatically:
- `huggingface-cli login` — log in with a token (run it once per token)
- `huggingface-cli auth switch` — switch the active token
- `huggingface-cli auth list` — list stored tokens
- `huggingface-cli logout [--token-name TOKEN_NAME]` — log out (optionally from a specific named token)
✅ Nothing changes if you are using the `HF_TOKEN` environment variable, as it takes precedence over the token set via the CLI. More details in the documentation. 🤗
Conversational vision-language model (VLM) inference is now supported with InferenceClient's chat completion!
```python
from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded image
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)
print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.
```
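The snippet above notes that a base64-encoded image also works. Here is a minimal sketch of building such a data URL from a local file using only the standard library (the `to_data_url` helper and the file path are illustrative, not part of huggingface_hub):

```python
import base64
import mimetypes


def to_data_url(path: str) -> str:
    """Encode a local image file as a base64 data URL (illustrative helper)."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"


# The returned string can then be passed as {"url": to_data_url("liberty.jpg")}
# in the "image_url" content part, in place of a remote URL.
```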
You can now pass additional inference parameters to more task methods in the InferenceClient, including: image_classification, text_classification, image_segmentation, object_detection, document_question_answering and more!
For more details, visit the InferenceClient reference guide.
✅ Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗
update_repo_settings() can now be used to switch the visibility status of a repo. It is a drop-in replacement for update_repo_visibility(), which is deprecated and will be removed in v0.29.0.

```diff
- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
```
📄 Daily papers API is now supported in huggingface_hub, enabling you to search for papers on the Hub and retrieve detailed paper information.
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# List all papers with "attention" in their title
>>> api.list_papers(query="attention")

# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")
```
Thanks to efforts from the Tamil-speaking community, guides and package references have been translated to Tamil! Check out the result here.
A few breaking changes have been introduced:
The cached_download(), url_to_filename() and filename_to_url() methods are now completely removed. From now on, you will have to use hf_hub_download() to benefit from the new cache layout. The legacy_cache_layout argument of hf_hub_download() has been removed as well. These breaking changes have been announced with a regular deprecation cycle.
Also, all templating-related utilities have been removed from huggingface_hub. Client-side templating is no longer necessary now that all conversational text-generation models in the Inference API are served with TGI.
- Prepare for release 0.26 by @hanouticelina in #2579
- Remove templating utility by @Wauplin in #2611
- i18n by @SauravMaheshkar in #2566
- state_dict by @SunMarc in #2591
- local_dir is provided by @hanouticelina in #2592

The following contributors have made significant changes to the library over the last release:

- i18n (#2566)