See https://github.com/huggingface/huggingface_hub/pull/2056. (+https://github.com/huggingface/huggingface_hub/pull/2050 shipped as v0.21.1).
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.21.0...v0.21.2
Discuss the release in our Community Tab. Feedback welcome!! 🤗
All objects returned by the HfApi client are now dataclasses!
In the past, objects were a mix of dataclasses, typed dictionaries, untyped dictionaries, and even plain classes. This is now all harmonized, with the goal of improving the developer experience.
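To illustrate why dataclasses improve the developer experience, here is a minimal sketch; the class below is a simplified, hypothetical stand-in, not the actual `HfApi` return type:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified stand-in for an HfApi return type.
@dataclass
class ModelInfo:
    id: str
    downloads: int = 0
    pipeline_tag: Optional[str] = None

info = ModelInfo(id="gpt2", downloads=1000)

# Attribute access with IDE autocompletion and static type checking,
# instead of fragile string-keyed dictionary lookups.
print(info.id)
print(info.pipeline_tag)  # explicit default instead of a KeyError
```

With plain dictionaries, a typo in a key fails at runtime; with dataclasses, tooling catches it before the code runs.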
Kudos to the community for implementing and testing the entire harmonization process. Thanks again for the contributions!
The HfFileSystem class implements the fsspec interface, allowing files to be read and written with a filesystem-like API. The interface is heavily used by the datasets library, and this release further improves the efficiency and robustness of the integration.
- rm on branch by @lhoestq in #1957
- HfFileSystem by @mariosasko in #1981
- HfFileSystem.url method by @mariosasko in #2027

The PyTorchModelHubMixin class lets you upload ANY PyTorch model to the Hub in a few lines of code. More precisely, it is a class that can be inherited by any nn.Module class to add the from_pretrained, save_pretrained and push_to_hub helpers to your class. It handles serialization and deserialization of weights and configs for you, and enables download counts on the Hub.
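The mixin idea can be sketched in plain Python. This is a simplified, hypothetical illustration of the pattern only; the real PyTorchModelHubMixin also serializes model weights and talks to the Hub:

```python
import json
import os
import tempfile

# Hypothetical, local-only sketch of the save/load mixin pattern.
class SaveLoadMixin:
    def save_pretrained(self, save_directory: str) -> None:
        """Persist the model config to disk."""
        os.makedirs(save_directory, exist_ok=True)
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(self.config, f)

    @classmethod
    def from_pretrained(cls, save_directory: str):
        """Rebuild the model from its saved config."""
        with open(os.path.join(save_directory, "config.json")) as f:
            return cls(**json.load(f))

# Any class can inherit the mixin to gain the helpers.
class MyModel(SaveLoadMixin):
    def __init__(self, hidden_size: int = 8):
        self.config = {"hidden_size": hidden_size}

save_dir = os.path.join(tempfile.mkdtemp(), "my-model")
MyModel(hidden_size=16).save_pretrained(save_dir)
reloaded = MyModel.from_pretrained(save_dir)
```

The mixin adds the helpers without constraining the model class itself, which is why it can be bolted onto any nn.Module subclass.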
With this release, we've fixed two pain points that were holding users back from adopting this mixin:
- Weights are now saved as .safetensors files instead of PyTorch pickles, for safety reasons. Loading from older PyTorch pickles is still supported, but we are moving toward deprecating them completely (on a mid-to-long-term horizon).
- PyTorchModelHubMixin by @bmuskalla in #2033

The audio-to-audio task is now supported by the InferenceClient!
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])
Also fixed a few things:
With the aim of harmonizing repo structures and file serialization on the Hub, we added a new serialization module with a first helper, split_state_dict_into_shards, that takes a state dict and splits it into shards. The implementation is mostly taken from transformers and is meant to be reused by other libraries in the ecosystem. It seamlessly supports torch, tensorflow and numpy weights, and can easily be extended to other frameworks.
This is a first step in the harmonization process and more loading/saving helpers will be added soon.
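The core idea of the sharding logic can be sketched in plain Python. This is a hypothetical simplification in which tensors are represented by their byte sizes; the real helper also builds an index mapping each tensor to its shard file:

```python
# Hypothetical sketch of splitting a state dict into size-bounded shards.
# Tensors are represented by their size in bytes for simplicity.
def split_into_shards(state_dict: dict, max_shard_size: int) -> list:
    shards, current, current_size = [], {}, 0
    for name, size in state_dict.items():
        # Start a new shard if adding this tensor would exceed the budget
        # (a tensor larger than the budget still gets its own shard).
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = size
        current_size += size
    if current:
        shards.append(current)
    return shards

weights = {"embed": 400, "layer1": 300, "layer2": 300, "head": 200}
shards = split_into_shards(weights, max_shard_size=600)
# Each shard stays within the 600-byte budget where possible.
```

Keeping shards under a fixed size is what makes large checkpoints practical to upload, resume, and load lazily.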
- split_state_dict_into_shards helper by @Wauplin in #1938

The community is actively working on translating huggingface_hub into other languages. We now have docs available in Simplified Chinese (here) and in French (here) to help democratize good machine learning!
- base_model in modelcard metadata by @Wauplin in #1936
- hf_transfer extra into setup.py and docs/ by @jamesbraza in #1970
- download --repo-type by @jamesbraza in #1986
- get_safetensors_metadata docstring by @Wauplin in #1951

Creating a commit with an invalid README now fails early instead of uploading all LFS files before failing to commit.
Added a revision_exists helper, working similarly to repo_exists and file_exists:
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
- revision_exists helper by @Wauplin in #2042

InferenceClient.wait(...) now raises an error if the endpoint is in a failed state.
Improved progress bar when downloading a file
Other stuff:
ModelFilter and DatasetFilter are deprecated when listing models and datasets, in favor of a simpler API that lets you pass the parameters directly to list_models and list_datasets.

>>> from huggingface_hub import list_models, ModelFilter
# use
>>> list_models(language="zh")
# instead of
>>> list_models(filter=ModelFilter(language="zh"))
Cleaner, right? ModelFilter and DatasetFilter will still be supported until v0.24 release.
ModelStatus.compute_type is no longer a string but a dictionary with more detailed information (instance type + number of replicas). This breaking change reflects a server-side update.

- warnings.warn in repocard.py by @Wauplin in #1980
- force_download=True by @scruel in #1983
- setup.cfg to pyproject.toml by @jamesbraza in #1971
- pre-commit by @jamesbraza in #1987
- toml-sort tool by @jamesbraza in #1972

The following contributors have made significant changes to the library over the last release:
This patch release fixes an issue when retrieving the locally saved token using huggingface_hub.HfFolder.get_token. For the record, this method is planned for deprecation in favor of huggingface_hub.get_token, which is more robust and versatile. The issue came from a breaking change introduced in https://github.com/huggingface/huggingface_hub/pull/1895, meaning only 0.20.x is affected.
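The "more robust and versatile" part can be illustrated with a minimal sketch of token resolution with fallbacks. The function below is hypothetical and checks only two sources; the real huggingface_hub.get_token consults more (e.g. Colab secrets):

```python
import os
import tempfile

# Hypothetical sketch: try the HF_TOKEN environment variable first,
# then fall back to a token file on disk.
def resolve_token(token_path: str):
    env_token = os.environ.get("HF_TOKEN")
    if env_token:
        return env_token.strip()
    try:
        with open(token_path) as f:
            return f.read().strip() or None
    except FileNotFoundError:
        return None  # no token configured anywhere

# Usage: the token file takes effect only when HF_TOKEN is unset.
token_file = os.path.join(tempfile.mkdtemp(), "token")
with open(token_file, "w") as f:
    f.write("hf_from_file\n")
os.environ.pop("HF_TOKEN", None)
token = resolve_token(token_file)
```

A single entry point with a well-defined precedence order is easier to reason about than each caller reading the token file directly.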
For more details, please refer to https://github.com/huggingface/huggingface_hub/pull/1966.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.20.2...v0.20.3
A concurrency issue when using userdata.get to retrieve the HF_TOKEN token led to deadlocks when downloading files in parallel. This hot-fix release fixes the issue by acquiring a global lock before trying to get the token from the secrets vault. More details in https://github.com/huggingface/huggingface_hub/pull/1953.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.20.1...v0.20.2
This hot-fix release fixes a circular import error that occurred when importing the login or logout helpers from huggingface_hub.
Related PR: https://github.com/huggingface/huggingface_hub/pull/1930
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.20.0...v0.20.1