[v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more
We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning-fast speed ⚡
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")  # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)
```
auto mode for provider selection

You can now automatically select a provider for a model using auto mode: it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.
```python
from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
)

print(completion.choices[0].message)
```
⚠️ Note: "auto" is now the default value for the provider argument. Previously, the default was "hf-inference", so this change may be a breaking one if you don't specify a provider name when initializing InferenceClient or AsyncInferenceClient.
provider="auto" by @julien-c in #3011

We added support for feature extraction (embeddings) inference with the sambanova provider.
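As a sketch of the new embeddings support (the model name below is an assumption for illustration; substitute any embeddings model served by sambanova):

```python
from huggingface_hub import InferenceClient


def embed(text: str):
    # Compute an embedding for `text` via the sambanova provider.
    # "intfloat/e5-mistral-7b-instruct" is a hypothetical model choice,
    # not one confirmed by the release notes.
    client = InferenceClient(provider="sambanova")
    return client.feature_extraction(
        text,
        model="intfloat/e5-mistral-7b-instruct",
    )


# embed("Today is a sunny day")  # requires network access and an HF token
```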
The HF Inference API provider is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity. Cold-starting arbitrary models from the Hub is no longer supported: if a model isn't already deployed, it won't be available via the HF Inference API.
Miscellaneous improvements and some bug fixes:
Of course, all of these inference changes are also available in the AsyncInferenceClient async equivalent 🤗
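For instance, a minimal async sketch of the chat completion example above (the actual call is commented out since it needs network access and an HF token):

```python
import asyncio

from huggingface_hub import AsyncInferenceClient


async def main() -> None:
    # Same API surface as InferenceClient, but every call is awaitable.
    client = AsyncInferenceClient(provider="auto")
    completion = await client.chat.completions.create(
        model="Qwen/Qwen3-235B-A22B",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(completion.choices[0].message)


# asyncio.run(main())  # requires network access and an HF token
```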
Thanks to @bpronan's PR, Xet now supports uploading byte arrays:
```python
from huggingface_hub import upload_file

file_content = b"my-file-content"
repo_id = "username/model-name"  # `hf-xet` should be installed and Xet enabled for this repo

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="my-file.txt",  # destination path in the repo (required argument)
    repo_id=repo_id,
)
```
Additionally, we've added documentation for environment variables used by hf-xet to optimize file download/upload performance, including options for caching (HF_XET_CHUNK_CACHE_SIZE_BYTES), concurrency (HF_XET_NUM_CONCURRENT_RANGE_GETS), high-performance mode (HF_XET_HIGH_PERFORMANCE), and sequential writes (HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY).
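A hedged sketch of setting these variables in a shell session; the variable names come from the release notes, but the values shown are illustrative assumptions, not recommended defaults:

```shell
# Illustrative values only; consult the hf-xet documentation for tuning guidance.
export HF_XET_CHUNK_CACHE_SIZE_BYTES=10737418240   # ~10 GB local chunk cache
export HF_XET_NUM_CONCURRENT_RANGE_GETS=16         # parallel range requests per file
export HF_XET_HIGH_PERFORMANCE=1                   # enable high-performance mode
export HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY=1     # write downloaded files sequentially
```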
Miscellaneous improvements:
We added HTTP download support for files larger than 50 GB, enabling more reliable handling of large file downloads.
We also added dynamic batching to upload_large_folder, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration, improving performance and reducing the risk of hitting the commit rate limit on large repositories.
We added support for new arguments when creating or updating Hugging Face Inference Endpoints.
The provider argument in InferenceClient and AsyncInferenceClient now defaults to "auto" instead of "hf-inference" (HF Inference API). This means provider selection now follows the preferred order set in your inference provider settings.
If your code relied on the previous default ("hf-inference"), you may need to set it explicitly to avoid unexpected behavior.

The URL for the feature-extraction and sentence-similarity tasks has changed from https://router.huggingface.co/hf-inference/pipeline/{task}/{model} to https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}.

hf_xet min version to 1.0.0 + make it required dep on 64 bits by @hanouticelina in #2971

The following contributors have made significant changes to the library over the last release: