[v1.1.5] Welcoming OVHcloud AI Endpoints as a new Inference Provider & More
OVHcloud AI Endpoints is now an official Inference Provider on Hugging Face! 🎉 OVHcloud delivers fast, production-ready inference on secure, sovereign, fully 🇪🇺 European infrastructure, combining advanced features with competitive pricing.
```python
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
More snippet examples in the provider documentation 👉 here.
Installing the CLI is now much faster, thanks to @Boulaouaney adding support for uv and its speedier package installation.
This release also includes the following bug fixes:
HF_DEBUG environment variable in #3562 by @hanouticelina
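
For context, `HF_DEBUG` is an environment variable that toggles verbose debug behavior in `huggingface_hub`. The sketch below only illustrates the common pattern of parsing a boolean flag from the environment; the parsing logic is an assumption for illustration, not the library's actual implementation:

```python
import os

# Illustrative only: how an HF_DEBUG-style boolean flag is commonly
# parsed from the environment. The real huggingface_hub logic may differ.
os.environ["HF_DEBUG"] = "1"

def debug_enabled() -> bool:
    # Treat "1", "true", "yes", "on" (case-insensitive) as enabled;
    # anything else, including an unset variable, as disabled.
    return os.environ.get("HF_DEBUG", "").strip().lower() in {"1", "true", "yes", "on"}

print(debug_enabled())  # True
```

Parsing the value rather than checking mere presence lets users disable the flag with `HF_DEBUG=0` instead of having to unset the variable.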