Add support for Qwen3 in https://github.com/huggingface/transformers.js/pull/1300.
Example usage:
```js
import { pipeline, TextStreamer } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/Qwen3-0.6B-ONNX",
  { dtype: "q4f16", device: "webgpu" },
);

// Define the list of messages
const messages = [
  { role: "user", content: "If 5 brog 5 is 1, and 4 brog 2 is 2, what is 3 brog 1?" },
];

// Generate a response
const output = await generator(messages, {
  max_new_tokens: 1024,
  do_sample: true,
  top_k: 20,
  temperature: 0.7,
  streamer: new TextStreamer(generator.tokenizer, { skip_prompt: true, skip_special_tokens: true }),
});
console.log(output[0].generated_text.at(-1).content);
```
Try out the online demo:
https://github.com/user-attachments/assets/d5262390-b70b-4310-b4f3-98be0af79cca
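Qwen3 is a reasoning model, so its raw output may begin with a `<think>…</think>` block containing the chain of thought before the final answer. A minimal sketch for separating the two, assuming the default Qwen3 tag format (`splitThinking` is an illustrative helper, not a library API):

```javascript
// Split a Qwen3 generation into its reasoning trace and final answer.
// Assumes the model wraps its chain of thought in <think>…</think> tags.
function splitThinking(text) {
  const match = text.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { thinking: null, answer: text.trim() };
  const answer = text.slice(match.index + match[0].length).trim();
  return { thinking: match[1].trim(), answer };
}

const raw = "<think>5 brog 5 = 1 and 4 brog 2 = 2, so brog looks like division.</think>\n3 brog 1 is 3.";
const { thinking, answer } = splitThinking(raw);
console.log(answer); // "3 brog 1 is 3."
```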
Add support for D-FINE in https://github.com/huggingface/transformers.js/pull/1303
Example usage:
```js
import { pipeline } from "@huggingface/transformers";

const detector = await pipeline("object-detection", "onnx-community/dfine_s_coco-ONNX");

const image = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg";
const output = await detector(image, { threshold: 0.5 });
console.log(output);
```
See list of supported models: https://huggingface.co/models?library=transformers.js&other=d_fine&sort=trending
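The object-detection pipeline returns an array of detections, each with a `score`, a `label`, and a pixel-coordinate `box`. A small sketch for filtering and summarizing results of that shape (`summarizeDetections` and the sample data are illustrative, not library APIs):

```javascript
// Summarize object-detection results of the shape the pipeline returns:
// [{ score, label, box: { xmin, ymin, xmax, ymax } }, ...]
function summarizeDetections(detections, minScore = 0.5) {
  return detections
    .filter((d) => d.score >= minScore)
    .map((d) => {
      const { xmin, ymin, xmax, ymax } = d.box;
      return `${d.label} (${(d.score * 100).toFixed(1)}%) at [${xmin}, ${ymin}, ${xmax}, ${ymax}]`;
    });
}

const sample = [
  { score: 0.97, label: "cat", box: { xmin: 337, ymin: 17, xmax: 642, ymax: 380 } },
  { score: 0.31, label: "remote", box: { xmin: 40, ymin: 70, xmax: 175, ymax: 118 } },
];
console.log(summarizeDetections(sample));
// ["cat (97.0%) at [337, 17, 642, 380]"]
```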
Introduce global inference chain (+ other WebGPU fixes) in https://github.com/huggingface/transformers.js/pull/1293
fix: RawImage.fromURL error when input is a file URL by @himself65 in https://github.com/huggingface/transformers.js/pull/1288
[bugfix] tokenizers respect padding: true with non-null max_length by @dwisdom0 in https://github.com/huggingface/transformers.js/pull/1284
Full Changelog: https://github.com/huggingface/transformers.js/compare/3.5.0...3.5.1
Fetched April 7, 2026