3.2.0

🔥 Transformers.js v3.2 — Moonshine for real-time speech recognition, Phi-3.5 Vision for multi-frame image understanding and reasoning, and more!

Table of contents:

  • 🤖 New models: Moonshine, Phi-3.5 Vision, EXAONE
    • Moonshine: Real-time speech recognition
    • Phi-3.5 Vision: Multi-frame image understanding and reasoning
    • EXAONE: Bilingual (English and Korean) text generation
  • 🐛 Bug fixes
  • 🛠️ Other improvements
<h2 id="new-models">🤖 New models: Moonshine, Phi-3.5 Vision, EXAONE</h2> <h3 id="moonshine">Moonshine for real-time speech recognition</h3>

Moonshine is a family of speech-to-text models optimized for fast and accurate automatic speech recognition (ASR) on resource-constrained devices. They are well-suited to real-time, on-device applications like live transcription and voice command recognition, and are perfect for in-browser usage (check out the online demo). See https://github.com/huggingface/transformers.js/pull/1099 for more information and here for the list of supported models.

Example: Automatic speech recognition w/ Moonshine tiny.

import { pipeline } from "@huggingface/transformers";

const transcriber = await pipeline("automatic-speech-recognition", "onnx-community/moonshine-tiny-ONNX");
const output = await transcriber("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav");
console.log(output);
// { text: 'And so my fellow Americans ask not what your country can do for you as what you can do for your country.' }
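
Since the models are small enough for in-browser use, the same pipeline can also be run on the WebGPU backend and fed raw audio samples (for example, captured from the microphone). A minimal sketch, assuming a WebGPU-capable browser and that the chosen dtype is available for this checkpoint:

import { pipeline } from "@huggingface/transformers";

// Run on the WebGPU backend instead of the default
const transcriber = await pipeline("automatic-speech-recognition", "onnx-community/moonshine-tiny-ONNX", {
  device: "webgpu",
  dtype: "fp32",
});

// The pipeline also accepts raw audio samples (Float32Array, 16 kHz mono),
// e.g. obtained from the microphone via the Web Audio API.
const audio = new Float32Array(16000); // one second of silence, as a placeholder
const output = await transcriber(audio);
console.log(output.text);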
<details> <summary>See example using the MoonshineForConditionalGeneration API</summary>
import { MoonshineForConditionalGeneration, AutoProcessor, read_audio } from "@huggingface/transformers";

// Load model and processor
const model_id = "onnx-community/moonshine-tiny-ONNX";
const model = await MoonshineForConditionalGeneration.from_pretrained(model_id, {
    dtype: "q4",
});
const processor = await AutoProcessor.from_pretrained(model_id);

// Load audio and prepare inputs
const audio = await read_audio("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav", 16000);
const inputs = await processor(audio);

// Generate outputs
const outputs = await model.generate({ ...inputs, max_new_tokens: 100 });

// Decode outputs
const decoded = processor.batch_decode(outputs, { skip_special_tokens: true });
console.log(decoded[0]);
// And so my fellow Americans ask not what your country can do for you, ask what you can do for your country.
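
Other dtype values (e.g. "fp32", "fp16", "q8", or "q4f16") can be passed to from_pretrained to trade accuracy for size, depending on which quantized variants the model repository provides.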
</details> <h3 id="phi3_v">Phi-3.5 Vision for multi-frame image understanding and reasoning</h3>

Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model that can be used for multi-frame image understanding and reasoning. See https://github.com/huggingface/transformers.js/pull/1094 for more information and here for the list of supported models.

Examples:

<table> <tr> <th style="width: 40%;">Input</th> <th style="width: 60%;">Output</th> </tr> <tr> <td> "What's funny about this image?" <img src="https://github.com/user-attachments/assets/631df9f7-6076-47f6-ba8a-8fd34eb34439" width="400" /> </td> <td>The humor in this image stems from the exaggerated depiction of human evolution, using the Shiba Inu dog breed to represent both ancient and modern humans. The left side shows a muscular, hunter-like figure labeled as 'Humans 100,000 years ago' with the caption 'me hungry me hunt mammoth,' suggesting a time when humans were physically robust and actively hunting. The right side contrasts this with a modern, slim Shiba Inu labeled as 'Humans today' with the caption 'why food delivery slow,' humorously commenting on the modern human's reliance on convenience and technology, such as food delivery services, rather than hunting for sustenance. The use of a dog, which is often associated with loyalty and companionship, adds a layer of irony and humor as it portrays humans in a more diminished, dependent state.</td> </tr> <tr> <td> "Summarize the deck of slides." <img src="https://github.com/user-attachments/assets/78f45879-fa75-4636-9a69-57fc33747a6e" width="400" /> <img src="https://github.com/user-attachments/assets/3616cc53-ac1c-4f7e-8fba-55542354fd9a" width="400" /> <img src="https://github.com/user-attachments/assets/a6897365-c502-4815-9160-1092bda679cf" width="400" /> </td> <td> <p>To summarize, the slides are composed of these sections:</p> <ul> <li> <strong>Introduction to Azure:</strong> <p>The presentation introduces Microsoft Azure, a cloud computing platform. It highlights Azure's three service tiers: Hyper-scale, Enterprise, and Hybrid. The presenter is Dinesh Kumar Wickramasinghe, a Senior Software Engineer from CMS Private Limited in Sri Lanka.</p> </li> <li> <strong>Azure Overview:</strong> <p>Azure is described as Microsoft's cloud computing platform, continuously expanding to meet current and future business challenges. It offers freedom to build, manage, and deploy applications on a global network using preferred tools and frameworks.</p> </li> <li> <strong>Cloud Computing Services:</strong> <p>The presentation outlines three types of cloud computing services provided by Azure: Infrastructure-as-a-Service (IaaS) with a 'host' component, Platform-as-a-Service (PaaS) with a 'build' component, and Software-as-a-Service (SaaS) with a 'consume' component.</p> </li> </ul> </td> </tr> </table> <details> <summary>See example code</summary>

Example: Single-frame (critique an image)

import {
  AutoProcessor,
  AutoModelForCausalLM,
  TextStreamer,
  load_image,
} from "@huggingface/transformers";

// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
  legacy: true, // Use legacy to match python version
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
  dtype: {
    vision_encoder: "q4", // 'q4' or 'q4f16'
    prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
    model: "q4f16", // 'q4f16'
  },
});

// Load image
const image = await load_image("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/meme.png");

// Prepare inputs
const messages = [
  { role: "user", content: "<|image_1|>What's funny about this image?" },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});
const inputs = await processor(prompt, image, { num_crops: 4 });

// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});

// Generate response
const output = await model.generate({
  ...inputs,
  streamer,
  max_new_tokens: 256,
});

Or, decode the output at the end:

// Decode and display the answer
// (slice off the prompt tokens so only the newly generated tokens are decoded)
const generated_ids = output.slice(null, [inputs.input_ids.dims[1], null]);
const answer = processor.batch_decode(generated_ids, {
  skip_special_tokens: true,
});
console.log(answer[0]);

Example: Multi-frame (summarize slides)

import {
  AutoProcessor,
  AutoModelForCausalLM,
  TextStreamer,
  load_image,
} from "@huggingface/transformers";

// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
  legacy: true, // Use legacy to match python version
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
  dtype: {
    vision_encoder: "q4", // 'q4' or 'q4f16'
    prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
    model: "q4f16", // 'q4f16'
  },
});

// Load images
const urls = [
  "https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-1-2048.jpg",
  "https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-2-2048.jpg",
  "https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-3-2048.jpg",
];
const images = await Promise.all(urls.map(load_image));

// Prepare inputs
const placeholder = images.map((_, i) => `<|image_${i + 1}|>\n`).join("");
const messages = [
  { role: "user", content: placeholder + "Summarize the deck of slides." },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});
const inputs = await processor(prompt, images, { num_crops: 4 });

// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});

// Generate response
const output = await model.generate({
  ...inputs,
  streamer,
  max_new_tokens: 256,
});
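
As in the single-frame example, the response can instead be decoded at the end by slicing off the prompt tokens (output.slice(null, [inputs.input_ids.dims[1], null])) and passing the result to processor.batch_decode.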
</details> <h3 id="exaone">EXAONE 3.5 for bilingual (English and Korean) text generation</h3>

EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models, developed and released by LG AI Research. See https://github.com/huggingface/transformers.js/pull/1084 for more information and here for the list of supported models.

Example: Text-generation w/ EXAONE-3.5-2.4B-Instruct:

import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
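
As in the Phi-3.5 Vision examples above, a TextStreamer can be attached so that tokens are printed as they are generated; the response itself is unchanged. A minimal sketch, assuming the pipeline forwards generation options (including the streamer) to the underlying model.generate call:

import { pipeline, TextStreamer } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);

// Print tokens to the console as they are produced
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});

// Generate a response, streaming the output
const output = await generator(
  [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Tell me a joke." },
  ],
  { max_new_tokens: 128, streamer },
);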
<details> <summary>See example output</summary>
Sure! Here's a light joke for you:

Why don't scientists trust atoms?

Because they make up everything! 

I hope you found that amusing! If you want another one, feel free to ask!
</details> <h2 id="bug-fixes">🐛 Bug fixes</h2> <h2 id="other-improvements">🛠️ Other improvements</h2>

Full Changelog: https://github.com/huggingface/transformers.js/compare/3.1.2...3.2.0
