1216ded: Add full tool result payload support to onAgentToolResponse.
The onAgentToolResponse callback now also receives agent_tool_response_full_payload server events, delivering the raw full_tool_result string (capped at 64 KB) alongside the existing summary events. Consumers can distinguish between the two by checking for the presence of full_tool_result on the payload. To receive full payloads, enable the agent_tool_response_full_payload client event in the agent's configuration UI.
<ConversationProvider
agentId="…"
onAgentToolResponse={payload => {
if ("full_tool_result" in payload) {
if (payload.truncated) {
console.warn("full payload truncated to 64 KB");
}
console.log(payload.tool_name, payload.full_tool_result);
} else {
console.log(payload.tool_call_id, payload.is_error);
}
}}
>
…
</ConversationProvider>
The same callback is available on useConversation, startSession, and the lower-level Conversation.startSession in @elevenlabs/client.
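The summary and full-payload shapes can be told apart with a small type guard. A minimal standalone sketch, assuming illustrative interface names (the field names follow the events described above):

```typescript
// Sketch of narrowing the onAgentToolResponse payload union.
// The interface names are illustrative stand-ins, not SDK exports.
interface ToolResponseSummary {
  tool_call_id: string;
  tool_name: string;
  is_error: boolean;
}

interface ToolResponseFullPayload extends ToolResponseSummary {
  full_tool_result: string; // raw result, capped at 64 KB
  truncated: boolean; // true when the 64 KB cap was hit
}

type ToolResponsePayload = ToolResponseSummary | ToolResponseFullPayload;

// Narrow by checking for full_tool_result, as the entry above suggests.
function hasFullResult(
  payload: ToolResponsePayload
): payload is ToolResponseFullPayload {
  return "full_tool_result" in payload;
}
```

Inside a branch where `hasFullResult(payload)` is true, TypeScript narrows the payload so `full_tool_result` and `truncated` are accessible without casts.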
RealtimeConnection now exposes mute(), unmute(), and isMuted. The useScribe React hook surfaces these as isMuted state with mute() and unmute() callbacks.
Add onAgentResponseCorrection callback for agent response correction events.
Add contextId to sendContextualUpdate for deduplicating contextual updates.
Add llm to the typed agent prompt override for conversation sessions.
Add keyterms option (string[]) to the Scribe realtime API. Biases the model towards specific terms (max 50 keyterms, each up to 20 chars), passed as repeated query params on the WebSocket URL.
Add noVerbatim option (boolean) to the Scribe realtime API. When enabled, removes filler words, false starts, and disfluencies from transcripts.
onConnect now runs after the session is marked connected and React has synchronized conversationRef. Also expose and forward onConversationCreated for consumers that need the created Conversation instance before onConnect.
0d5c368: Fix getInputVolume/getOutputVolume returning 0 in React Native by adding native volume providers using LiveKit's RMS and multiband FFT processors.
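The keyterms constraints and query-param encoding described above can be sketched as follows. The exact parameter names, base URL, and validation behaviour here are assumptions for illustration, not the SDK's actual implementation:

```typescript
// Hypothetical sketch: encode keyterms and noVerbatim as WebSocket
// query params, enforcing the documented limits (max 50 keyterms,
// each up to 20 chars). Param names are assumed, not SDK-confirmed.
const MAX_KEYTERMS = 50;
const MAX_KEYTERM_LENGTH = 20;

function buildScribeUrl(
  baseUrl: string,
  options: { keyterms?: string[]; noVerbatim?: boolean }
): string {
  const url = new URL(baseUrl);
  const keyterms = options.keyterms ?? [];
  if (keyterms.length > MAX_KEYTERMS) {
    throw new Error(`at most ${MAX_KEYTERMS} keyterms allowed`);
  }
  for (const term of keyterms) {
    if (term.length > MAX_KEYTERM_LENGTH) {
      throw new Error(`keyterm "${term}" exceeds ${MAX_KEYTERM_LENGTH} chars`);
    }
    // Repeated query params, one per keyterm.
    url.searchParams.append("keyterms", term);
  }
  if (options.noVerbatim) {
    url.searchParams.set("no_verbatim", "true");
  }
  return url.toString();
}
```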
Breaking: getByteFrequencyData() now returns data focused on the human voice range (100-8000 Hz) instead of the full spectrum (0 to sampleRate/2). On web, getVolume() is also computed from this range. The deprecated getAnalyser() method still provides direct access to the raw AnalyserNode for consumers needing full-spectrum data.
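To see what the 100-8000 Hz focus means in terms of FFT bins, here is a small sketch using standard AnalyserNode bin math (bin i covers roughly i * sampleRate / fftSize Hz); this is an illustration of the range, not the SDK's internal code:

```typescript
// For an AnalyserNode, bin i of getByteFrequencyData covers frequencies
// around i * sampleRate / fftSize. This computes which bins fall inside
// the 100-8000 Hz voice range the new behaviour focuses on.
function voiceRangeBins(sampleRate: number, fftSize: number) {
  const binWidth = sampleRate / fftSize;
  return {
    first: Math.ceil(100 / binWidth), // first bin at or above 100 Hz
    last: Math.floor(8000 / binWidth), // last bin at or below 8000 Hz
  };
}
```

For example, at a 48 kHz sample rate with fftSize 2048, the voice range spans bins 5 through 341 out of 1024 frequency bins.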
50ea6ef: fix: use explicit .js extensions in ESM imports for Node.js compatibility
Switch moduleResolution from bundler to nodenext and add .js extensions to all relative imports. The published packages use "type": "module" but the compiled output had extensionless imports, which breaks Node.js ESM resolution. Also add "type": "module" to @elevenlabs/types.
Updated dependencies [50ea6ef]
Add sendMultimodalMessage to the useConversationControls hook. Export the MultimodalMessageInput type from @elevenlabs/client.
f174972: Breaking: Input class removed from exports; VoiceConversation.input is now private; changeInputDevice() returns void.
The Input class is no longer exported. The input field on VoiceConversation is now private. changeInputDevice() returns Promise<void> instead of Promise<Input>.
Before:
import { Input } from "@elevenlabs/client";
const input: Input = conversation.input;
input.analyser.getByteFrequencyData(data);
input.setMuted(true);
const newInput: Input = await conversation.changeInputDevice(config);
getInputVolume()/getOutputVolume() now return 0, and getInputByteFrequencyData()/getOutputByteFrequencyData() return an empty Uint8Array, instead of throwing when no active conversation or analyser is available. This avoids forcing consumers (e.g., animation loops) to wrap every call in try-catch.
1b84231: Add guardrail_triggered server-to-client WebSocket event, emitted when a guardrail is triggered during the conversation.
New callback: onGuardrailTriggered on Callbacks — fires when the server detects a guardrail violation.
const conversation = await Conversation.startSession({
agentId: "your-agent-id",
onGuardrailTriggered: () => {
console.log("A guardrail was triggered");
},
});
For the client to receive this event, it must be enabled on the "Advanced" tab of the agent's settings.
2e37cd9: Add type discriminant property to TextConversation and VoiceConversation, enabling discriminated union narrowing. Add startSession overloads that narrow return type based on textOnly option.
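The narrowing this enables can be sketched with local stand-in types. The discriminant property name ("type") and its literal values below are assumptions for illustration, not the SDK's confirmed shape:

```typescript
// Stand-in types mimicking a discriminated union over conversations.
interface TextConversationLike {
  type: "text";
  sendText(message: string): void;
}

interface VoiceConversationLike {
  type: "voice";
  changeInputDevice(deviceId: string): Promise<void>;
}

type ConversationLike = TextConversationLike | VoiceConversationLike;

function describe(session: ConversationLike): string {
  // Checking the discriminant narrows the union in each branch, so
  // voice-only members like changeInputDevice become accessible.
  return session.type === "voice" ? "voice session" : "text session";
}
```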
81013c0: Breaking: Input class removed from exports; VoiceConversation.input is now private; changeInputDevice() returns void.
The Input class is no longer exported. The input field on VoiceConversation is now private. changeInputDevice() returns Promise<void> instead of Promise<Input>.
Before:
import { Input } from "@elevenlabs/client";
const input: Input = conversation.input;
input.analyser.getByteFrequencyData(data);
input.setMuted(true);
const newInput: Input = await conversation.changeInputDevice(config);
Add textOnly option (passable both at the top level and via the overrides object). Providing one propagates to the other, with the top-level value taking precedence in case of conflict.
After:
import type { InputController } from "@elevenlabs/client";
conversation.getInputByteFrequencyData(); // replaces input.analyser.getByteFrequencyData
conversation.setMicMuted(true); // replaces input.setMuted
await conversation.changeInputDevice(config); // return value dropped
Migration:
Replace import { Input } with import type { InputController } if you need the type. Replace conversation.input.analyser.getByteFrequencyData(data) with conversation.getInputByteFrequencyData(). Replace conversation.input.setMuted(v) with conversation.setMicMuted(v). Stop using the return value of changeInputDevice().
f174972: Breaking: InputController and OutputController interfaces are now exported; Input and Output class exports are replaced by these interfaces.
Before:
import { Input, Output } from "@elevenlabs/client";
After:
import type { InputController, OutputController } from "@elevenlabs/client";
f174972: Breaking: Conversation is no longer a class — it is now a plain namespace object and a type alias for TextConversation | VoiceConversation.
instanceof Conversation no longer compiles. Subclassing Conversation is no longer possible. The startSession() call is unchanged.
Before:
import { Conversation } from "@elevenlabs/client";
// instanceof check compiled fine
if (session instanceof Conversation) { … }
// subclassing was possible
class MyConversation extends Conversation { … }
// startSession returned the class type
const session: Conversation = await Conversation.startSession(options);
After:
import { Conversation } from "@elevenlabs/client";
import type {
TextConversation,
VoiceConversation,
} from "@elevenlabs/client";
// startSession call is unchanged
const session: Conversation = await Conversation.startSession(options);
// Narrow using the concrete types or duck-typing instead of instanceof
if ("changeInputDevice" in session) {
// session is VoiceConversation
}
Migration:
Remove instanceof Conversation checks. Narrow on TextConversation or VoiceConversation using "changeInputDevice" in session (voice) or duck-typing on the methods you need. Do not subclass Conversation — implement the BaseConversation interface directly or compose instead. The startSession() call is unchanged and requires no migration.
1dc66aa: Breaking: The default connectionType is now inferred from the conversation mode instead of always defaulting to "websocket".
Voice conversations use "webrtc" by default. Text-only conversations (textOnly: true) use "websocket" by default. Users who previously relied on the implicit "websocket" default for voice conversations and need to keep using WebSocket must now explicitly set connectionType: "websocket".
connectionType is now optional on PublicSessionConfig (when using agentId).
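The inference rule can be summarized as a small sketch (a local illustration of the described behaviour, not the SDK's internal code):

```typescript
// Default-inference rule from the changelog entry above: text-only
// sessions default to "websocket", voice sessions to "webrtc", and an
// explicit connectionType always wins.
type ConnectionType = "websocket" | "webrtc";

function defaultConnectionType(options: {
  textOnly?: boolean;
  connectionType?: ConnectionType;
}): ConnectionType {
  if (options.connectionType) return options.connectionType; // explicit wins
  return options.textOnly ? "websocket" : "webrtc";
}
```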
f174972: Breaking: Output class removed from exports; VoiceConversation.output is now private; changeOutputDevice() returns void.
The Output class is no longer exported. The output field on VoiceConversation is now private. changeOutputDevice() returns Promise<void> instead of Promise<Output>.
Before:
import { Output } from "@elevenlabs/client";
const output: Output = conversation.output;
output.gain.gain.value = 0.5;
output.analyser.getByteFrequencyData(data);
output.worklet.port.postMessage({ type: "interrupt" });
const newOutput: Output = await conversation.changeOutputDevice(config);
After:
import type { OutputController } from "@elevenlabs/client";
conversation.setVolume({ volume: 0.5 }); // replaces output.gain.gain.value
conversation.getOutputByteFrequencyData(); // replaces output.analyser.getByteFrequencyData
// interruption is handled internally by VoiceConversation
await conversation.changeOutputDevice(config); // return value dropped
Migration:
Replace import { Output } with import type { OutputController } if you need the type. Replace conversation.output.gain.gain.value = v with conversation.setVolume({ volume: v }). Replace conversation.output.analyser.getByteFrequencyData(data) with conversation.getOutputByteFrequencyData(). Stop using the return value of changeOutputDevice().
f174972: Breaking: VoiceConversation.wakeLock is now private.
The wakeLock field is no longer accessible on VoiceConversation. It was always an internal detail for preventing screen sleep during a session and was never intended as stable public API.
Before:
const lock: WakeLockSentinel | null = conversation.wakeLock;
if (lock) {
await lock.release();
}
After: Wake lock lifecycle is managed entirely by VoiceConversation. There is no replacement — the lock is released automatically when the session ends. If you need to suppress wake locking entirely, pass useWakeLock: false in the session options.
const conversation = await Conversation.startSession({
// …
useWakeLock: false, // opt out of wake lock management
});
After:
import type { InputController } from "@elevenlabs/client";
conversation.getInputByteFrequencyData(); // replaces input.analyser.getByteFrequencyData
conversation.setMicMuted(true); // replaces input.setMuted
await conversation.changeInputDevice(config); // return value dropped
Migration:
Replace import { Input } with import type { InputController } if you need the type. Replace conversation.input.analyser.getByteFrequencyData(data) with conversation.getInputByteFrequencyData(). Replace conversation.input.setMuted(v) with conversation.setMicMuted(v). Stop using the return value of changeInputDevice().
81013c0: Breaking: InputController and OutputController interfaces are now exported; Input and Output class exports are replaced by these interfaces.
Before:
import { Input, Output } from "@elevenlabs/client";
After:
import type { InputController, OutputController } from "@elevenlabs/client";
81013c0: Breaking: Conversation is no longer a class — it is now a plain namespace object and a type alias for TextConversation | VoiceConversation.
instanceof Conversation no longer compiles. Subclassing Conversation is no longer possible. The startSession() call is unchanged.
Before:
import { Conversation } from "@elevenlabs/client";
// instanceof check compiled fine
if (session instanceof Conversation) { … }
// subclassing was possible
class MyConversation extends Conversation { … }
// startSession returned the class type
const session: Conversation = await Conversation.startSession(options);
After:
import { Conversation } from "@elevenlabs/client";
import type {
TextConversation,
VoiceConversation,
} from "@elevenlabs/client";
// startSession call is unchanged
const session: Conversation = await Conversation.startSession(options);
// Narrow using the concrete types or duck-typing instead of instanceof
if ("changeInputDevice" in session) {
// session is VoiceConversation
}
Migration:
Remove instanceof Conversation checks. Narrow on TextConversation or VoiceConversation using "changeInputDevice" in session (voice) or duck-typing on the methods you need. Do not subclass Conversation — implement the BaseConversation interface directly or compose instead. The startSession() call is unchanged and requires no migration.
81013c0: Breaking: Output class removed from exports; VoiceConversation.output is now private; changeOutputDevice() returns void.
The Output class is no longer exported. The output field on VoiceConversation is now private. changeOutputDevice() returns Promise<void> instead of Promise<Output>.
Before:
import { Output } from "@elevenlabs/client";
const output: Output = conversation.output;
output.gain.gain.value = 0.5;
output.analyser.getByteFrequencyData(data);
output.worklet.port.postMessage({ type: "interrupt" });
const newOutput: Output = await conversation.changeOutputDevice(config);
After:
import type { OutputController } from "@elevenlabs/client";
conversation.setVolume({ volume: 0.5 }); // replaces output.gain.gain.value
conversation.getOutputByteFrequencyData(); // replaces output.analyser.getByteFrequencyData
// interruption is handled internally by VoiceConversation
await conversation.changeOutputDevice(config); // return value dropped
Migration:
Replace import { Output } with import type { OutputController } if you need the type. Replace conversation.output.gain.gain.value = v with conversation.setVolume({ volume: v }). Replace conversation.output.analyser.getByteFrequencyData(data) with conversation.getOutputByteFrequencyData(). Stop using the return value of changeOutputDevice().
81013c0: Breaking: VoiceConversation.wakeLock is now private.
The wakeLock field is no longer accessible on VoiceConversation. It was always an internal detail for preventing screen sleep during a session and was never intended as stable public API.
Before:
const lock: WakeLockSentinel | null = conversation.wakeLock;
if (lock) {
await lock.release();
}
After: Wake lock lifecycle is managed entirely by VoiceConversation. There is no replacement — the lock is released automatically when the session ends. If you need to suppress wake locking entirely, pass useWakeLock: false in the session options.
const conversation = await Conversation.startSession({
// …
useWakeLock: false, // opt out of wake lock management
});