2025-11-19
We've switched to using proper generate endpoints for model calls, fixing a critical permission issue with OpenAI streaming. No more 403 errors when your users don't have full model permissions - the generate endpoint respects granular API key scopes properly.
Building custom UIs? You now have complete control over what gets sent in your AI SDK streams. Configure exactly which message chunks your frontend receives with the new sendStart, sendFinish, sendReasoning, and sendSources options.
Add sendStart, sendFinish, sendReasoning, and sendSources options to the toAISdkV5Stream function, allowing fine-grained control over which message chunks are included in the converted stream. Previously, these values were hardcoded in the transformer.
BREAKING CHANGE: AgentStreamToAISDKTransformer now accepts an options object instead of a single lastMessageId parameter
Also, add sendStart, sendFinish, sendReasoning, and sendSources parameters to the chatRoute function, enabling fine-grained control over which chunks are included in the AI SDK stream output. (#10127)
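The filtering these options enable can be sketched as follows. This is an illustrative standalone example, not Mastra's actual transformer code: the option names come from the entries above, but the chunk types and the default values shown are assumptions.

```typescript
// Chunk categories controlled by the new options (illustrative subset).
type ChunkType = 'start' | 'finish' | 'reasoning' | 'source' | 'text-delta';

interface StreamChunkOptions {
  sendStart?: boolean;     // include 'start' chunks
  sendFinish?: boolean;    // include 'finish' chunks
  sendReasoning?: boolean; // include 'reasoning' chunks
  sendSources?: boolean;   // include 'source' chunks
}

// Decide whether a chunk of a given type should reach the frontend.
// The defaults here are assumed for the sketch, not taken from Mastra.
function shouldSend(type: ChunkType, opts: StreamChunkOptions): boolean {
  switch (type) {
    case 'start':     return opts.sendStart ?? true;
    case 'finish':    return opts.sendFinish ?? true;
    case 'reasoning': return opts.sendReasoning ?? true;
    case 'source':    return opts.sendSources ?? true;
    default:          return true; // content chunks always pass through
  }
}

function filterChunks(chunks: ChunkType[], opts: StreamChunkOptions): ChunkType[] {
  return chunks.filter((type) => shouldSend(type, opts));
}
```

For example, a custom UI that renders only the model's text could pass `{ sendReasoning: false, sendSources: false }` and receive a stream containing just the start, text, and finish chunks.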
Added support for tripwire data chunks in streaming responses.
Tripwire chunks allow the AI SDK to emit special data events when certain conditions are triggered during stream processing. These chunks include a tripwireReason field explaining why the tripwire was activated.
When converting Mastra chunks to AI SDK v5 format, tripwire chunks are now automatically handled:
// Tripwire chunks are converted to data-tripwire format
const chunk = {
  type: 'tripwire',
  payload: { tripwireReason: 'Rate limit approaching' }
};

// Converts to:
{
  type: 'data-tripwire',
  data: { tripwireReason: 'Rate limit approaching' }
}
(#10269)
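The mapping shown above can be expressed as a small converter. This is a sketch derived from the chunk shapes in the example, not the library's actual transformer:

```typescript
// Shapes taken from the example above.
interface MastraTripwireChunk {
  type: 'tripwire';
  payload: { tripwireReason: string };
}

interface AISdkTripwireDataChunk {
  type: 'data-tripwire';
  data: { tripwireReason: string };
}

// Rewrap a Mastra tripwire chunk in the AI SDK v5 data-chunk shape:
// the type gains the 'data-' prefix and the payload moves under 'data'.
function toDataTripwire(chunk: MastraTripwireChunk): AISdkTripwireDataChunk {
  return {
    type: 'data-tripwire',
    data: { tripwireReason: chunk.payload.tripwireReason },
  };
}
```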
Add description field to GetAgentResponse to support richer agent metadata (#10305)
Only handle download image asset transformation if needed (#10122)
Fix tool outputSchema validation to allow unsupported Zod types like ZodTuple. The outputSchema is only used for internal validation and never sent to the LLM, so model compatibility checks are not needed. (#9409)
Fix the vector definition to resolve a Pinecone issue (#10150)
Add the bailed type to workflowRunStatus (#10091)
Allow provider to pass through options to the auth config (#10284)
Fix deprecation warning when agent network executes workflows by using .fullStream instead of iterating WorkflowRunOutput directly (#10306)
Add support for doGenerate in LanguageModelV2. This change fixes issues with OpenAI stream permissions.