We've switched to using proper generate endpoints for model calls, fixing a critical permission issue with OpenAI streaming. No more 403 errors when your users don't have full model permissions - the generate endpoint respects granular API key scopes properly.
Building custom UIs? You now have complete control over what gets sent in your AI SDK streams. Configure exactly which message chunks your frontend receives with the new sendStart, sendFinish, sendReasoning, and sendSources options.
Add sendStart, sendFinish, sendReasoning, and sendSources options to the toAISdkV5Stream function, allowing fine-grained control over which message chunks are included in the converted stream. Previously, these values were hardcoded in the transformer.
BREAKING CHANGE: AgentStreamToAISDKTransformer now accepts an options object instead of a single lastMessageId parameter
Also, add sendStart, sendFinish, sendReasoning, and sendSources parameters to the chatRoute function, enabling fine-grained control over which chunks are included in the AI SDK stream output. (#10127)
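To make the breaking change concrete, here is a hypothetical sketch of the new options object and the kind of chunk filtering it enables. The option names come from this changelog; the interface shape, defaults, and the filter logic are assumptions, not Mastra's actual code — check the Mastra docs for the authoritative signature.

```typescript
// Hypothetical options object replacing the old single lastMessageId parameter.
interface AISdkStreamOptions {
  lastMessageId?: string;
  sendStart?: boolean;
  sendFinish?: boolean;
  sendReasoning?: boolean;
  sendSources?: boolean;
}

// Illustrative filter: decide whether a chunk type should reach the frontend.
function shouldEmit(chunkType: string, opts: AISdkStreamOptions): boolean {
  switch (chunkType) {
    case 'start': return opts.sendStart ?? true;
    case 'finish': return opts.sendFinish ?? true;
    case 'reasoning-delta': return opts.sendReasoning ?? true;
    case 'source': return opts.sendSources ?? true;
    default: return true; // text deltas etc. always pass through
  }
}

const opts: AISdkStreamOptions = { sendReasoning: false };
```

With this shape, a caller migrating from the old API would pass `{ lastMessageId }` instead of a bare string, then opt chunk types in or out as needed.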
Added support for tripwire data chunks in streaming responses.
Tripwire chunks allow the AI SDK to emit special data events when certain conditions are triggered during stream processing. These chunks include a tripwireReason field explaining why the tripwire was activated.
When converting Mastra chunks to AI SDK v5 format, tripwire chunks are now automatically handled:
// Tripwire chunks are converted to data-tripwire format
const chunk = {
type: 'tripwire',
payload: { tripwireReason: 'Rate limit approaching' }
};
// Converts to:
{
type: 'data-tripwire',
data: { tripwireReason: 'Rate limit approaching' }
}
(#10269)
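The conversion above can be sketched as a small pure function. This is an illustrative reconstruction of the mapping described in the changelog, not the actual transformer in @mastra/ai-sdk, which handles many more chunk types.

```typescript
// Mastra-side tripwire chunk, as shown in the example above.
type MastraTripwireChunk = {
  type: 'tripwire';
  payload: { tripwireReason: string };
};

// AI SDK v5 data chunk it becomes.
type AISdkTripwireChunk = {
  type: 'data-tripwire';
  data: { tripwireReason: string };
};

function convertTripwireChunk(chunk: MastraTripwireChunk): AISdkTripwireChunk {
  // The payload is carried over verbatim under the AI SDK `data` key.
  return { type: 'data-tripwire', data: chunk.payload };
}

const converted = convertTripwireChunk({
  type: 'tripwire',
  payload: { tripwireReason: 'Rate limit approaching' },
});
```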
Add description field to GetAgentResponse to support richer agent metadata (#10305)
Only handle download image asset transformation if needed (#10122)
Fix tool outputSchema validation to allow unsupported Zod types like ZodTuple. The outputSchema is only used for internal validation and never sent to the LLM, so model compatibility checks are not needed. (#9409)
Fix vector definition to fix pinecone (#10150)
Add type bailed to workflowRunStatus (#10091)
Allow provider to pass through options to the auth config (#10284)
Fix deprecation warning when agent network executes workflows by using .fullStream instead of iterating WorkflowRunOutput directly (#10306)
Add support for doGenerate in LanguageModelV2. This change fixes issues with OpenAI stream permissions.
We've worked hard on a 1.0 beta version to signal that Mastra is ready for prime time and there will not be any breaking changes in the near future. Please visit the migration guide to get started.
We added the ability to skip downloading images or other files that the model natively supports, instead sending the raw URL so the model can handle it on its own. This improves the speed of the LLM call.
Added improved support for Mistral by using the ai-sdk provider under the hood instead of the OpenAI-compatible provider.
Fix bad name change in 0.x workflowRoute (#10090)
Improve ai-sdk transformers; handle custom data from agent sub-workflows and sub-agent tools (#10026)
Extend the workflow route to accept optional runId and resourceId parameters, allowing clients to specify custom identifiers when creating workflow runs. These parameters are now properly validated in the OpenAPI schema and passed through to the createRun method.
Also updates the OpenAPI schema to include previously undocumented resumeData and step fields. (#10034)
Integrates the native Mistral AI SDK provider (@ai-sdk/mistral) to replace the current OpenAI-compatible endpoint implementation for Mistral models. (#9789)
Fix: Don't download unsupported media (#9209)
Use a shared getAllToolPaths() method from the bundler to discover tool paths. (#9204)
Add an additional check to determine whether the model natively supports specific file types. Only download the file if the model does not support it natively. (#9790)
Fix agent network iteration counter bug causing infinite loops
The iteration counter in agent networks was stuck at 0 due to a faulty ternary operator that treated 0 as falsy. This prevented maxSteps from working correctly, causing infinite loops when the routing agent kept selecting primitives instead of returning "none".
Changes:
Fixed iteration counter logic in loop/network/index.ts from (inputData.iteration ? inputData.iteration : -1) + 1 to (inputData.iteration ?? -1) + 1
Changed initial iteration value from 0 to -1 so first iteration correctly starts at 0
Added checkIterations() helper to validate iteration counting in all network tests
Fixes #9314 (#9762)
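The counter bug above is a classic falsy-zero mistake and can be reproduced in isolation:

```typescript
// Minimal reproduction of the falsy-zero bug described above.
// With a ternary, an iteration value of 0 is treated as "unset":
const buggyNext = (iteration?: number) => (iteration ? iteration : -1) + 1;

// With nullish coalescing, only undefined/null fall back to -1:
const fixedNext = (iteration?: number) => (iteration ?? -1) + 1;

buggyNext(0); // stuck at 0: the falsy check resets the counter every pass
fixedNext(0); // advances to 1, so maxSteps can eventually terminate the loop
```

This is why the initial value was also changed to -1: with `?? -1`, an unset counter starts the first iteration at 0, and a counter of 0 correctly advances to 1.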
Exposes requiresAuth to custom api routes (#9952)
Fix agent network working memory tool routing. Memory tools are now included in routing agent instructions but excluded from its direct tool calls, allowing the routing agent to properly route to tool execution steps for memory updates. (#9428)
Fixes assets not being downloaded when available (#10079)
This release focuses primarily on bug fixes and stability improvements.
We've resolved several issues related to message deduplication and preserving lastMessageIds. More importantly, this release adds support for suspend/resume operations and custom data writes, with network data now properly surfacing as data-parts.
We've fully resolved bundling issues with the reflect-metadata package by ensuring it's not removed during the bundling step. This means packages no longer need to be marked as externals to avoid runtime crashes in the Mastra server.
update peerdeps (5ca1cca)
Fix workflow input property preservation after resume from snapshot
Ensure that when resuming a workflow from a snapshot, the input property is correctly set from the snapshot's context input rather than from resume data. This prevents the loss of original workflow input data during suspend/resume cycles. (#9380)
Fix a bug where streaming didn't output the final chunk (#9546)
Fixes issue where clicking the reset button in the model picker would fail to restore the original LanguageModelV2 (or any other types) object that was passed during agent construction. (#9487)
Fix network routing agent smooth streaming (#9247)
update peerdeps (5ca1cca)
Improve analyze recursion in bundler when using monorepos (#9490)
Update peer dependencies to match core package version bump (0.23.4) (#9487)
Fixes issue where clicking the reset button in the model picker would fail to restore the original LanguageModelV2 (or any other types) object that was passed during agent construction. (#9487)
Make sure external deps are built with side-effects. Fixes an issue with reflect-metadata #7328 (#9714)
Remove unused /model-providers API (#9533)
Fix undefined runtimeContext using memory from playground (#9328)
Add readable-streams to global externals, not compatible with CJS compilation (#9735)
fix: add /api route to default public routes to allow unauthenticated access
The /api route was returning 401 instead of 200 because it was being caught by the /api/* protected pattern. Adding it to the default public routes ensures the root API endpoint is accessible without authentication while keeping /api/* routes protected. (#9662)
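The mismatch is easy to reproduce with a naive prefix matcher. This sketch is illustrative only (the pattern-matching and route names are assumptions, not the server's actual auth middleware):

```typescript
// Protected glob patterns and the explicit public list from the fix.
const protectedPatterns = ['/api/*'];
const publicRoutes = ['/api']; // the fix: root endpoint is explicitly public

function requiresAuth(path: string): boolean {
  // Public routes are checked first, so '/api' escapes the glob below.
  if (publicRoutes.includes(path)) return false;
  // Naive prefix matching: '/api/*' reduces to the prefix '/api',
  // which also matches '/api' itself — the original bug.
  return protectedPatterns.some((p) =>
    path.startsWith(p.slice(0, p.indexOf('/*'))),
  );
}
```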
Fixed a critical bug in @mastra/core where tool input validation used the original Zod schema while LLMs received a transformed version. This caused validation failures with models like OpenAI o3 and Claude 3.5 Haiku that send valid responses matching the transformed schema (e.g., converting .optional() to .nullable()).
Fixed import issues in exporters. (#9331)
fix(@mastra/arize): Auto-detect arize endpoint when endpoint field is not provided
When spaceId is provided to ArizeExporter constructor, and endpoint is not, pre-populate endpoint with default ArizeAX endpoint. (#9250)
Fix agent onChunk callback receiving wrapped chunk instead of direct chunk (#9402)
Ensure model_generation spans end before agent_run spans. (#9393)
Fix OpenAI schema validation errors in processors (#9400)
Don't call os.homedir() at top level (but lazy invoke it) to accommodate sandboxed environments (#9211)
Detect thenable objects returned by AI model providers (#8905)
Bug fix: Use input processors that are passed in generate or stream agent options rather than always defaulting to the processors set on the Agent class. (#9407)
Fix tool input validation to use schema-compat transformed schemas
Previously, tool input validation used the original Zod schema while the LLM received a schema-compat transformed version. This caused validation failures when LLMs (like OpenAI o3 or Claude 3.5 Haiku) sent arguments matching the transformed schema but not the original.
For example:
OpenAI o3 reasoning models convert .optional() to .nullable(), sending null values
Claude 3.5 Haiku strips min/max string constraints, sending shorter strings
Validation would reject these valid responses because it checked against the original schema
The fix ensures validation uses the same schema-compat processed schema that was sent to the LLM, eliminating this mismatch. (#9258)
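The mismatch can be illustrated with two plain validators standing in for the original and transformed schemas (this is a sketch of the failure mode, not Mastra's validation code):

```typescript
type Validator = (value: unknown) => boolean;

// Original schema: the field may be a string or omitted (like .optional()).
const validateOriginal: Validator = (v) =>
  typeof v === 'string' || v === undefined;

// Schema-compat transformed schema sent to the LLM: string or null
// (like .nullable(), which is what o3-style models are prompted with).
const validateTransformed: Validator = (v) =>
  typeof v === 'string' || v === null;

// The model, having seen the transformed schema, sends null:
const llmValue = null;

validateOriginal(llmValue);    // pre-fix validation rejects a valid response
validateTransformed(llmValue); // post-fix validation accepts it
```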
Add import for WritableStream in execution-engine and dedupe llm.getModel in agent.ts (#9185)
pass writableStream parameter to workflow execution (#9139)
Save correct status in snapshot for all workflow parallel steps.
This ensures when you poll workflow run result using getWorkflowRunExecutionResult(runId), you get the right status for all parallel steps (#9379)
Add ability to pass agent options when wrapping an agent with createStep. This allows configuring agent execution settings when using agents as workflow steps. (#9199)
Fix network loop iteration counter and usage promise handling:
Fixed usage promise resolution in RunOutput stream by properly resolving or rejecting the promise on stream close, preventing hanging promises when streams complete. (#9408)
Workflow validation zod v4 support (#9319)
Fix usage tracking with agent network (#9226)
Catch errors during mastra build and throw them with Mastra's logger, with special error handling for ERR_MODULE_NOT_FOUND cases. (#9127)
Implemented AI tracing and observability features
Added batchCreateAISpans, batchUpdateAISpans, batchDeleteAITraces
Automatic performance indexes for AI spans
Implemented workflow update methods
Added updateWorkflowState with row-level locking
Concurrent update protection for parallel workflow execution
Added index management API
Exposed index management methods directly on store instance
Support for composite indexes, unique constraints, and filtered indexes
Documentation improvements
Detailed feature descriptions for all storage capabilities
Index management examples and best practices
Updated to reflect all atomic transaction usage (#9280)
Fix Zod v4 toJSONSchema bug with z.record() single-argument form
Zod v4 has a bug in the single-argument form of z.record(valueSchema) where it incorrectly assigns the value schema to keyType instead of valueType, leaving valueType undefined. This causes toJSONSchema() to throw "Cannot read properties of undefined (reading '_zod')" when processing schemas containing z.record() fields.
This fix patches affected schemas before conversion by detecting records with missing valueType and correctly assigning the schema to valueType while setting keyType to z.string() (the default). The patch recursively handles nested schemas including those wrapped in .optional(), .nullable(), arrays, unions, and objects. (#9265)
Improved reliability of string field types in tool schema compatibility (#9266)
Catch errors during mastra start and throw them with Mastra's logger. Also add special error handling for ERR_MODULE_NOT_FOUND cases. (#9127)
mastra init now also installs the mastra CLI package (if not already installed) (#9179)
feat(otel-exporter): Add customizable 'exporter' constructor parameter
You can now pass in an instantiated TraceExporter inheriting class into OtelExporter.
This bypasses the default package detection: when an exporter is passed in,
OtelExporter no longer instantiates a TraceExporter automatically.
feat(arize): Initial release of @mastra/arize observability package
The @mastra/arize package exports an ArizeExporter class that can be used to easily send AI
traces from Mastra to Arize AX, Arize Phoenix, or any OpenInference compatible collector.
It sends traces using BatchSpanProcessor over OTLP connections.
It leverages the @mastra/otel-exporter package, reusing OtelExporter for transmission and
span management.
See the README in observability/arize/README.md for more details (#8827)
fix(observability): Add ParentSpanContext to MastraSpans with parentage (#9085)
Update peerdeps to 0.23.0-0 (#9043)
Rename LLM span types and attributes to use Model prefix
BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
AISpanType.LLM_GENERATION → AISpanType.MODEL_GENERATION
AISpanType.LLM_STEP → AISpanType.MODEL_STEP
AISpanType.LLM_CHUNK → AISpanType.MODEL_CHUNK
LLMGenerationAttributes → ModelGenerationAttributes
LLMStepAttributes → ModelStepAttributes
LLMChunkAttributes → ModelChunkAttributes
InternalSpans.LLM → InternalSpans.MODEL
This change better reflects that these span types apply to all AI models, not just Large Language Models.
Migration guide:
Update all imports: import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'
Update span type references: AISpanType.MODEL_GENERATION
Update InternalSpans usage: InternalSpans.MODEL (#9105)
Update provider registry and model documentation with latest models and providers (c67ca32)
Update provider registry and model documentation with latest models and providers (efb5ed9)
Add deprecation warnings for format:ai-sdk (#9018)
network routing agent text delta ai-sdk streaming (#8979)
Support writing custom top level stream chunks (#8922)
Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
Fix incorrect type assertions in Tool class. Created MastraToolInvocationOptions type to properly extend AI SDK's ToolInvocationOptions with Mastra-specific properties (suspend, resumeData, writableStream). Removed unsafe type assertions from tool execution code. (#8510)
fix(core): Fix Gemini message ordering validation errors (#7287, #8053)
Fixes Gemini API "single turn requests" validation error by ensuring the first non-system message is from the user role. This resolves errors when:
Messages start with assistant role (e.g., from memory truncation)
Tool-call sequences begin with assistant messages
Breaking Change: Empty or system-only message lists now throw an error instead of adding a placeholder user message, preventing confusing LLM responses.
This fix handles both issue #7287 (tool-call ordering) and #8053 (single-turn validation) by inserting a placeholder user message when needed. (#7287)
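The reordering rule can be sketched as follows. The function name, placeholder content, and message shape are illustrative assumptions, not Mastra's internal API:

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Ensure the first non-system message is a user message, per Gemini's
// single-turn validation, by inserting a placeholder user turn when needed.
function ensureUserFirst(messages: Message[]): Message[] {
  const firstNonSystem = messages.find((m) => m.role !== 'system');
  if (!firstNonSystem) {
    // Empty or system-only lists now throw instead of being padded.
    throw new Error('Message list must contain at least one non-system message');
  }
  if (firstNonSystem.role === 'user') return messages;
  const index = messages.indexOf(firstNonSystem);
  return [
    ...messages.slice(0, index),
    { role: 'user', content: '.' }, // placeholder user turn
    ...messages.slice(index),
  ];
}

// E.g. a history whose first turn was truncated away by memory:
const reordered = ensureUserFirst([
  { role: 'system', content: 'You are helpful.' },
  { role: 'assistant', content: 'Reply restored from memory.' },
]);
```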
Add support for external trace and parent span IDs in TracingOptions. This enables integration with external tracing systems by allowing new AI traces to be started with existing traceId and parentSpanId values. The implementation includes OpenTelemetry-compatible ID validation (32 hex chars for trace IDs, 16 hex chars for span IDs). (#9053)
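The ID validation described above can be expressed as two regex checks (illustrative helpers following the stated 32/16 lowercase-hex rule, not the actual exported API):

```typescript
// OpenTelemetry-compatible IDs: trace IDs are 32 hex chars, span IDs 16.
const isValidTraceId = (id: string) => /^[0-9a-f]{32}$/.test(id);
const isValidSpanId = (id: string) => /^[0-9a-f]{16}$/.test(id);

const ok = isValidTraceId('a'.repeat(32)) && isValidSpanId('b'.repeat(16));
const bad = isValidTraceId('not-a-trace-id');
```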
Updated watch and watchAsync methods to use proper function overloads instead of generic conditional types, ensuring compatibility with the base Run class signatures. (#9048)
Fix tracing context propagation to agent steps in workflows
When creating a workflow step from an agent using createStep(myAgent), the tracing context was not being passed to the agent's stream() and streamLegacy() methods. This caused tracing spans to break in the workflow chain.
This fix ensures that tracingContext is properly propagated to both agent.stream() and agent.streamLegacy() calls, matching the behavior of tool steps which already propagate tracingContext correctly. (#9074)
Fixes how reasoning chunks are stored in memory to prevent data loss and ensure they are consolidated as single message parts rather than split into word-level fragments. (#9041)
fixes an issue where input processors couldn't add system or assistant messages. Previously all messages from input processors were forced to be user messages, causing an error when trying to add other role types. (#8835)
fix(core): Validate structured output at text-end instead of flush
Fixes structured output validation for Bedrock and LMStudio by moving validation from flush() to text-end chunk. Eliminates finishReason heuristics, adds special token extraction for LMStudio, and validates at the correct point in stream lifecycle. (#8934)
fix model.loop.test.ts tests to use structuredOutput.schema and add assertions (#8926)
Add initialState as an option to .streamVNext() (#9071)
added resourceId and runId to workflow_run metadata in ai tracing (#9031)
When using OpenAI models with JSON response format, automatically enable strict schema validation. (#8924)
Fix custom metadata preservation in UIMessages when loading threads. The getMessagesHandler now converts messagesV2 (V2 format with metadata) instead of messages (V1 format without metadata) to AIV5.UI format. Also updates the abstract MastraMemory.query() return type to include messagesV2 for proper type safety. (#8938)
Fix TypeScript type errors when using provider-defined tools from external AI SDK packages.
Agents can now accept provider tools like google.tools.googleSearch() without type errors. Creates new @internal/external-types package to centralize AI SDK type re-exports and adds ProviderDefinedTool structural type to handle tools from different package versions/instances due to TypeScript's module path discrimination. (#8940)
feat(ai-tracing): Add automatic metadata extraction from RuntimeContext to spans
Enables automatic extraction of RuntimeContext values as metadata for AI tracing spans across entire traces.
Key features:
Configure runtimeContextKeys in TracingConfig to extract specific keys from RuntimeContext
Add per-request keys via tracingOptions.runtimeContextKeys for trace-specific additions
Supports dot notation for nested values (e.g., 'user.id', 'session.data.experimentId')
TraceState computed once at root span and inherited by all child spans
Explicit metadata in span options takes precedence over extracted metadata
Example:
const mastra = new Mastra({
observability: {
configs: {
default: {
runtimeContextKeys: ['userId', 'environment', 'tenantId']
}
}
}
});
await agent.generate({
messages,
runtimeContext,
tracingOptions: {
runtimeContextKeys: ['experimentId'] // Adds to configured keys
}
});
(#9072)
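The dot-notation lookup used for keys like 'session.data.experimentId' can be sketched as a small helper. This is an illustrative reconstruction, not Mastra's internal implementation:

```typescript
// Walk a nested object by a dot-separated path, returning undefined
// when any intermediate segment is missing.
function getByPath(obj: Record<string, unknown>, path: string): unknown {
  return path.split('.').reduce<unknown>((acc, key) => {
    if (acc && typeof acc === 'object') {
      return (acc as Record<string, unknown>)[key];
    }
    return undefined;
  }, obj);
}

// A RuntimeContext-like bag of values:
const ctx = { user: { id: 'u-1' }, session: { data: { experimentId: 'exp-42' } } };
const userId = getByPath(ctx, 'user.id');
const experimentId = getByPath(ctx, 'session.data.experimentId');
```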
Fix provider tools for popular providers and add support for anthropic/claude skills. (#9038)
Refactor workflowstream into workflow output with fullStream property (#9048)
Added the ability to use model router configs for embedders (eg "openai/text-embedding-ada-002") (#8992)
Always set supportsStructuredOutputs true for openai compatible provider. (#8933)
Support for custom resume labels mapping to step to be resumed (#8941)
added tracing of LLM steps & chunks (#9058)
Fixed an issue where a custom URL in model router still validated unknown providers against the known providers list. Custom URL means we don't necessarily know the provider. This allows local providers like Ollama to work properly (#8989)
Show agent tool output better in playground (#9021)
feat: inject schema context into main agent for processor mode structured output (#8886)
Added providerOptions types to generate/stream for main builtin model router providers (openai/anthropic/google/xai) (#8995)
Generate a title for Agent.network() threads (#8853)
Fix nested workflow events and networks (#9132)
Update provider registry and model documentation with latest models and providers (f743dbb)
Add tool call approval (#8649)
Fix error handling and serialization in agent streaming to ensure errors are consistently exposed and preserved. (#9192)
@lancedb/lancedb@^0.22.2 ↗︎ (from ^0.21.2, in dependencies) (#8693)
Update peerdeps to 0.23.0-0 (#9043)
@google-cloud/text-to-speech@^6.3.1 ↗︎ (from ^6.3.0, in dependencies) (#8936)
Model configuration has been unified across @mastra/core, @mastra/evals, and related packages, with all components now accepting the same flexible Model Configuration. This enables consistent model specification using magic strings ("openai/gpt-4o"), config objects with custom URLs, or dynamic resolution functions across scorers, processors, and relevance scoring components.
// All of these now work everywhere models are accepted
const scorer = createScorer({
judge: { model: "openai/gpt-4o" } // Magic string
});
const processor = new ModerationProcessor({
model: { id: "custom/model", url: "https://..." } // Custom config
});
const relevanceScorer = new MastraAgentRelevanceScorer(
async (ctx) => ctx.getModel() // Dynamic function
);
We've revamped the AI-SDK documentation. You can now use the useChat hook on Networks and Workflows. When you're using Agents and Workflows as a tool, you will receive a custom data component that allows you to render a tailored Tool Widget containing all the necessary information.
"use client";
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { AgentTool } from '../ui/agent-tool';
import type { AgentDataPart } from "@mastra/ai-sdk";
export default function Page() {
const { messages } = useChat({
transport: new DefaultChatTransport({
api: 'http://localhost:4111/chat',
}),
});
return (
<div>
{messages.map((message) => (
<div key={message.id}>
{message.parts.map((part, i) => {
switch (part.type) {
case 'data-tool-agent':
return (
<AgentTool {...part.data as AgentDataPart} key={`${message.id}-${i}`} />
);
default:
return null;
}
})}
</div>
))}
</div>
);
}
We've updated the build pipeline to better support TypeScript packages in workspaces. We now detect packages that we cannot build, mostly binary modules, and log instructions on how to handle them.
Fix aisdk format in workflow breaking stream (#8716)
Standardize model configuration across all Mastra components
All model configuration points now accept the same flexible MastraModelConfig type as the Agent class:
This change provides a unified model configuration experience matching the Agent class, including magic strings (e.g. "openai/gpt-4o"). Example:
// All of these now work everywhere models are accepted
const scorer = createScorer({
judge: { model: "openai/gpt-4o" } // Magic string
});
const processor = new ModerationProcessor({
model: { id: "custom/model", url: "https://..." } // Custom config
});
const relevanceScorer = new MastraAgentRelevanceScorer(
async (ctx) => ctx.getModel() // Dynamic function
);
(#8626)
fix: preserve providerOptions through message list conversions (#8836)
improve error propagation in agent stream failures (#8733)
prevent duplicate deprecation warning logs and deprecate modelSettings.abortSignal in favor of top-level abortSignal (#8840)
Removed logging of massive model objects in tool failures (#8839)
Create unified Sidebar component to use on Playground and Cloud (#8655)
Added tracing of input & output processors (this includes using structuredOutput) (#8623)
support model router in structured output and client-js (#8686)
ai-sdk workflow route, agent network route (#8672)
Handle maxRetries in agent.generate/stream properly. Add deprecation warning to top level abortSignal in AgentExecuteOptions as that property is duplicated inside of modelSettings as well. (#8729)
Include span id and trace id when running live scorers (#8842)
Added deprecation warnings for stream and observeStream. We will switch the implementation to streamVNext/observeStreamVNext in the future. (#8701)
Add div wrapper around entity tables to fix table vertical position (#8758)
Customize AITraces type to seamlessly work on Cloud too (#8759)
Refactor EntryList component and Scorer and Observability pages (#8652)
Update structuredOutput to use response format by default with an opt in to json prompt injection. Replaced internal usage of output with structuredOutput. (#8557)
Add support for exporting scores for external observability providers (#8335)
Stream finalResult from network loop (#8795)
Fix broken generateTitle behaviour (#8726); make generateTitle: true the default memory setting (#8800)
Standardize model configuration across all components to support flexible model resolution
All model configuration points now accept MastraModelConfig, enabling consistent model specification across:
Scorers (createScorer and all built-in scorers)
Input/Output Processors (ModerationProcessor, PIIDetector)
Relevance Scorers (MastraAgentRelevanceScorer)
Supported formats:
Magic strings: 'openai/gpt-4o-mini'
Config objects: { id: 'openai/gpt-4o-mini' } or { providerId: 'openai', modelId: 'gpt-4o-mini' }
Custom endpoints: { id: 'custom/model', url: 'https://...', apiKey: '...' }
Dynamic resolution: (ctx) => 'openai/gpt-4o-mini'
This change provides a unified model configuration experience matching the Agent class, making it easier to switch models and use custom providers across all Mastra components. (#8626)
Improve README (#8819)
nested ai-sdk workflows and networks streaming support (#8614)
@rollup/plugin-node-resolve@^16.0.2 ↗︎ (from ^16.0.1, in dependencies) (#8599)
mastra build & mastra start (#8653)
redis@^5.8.3 ↗︎ (from ^5.8.2, in dependencies) (#8635)
Update peer dependencies to match core package version bump (0.21.0) (#8619)
Update peer dependencies to match core package version bump (0.21.0) (#8557)
Update peer dependencies to match core package version bump (0.21.0) (#8626)
Update peer dependencies to match core package version bump (0.21.0) (#8686)
@inngest/realtime@^0.4.4 ↗︎ (from ^0.3.1, in dependencies) (#8647)
inngest@^3.44.2 ↗︎ (from ^3.40.3, in dependencies) (#8651)
Update peer dependencies to match core package version bump (0.21.0) (#8619)
Add AI SDK v5 compatibility to Langfuse exporter while maintaining backward compatibility with v4
Features:
Normalize token usage to handle both AI SDK v4 format (promptTokens/completionTokens) and v5 format (inputTokens/outputTokens)
Support AI SDK v5-specific features:
Automatic detection and normalization of token formats with v5 taking precedence
Comprehensive type definitions with JSDoc annotations indicating version compatibility
Technical Changes:
Added NormalizedUsage interface with detailed version documentation
Implemented normalizeUsage() method using nullish coalescing (??) to safely handle both formats
Added 8 new test cases covering v4/v5 compatibility scenarios
Updated documentation with AI SDK v5 compatibility guide
Breaking Changes: None - fully backward compatible with existing AI SDK v4 implementations (#8790)
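The normalization described above can be sketched as follows. This is a minimal illustration assuming simplified field sets; the actual `NormalizedUsage` interface and `normalizeUsage()` method in the Langfuse exporter may differ in detail.

```typescript
// Minimal sketch: v4 uses promptTokens/completionTokens, v5 uses
// inputTokens/outputTokens; nullish coalescing (??) lets v5 take precedence.
interface NormalizedUsage {
  promptTokens?: number;
  completionTokens?: number;
  totalTokens?: number;
}

function normalizeUsage(usage: Record<string, number | undefined>): NormalizedUsage {
  // v5 fields win when both formats are present.
  const promptTokens = usage.inputTokens ?? usage.promptTokens;
  const completionTokens = usage.outputTokens ?? usage.completionTokens;
  // Derive a total when the source did not report one.
  const totalTokens =
    usage.totalTokens ??
    (promptTokens !== undefined && completionTokens !== undefined
      ? promptTokens + completionTokens
      : undefined);
  return { promptTokens, completionTokens, totalTokens };
}
```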
Update peer dependencies to match core package version bump (0.21.0) (#8557)
Update peer dependencies to match core package version bump (0.21.0) (#8626)
Support exporting scores to Langfuse traces (#8335)
Update peer dependencies to match core package version bump (0.21.0) (#8686)
dependencies updates:
redis@^5.8.3 ↗︎ (from ^5.8.2, in dependencies) (#8635)
dependencies updates:
@upstash/redis@^1.35.5 ↗︎ (from ^1.35.4, in dependencies) (#8684)
feat(pg): add flexible PostgreSQL configuration with shared types
Introduce shared PostgresConfig type with generic SSL support (ISSLConfig for pg-promise, ConnectionOptions for pg)
Add pgPoolOptions support to PgVector for advanced pool configuration
Create shared validation helpers to reduce code duplication
Maintain backward compatibility with existing configurations (#8103)
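As a rough sketch of what "shared types plus validation helpers" can look like: the `PostgresConfig` shape and `validatePostgresConfig` helper below are hypothetical illustrations, not the actual exports of `@mastra/pg`, which also carry SSL types (`ISSLConfig` for pg-promise, `ConnectionOptions` for pg) and `pgPoolOptions`.

```typescript
// Hypothetical simplified shape: either a connection string or discrete fields.
type PostgresConfig =
  | { connectionString: string; ssl?: boolean | object }
  | {
      host: string;
      port?: number;
      database: string;
      user: string;
      password: string;
      ssl?: boolean | object;
    };

// Hypothetical shared validation helper: accept a config when it provides
// a non-empty connection string, or all required discrete fields.
function validatePostgresConfig(config: Record<string, unknown>): boolean {
  if (typeof config.connectionString === "string" && config.connectionString.length > 0) {
    return true;
  }
  return ["host", "database", "user", "password"].every(
    key => typeof config[key] === "string" && (config[key] as string).length > 0,
  );
}
```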
Update peer dependencies to match core package version bump (0.21.0) (#8619)
Update peer dependencies to match core package version bump (0.21.0) (#8557)
Update peer dependencies to match core package version bump (0.21.0) (#8626)
Update peer dependencies to match core package version bump (0.21.0) (#8686)
Add resourceAttributes to OtelExporterConfig so that attributes like deployment.environment can be set in the new OpenTelemetry exporter. (#8700)
Update peer dependencies to match core package version bump (0.21.0) (#8619)
Update peer dependencies to match core package version bump (0.21.0) (#8557)
Update peer dependencies to match core package version bump (0.21.0) (#8626)
Update peer dependencies to match core package version bump (0.21.0) (#8686)
zod@^4.1.12 ↗︎ (from ^4.1.9, in dependencies) (#8685)
@upstash/redis@^1.35.5 ↗︎ (from ^1.35.4, in dependencies) (#8684)
Workflows now support global state. You can read state in each of your defined steps and set it with setState. This makes it easier to manage state over multiple steps instead of passing it through input/output variables.
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

const firstStep = createStep({
  id: "first-step",
  execute({ setState }) {
    setState({
      myValue: "a value",
    });
  },
});

const secondStep = createStep({
  id: "second-step",
  execute({ state }) {
    console.log(state.myValue);
  },
});

createWorkflow({
  id: "my-workflow",
  stateSchema: z.object({
    myValue: z.string(),
  }),
})
  .then(firstStep)
  .then(secondStep)
  .commit();
Working memory can now be stored in thread metadata, allowing you to set the initial working memory directly.
const thread = await memory.createThread({
threadId: "thread-123",
resourceId: "user-456",
title: "Medical Consultation",
metadata: {
workingMemory: `# Patient Profile
- Name: John Doe
- Blood Type: O+
- Allergies: Penicillin
- Current Medications: None
- Medical History: Hypertension (controlled)`,
},
});
Improved useChat support from the AI SDK when you're using agents in your tools: you now get a custom UI message part called data-tool-agent with all relevant information.
// in src/mastra.ts
export const mastra = new Mastra({
server: {
apiRoutes: [
chatRoute({
path: "/chat",
agent: "my-agent",
}),
],
},
});
// in my useChat component (MyMessage is your app's UIMessage type)
const { error, status, sendMessage, messages, regenerate, stop } =
  useChat<MyMessage>({
    transport: new DefaultChatTransport({
      api: 'http://localhost:4111/chat',
    }),
  });
return (
  <div className="flex flex-col pt-24 mx-auto w-full max-w-4xl h-screen">
    <div className="flex flex-row mx-auto w-full overflow-y-auto gap-4">
      <div className="flex-1">
        {messages.map(message => (
          <div key={message.id} className="whitespace-pre-wrap">
            {message.role === 'user' ? 'User: ' : 'AI: '}{' '}
            {message.parts
              .filter(part => part.type === 'data-tool-agent')
              .map(part => (
                <CustomWidget key={part.id} {...part.data} />
              ))}
            {message.parts
              .filter(part => part.type === 'text')
              .map((part, index) => (
                <div key={index}>{part.text}</div>
              ))}
          </div>
        ))}
      </div>
    </div>
  </div>
);
Fix TypeScript errors with provider-defined tools by updating ai-v5 and openai-v5 to matching provider-utils versions. This ensures npm deduplicates to a single provider-utils instance, resolving type incompatibility issues when passing provider tools to Agent.
Also adds deprecation warning to Agent import from root path to encourage using the recommended subpath import. (#8584)
chromadb@^3.0.17 ↗︎ (from ^3.0.15, in dependencies) (#8554)
Make workflow run thread more visible (#8539)
Add iterationCount to loop condition params (#8579)
Mutable shared workflow run state (#8545)
avoid refetching memory threads and messages on window focus (#8519)
add tripwire reason in playground (#8568)
Add validation for index creation (#8552)
Save waiting step status in snapshot (#8576)
Added AI SDK provider packages to model router for anthropic/google/openai/openrouter/xai (#8559)
type fixes and missing changeset (#8545)
Convert WorkflowWatchResult to WorkflowResult in workflow graph (#8541)
add new deploy to cloud button (#8549)
remove icons in entity lists (#8520)
add client search to all entities (#8523)
Improve JSDoc documentation for Agent (#8389)
Properly fix cloudflare randomUUID in global scope issue (#8450)
Marked OTEL based telemetry as deprecated. (#8586)
Add support for streaming nested agent tools (#8580)
UX for the agents page (#8517)
add icons into playground titles + a link to the entity doc (#8518)
fix: custom API routes now properly respect authentication requirements
Fixed a critical bug where custom routes were bypassing authentication when they should have been protected by default. The issue was in the isProtectedPath function which only checked pattern-based protection but ignored custom route configurations.
Routes with explicit requiresAuth: true are now protected
Custom routes properly inherit protection from parent patterns (like /api/*)
Routes with explicit requiresAuth: false continue to work as public endpoints
Enhanced isProtectedPath to consider both pattern matching and custom route auth config
This fixes issue #8421 where custom routes were not being properly protected by the authentication system. (#8469)
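The fixed lookup logic can be sketched as follows. This is a simplified illustration assuming minimal shapes; the actual isProtectedPath function and route config types in Mastra's server differ.

```typescript
// Hypothetical simplified custom-route config.
interface CustomRoute {
  path: string;
  requiresAuth?: boolean;
}

// A path is protected when a custom route explicitly opts in, or when it
// matches a protected pattern; an explicit requiresAuth: false opts out.
function isProtectedPath(
  path: string,
  protectedPatterns: RegExp[],
  customRoutes: CustomRoute[],
): boolean {
  const route = customRoutes.find(r => r.path === path);
  // Explicit opt-out always wins: the route stays a public endpoint.
  if (route?.requiresAuth === false) return false;
  // Explicit opt-in protects the route even without a matching pattern.
  if (route?.requiresAuth === true) return true;
  // Otherwise inherit protection from parent patterns (like /api/*).
  return protectedPatterns.some(pattern => pattern.test(path));
}
```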
Correctly handle errors in streams. Errors (e.g. rate limiting) before the stream begins are now returned with their code. Mid-stream errors are passed as a chunk (with type: 'error') to the stream. (#8567)
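On the consumer side, this behavior means mid-stream errors arrive as regular chunks rather than exceptions. The sketch below illustrates that split with simplified, assumed chunk shapes (the real Mastra stream chunks carry more fields).

```typescript
// Simplified assumed chunk shapes for illustration.
type Chunk =
  | { type: "text-delta"; text: string }
  | { type: "error"; error: { code?: string; message: string } };

// Collect text while surfacing mid-stream errors without discarding the
// output already received before the error chunk arrived.
function collectStream(chunks: Chunk[]): { text: string; errors: string[] } {
  const errors: string[] = [];
  let text = "";
  for (const chunk of chunks) {
    if (chunk.type === "error") {
      errors.push(chunk.error.message);
    } else {
      text += chunk.text;
    }
  }
  return { text, errors };
}
```

Errors before the stream begins (e.g. rate limiting) never reach this loop; they are returned up front with their code.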
Mutable shared workflow run state (#8545)
Fix bug when lodash dependencies were used in subdependencies (#8537)
@aws-sdk/client-dynamodb@^3.902.0 ↗︎ (from ^3.896.0, in dependencies)
@aws-sdk/lib-dynamodb@^3.902.0 ↗︎ (from ^3.896.0, in dependencies) (#8436)
langsmith@>=0.3.72 ↗︎ (from >=0.3.71, in dependencies) (#8560)
Ensure working memory can be updated through createThread and updateThread (#8513)
@aws-sdk/client-s3vectors@^3.901.0 ↗︎ (from ^3.896.0, in dependencies) (#8436)
Mutable shared workflow run state (#8545)
This release includes improvements to documentation, playground functionality, API naming conventions, and various bug fixes across the platform.
We are excited to announce the release of our new model router and model fallbacks! You can now choose any model provider and model without the need to install or import it. If one model is not functioning properly, you will automatically fall back to another model.
🦋 New tag: @mastra/ai-sdk@0.0.5 🦋 New tag: @mastra/client-js@0.14.1 🦋 New tag: @mastra/react@0.0.2 🦋 New tag: @mastra/deployer-cloud@0.19.1 🦋 New tag: @mastra/deployer-cloudflare@0.14.4 🦋 New tag: @mastra/deployer-netlify@0.13.4 🦋 New tag: @mastra/deployer-vercel@0.12.4 🦋 New tag: @mastra/longmemeval@0.1.23 🦋 New tag: @mastra/braintrust@0.1.5 🦋 New tag: @mastra/langfuse@0.0.10 🦋 New tag: @mastra/langsmith@0.0.1 🦋 New tag: @mastra/otel-exporter@0.0.2 🦋 New tag: @mastra/agent-builder@0.0.7 🦋 New tag: mastra@0.13.4 🦋 New tag: @mastra/cloud@0.1.17 🦋 New tag: @mastra/core@0.19.1 🦋 New tag: create-mastra@0.13.4 🦋 New tag: @mastra/deployer@0.19.1 🦋 New tag: @mastra/evals@0.13.9 🦋 New tag: @mastra/loggers@0.10.14 🦋 New tag: @mastra/mcp@0.13.2 🦋 New tag: @mastra/mcp-docs-server@0.13.26 🦋 New tag: @mastra/mcp-registry-registry@0.10.17 🦋 New tag: @mastra/memory@0.15.4 🦋 New tag: @mastra/playground-ui@6.2.4 🦋 New tag: @mastra/rag@1.2.7 🦋 New tag: @mastra/server@0.19.1 🦋 New tag: @mastra/google-cloud-pubsub@0.1.7 🦋 New tag: @mastra/astra@0.11.11 🦋 New tag: @mastra/chroma@0.11.11 🦋 New tag: @mastra/clickhouse@0.15.3 🦋 New tag: @mastra/cloudflare@0.12.3 🦋 New tag: @mastra/cloudflare-d1@0.13.3 🦋 New tag: @mastra/couchbase@0.11.11 🦋 New tag: @mastra/dynamodb@0.15.4 🦋 New tag: @mastra/lance@0.3.3 🦋 New tag: @mastra/libsql@0.15.0 🦋 New tag: @mastra/mongodb@0.14.3 🦋 New tag: @mastra/mssql@0.4.3 🦋 New tag: @mastra/opensearch@0.11.12 🦋 New tag: @mastra/pg@0.17.0 🦋 New tag: @mastra/pinecone@0.11.11 🦋 New tag: @mastra/qdrant@0.11.14 🦋 New tag: @mastra/s3vectors@0.2.4 🦋 New tag: @mastra/turbopuffer@0.11.11 🦋 New tag: @mastra/upstash@0.15.3 🦋 New tag: @mastra/vectorize@0.11.12 🦋 New tag: @mastra/voice-azure@0.10.14 🦋 New tag: @mastra/voice-cloudflare@0.11.7 🦋 New tag: @mastra/voice-deepgram@0.11.7 🦋 New tag: @mastra/voice-elevenlabs@0.11.7 🦋 New tag: @mastra/voice-gladia@0.11.7 🦋 New tag: @mastra/voice-google@0.11.7 🦋 New tag: @mastra/voice-google-gemini-live@0.10.13 🦋 New tag:
@mastra/voice-murf@0.11.7 🦋 New tag: @mastra/voice-openai@0.11.7 🦋 New tag: @mastra/voice-openai-realtime@0.11.7 🦋 New tag: @mastra/voice-playai@0.11.7 🦋 New tag: @mastra/voice-sarvam@0.11.7 🦋 New tag: @mastra/voice-speechify@0.11.7 🦋 New tag: @mastra/inngest@0.14.2
getStepResult in workflow steps, updates its documentation, and adds related tests. #8065
ScoreDialog, ScoresTools), updates score rendering to use EntryList, adds a simplified code section option, and fixes minor issues. #7872
.network() returns a network stream class. #7763
stopWhen condition in vNext to ensure the correct accumulated step results across iterations. #7862
ToolInvocationOptions type alias as a union of ToolExecutionOptions and ToolCallOptions to resolve TypeScript portability issues by abstracting direct references to ai package types in createTool and related interfaces. #7914
serve/inngestServe function, allowing users to pass custom functions alongside Mastra workflow functions for more flexible integration, while maintaining backward compatibility and type safety. #7900 [TIER2]
🦋 @mastra/auth-auth0@0.10.5 🦋 @mastra/auth-clerk@0.10.5 🦋 @mastra/auth-firebase@0.10.4 🦋 @mastra/auth-supabase@0.10.6 🦋 @mastra/auth-workos@0.10.7 🦋 @mastra/ai-sdk@0.0.3 🦋 @mastra/client-js@0.13.0 🦋 @mastra/deployer-cloud@0.17.0 🦋 @mastra/deployer-cloudflare@0.14.0 🦋 @mastra/deployer-netlify@0.13.0 🦋 @mastra/deployer-vercel@0.12.0 🦋 @mastra/braintrust@0.1.3 🦋 @mastra/langfuse@0.0.8 🦋 @mastra/agent-builder@0.0.5 🦋 mastra@0.13.0 🦋 @mastra/cloud@0.1.15 🦋 @mastra/core@0.17.0 🦋 create-mastra@0.13.0 🦋 @mastra/deployer@0.17.0 🦋 @mastra/evals@0.13.7 🦋 @mastra/loggers@0.10.12 🦋 @mastra/mcp@0.13.0 🦋 @mastra/mcp-docs-server@0.13.22 🦋 @mastra/mcp-registry-registry@0.10.15 🦋 @mastra/memory@0.15.2 🦋 @mastra/playground-ui@6.2.0 🦋 @mastra/rag@1.2.5 🦋 @mastra/server@0.17.0 🦋 @mastra/google-cloud-pubsub@0.1.5 🦋 @mastra/astra@0.11.9 🦋 @mastra/chroma@0.11.9 🦋 @mastra/clickhouse@0.15.1 🦋 @mastra/cloudflare@0.12.1 🦋 @mastra/cloudflare-d1@0.13.1 🦋 @mastra/couchbase@0.11.9 🦋 @mastra/dynamodb@0.15.2 🦋 @mastra/lance@0.3.1 🦋 @mastra/libsql@0.14.2 🦋 @mastra/mongodb@0.14.1 🦋 @mastra/mssql@0.4.1 🦋 @mastra/opensearch@0.11.10 🦋 @mastra/pg@0.16.0 🦋 @mastra/pinecone@0.11.9 🦋 @mastra/qdrant@0.11.11 🦋 
@mastra/s3vectors@0.2.2 🦋 @mastra/turbopuffer@0.11.9 🦋 @mastra/upstash@0.15.1 🦋 @mastra/vectorize@0.11.10 🦋 @mastra/voice-azure@0.10.12 🦋 @mastra/voice-cloudflare@0.11.5 🦋 @mastra/voice-deepgram@0.11.5 🦋 @mastra/voice-elevenlabs@0.11.5 🦋 @mastra/voice-gladia@0.11.5 🦋 @mastra/voice-google@0.11.5 🦋 @mastra/voice-google-gemini-live@0.10.11 🦋 @mastra/voice-murf@0.11.5 🦋 @mastra/voice-openai@0.11.5 🦋 @mastra/voice-openai-realtime@0.11.5 🦋 @mastra/voice-playai@0.11.5 🦋 @mastra/voice-sarvam@0.11.5 🦋 @mastra/voice-speechify@0.11.5 🦋 @mastra/inngest@0.14.0
yarn pack to properly handle native dependencies in yarn monorepos. #7570
transpilePackages feature during mastra dev by adding stricter checks and changing the transpilation output location for workspace packages to improve reliability and avoid embedding third-party dependencies. #7572
Workflow.createRecordStream method in the JavaScript client SDK, increasing coverage by 15 lines and verifying both success and error scenarios for streaming record serialization. #7594
getStep utility, improving coverage of workflow navigation and nested EventedWorkflow resolution logic. #7433
experimental_output and removes references to structuredOutput in example code. #6854
StoreOperationsInMemory.batchInsert method, verifying correct ID assignment, edge case handling, and increasing code coverage by 15 lines. #7107
convertFullStreamChunkToUIMessageStream correctly transforms tool-output stream parts into UI message format, accurately mapping fields and preserving all output properties. #7130
CohereRelevanceScorer, covering API request formatting, authentication, success and error scenarios, edge cases, and increases code coverage by 29 lines. #7214
z.toJSONSchema in the core and schema-compat modules, addressing issues reported in #7283 and affecting the openapi-spec-writer example. #7350
.streamVNext() method to enable streaming of agent events within workflows, with future updates planned for additional streaming formats and execution engines. #7413
.stream() event types for workflows. #7424
We're thrilled to announce the release of support for Zod v4, while maintaining compatibility with Zod v3! We've also revamped our observability under the new name ai-tracing (documentation is on the way). Plus, you'll discover improvements in our streamVNext and generateVNext methods.
InMemoryLegacyEvals.getEvals method, covering field transformation, pagination, agent and type filtering, empty collection handling, and data integrity, increasing test coverage by 39 lines. #6984
This release introduces two major features: the new streamVNext and generateVNext methods that support multiple output formats (Mastra and AI SDK v5 formats), and output processors for transforming agent streams and results.
streamVNext and generateVNext methods with configurable output formats - choose between Mastra's native format or AI SDK v5 format #6877
streamVNext() function API #6876
DefaultExecutionEngine.executeConditional error handling during condition evaluation #6829
mitt.off method, verifying event handler removal and edge cases #6813
.map method #6799
agentIds field, and also removes stray console logs and corrects typos in related tests. #6683
The default Google Gemini model from the deprecated experimental 'gemini-2.5-pro-exp-03-25' to the stable 'gemini-2.5-pro' model to fix onboarding issues caused by the removed model. #6443
The getVoice and getScorers functions from agent trace logging. #6590
A unified types builder utility to ensure all generated TypeScript declaration files use proper .js import extensions for ESM compatibility, updates build and tsup configurations across the monorepo, and resolves type generation issues. #6588 [TIER2]
The convert message list v1 function to include message deduplication and adds corresponding tests. #6597
Update safelyParseJSON to better handle parameter values, fixing an issue where certain database rows were not properly retrieved from the vector DB, which caused traces to fail to load for stores like PostgreSQL. #6576
Mastra core peer dependencies to accommodate breaking changes in the scorer API and score data storage. #6578
Setup a template for converting PDFs to podcasts. #6101 [TIER2]
Updates the examples for Agents and Tools. #6508
The workflow example to use the current Mastra API, refactors logic for better step usage, and ensures compatibility and type safety for both even and odd input scenarios. #6583
Updates the documentation by adding an image example to the agents overview section. #6442
createMockModel utility to the test scope and removes its export from the MCP package, eliminating unnecessary test dependencies from production and introducing a breaking change for users importing it. #6180
LibSQLVector.doUpsert, preventing unnecessary errors. #6158
Mastra.prototype.#vectors. #6023
createRun method to createRunAsync in the client-js library to align it with the Mastra Workflow class, addressing issue #5996. #6110