Fix Zod 4 compatibility for storage schema detection (#11431)
If you're using Zod 4, buildStorageSchema was failing to detect nullable and optional fields correctly. This caused NOT NULL constraint failed errors when storing observability spans and other data.
This fix enables proper schema detection for Zod 4 users, ensuring nullable fields like parentSpanId are correctly identified and don't cause database constraint violations.
Fix OpenAI structured output compatibility for fields with .default() values (#11434)
When using Zod schemas with .default() fields (e.g., z.number().default(1)), OpenAI's structured output API was failing with errors like Missing '<field>' in required. This happened because zod-to-json-schema doesn't include fields with defaults in the required array, but OpenAI requires all properties to be required.
This fix converts .default() fields to .nullable() with a transform that returns the default value when null is received, ensuring compatibility with OpenAI's strict mode while preserving the original default value semantics.
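The default-restoring transform can be sketched in isolation (the helper name is hypothetical; the actual fix applies this inside the Zod pipeline when converting the schema for OpenAI):

```typescript
// Sketch: a field declared with .default(v) is sent to OpenAI as nullable;
// when the model returns null, the transform restores the original default.
function nullableWithDefault<T>(defaultValue: T) {
  return (received: T | null | undefined): T => received ?? defaultValue;
}

// Mirrors the z.number().default(1) example from the entry above.
const restoreCount = nullableWithDefault(1);
```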
Full Changelog: 4cbe850
Fixed inline type narrowing for tool.execute() return type when using outputSchema. (#11420)
Problem: When calling tool.execute(), TypeScript couldn't narrow the ValidationError | OutputType union after checking 'error' in result && result.error, causing type errors when accessing output properties.
Solution:
- Added { error?: never } to the success type, enabling proper discriminated union narrowing
- Fixed createTool generics so inputData is correctly typed based on inputSchema
Note: Tool output schemas should not use error as a field name since it's reserved for ValidationError discrimination. Use errorMessage or similar instead.
Usage:
const result = await myTool.execute({ firstName: 'Hans' });
if ('error' in result && result.error) {
console.error('Validation failed:', result.message);
return;
}
// ✅ TypeScript now correctly narrows result
return { fullName: result.fullName };
Add storage composition to MastraStorage (#11401)
MastraStorage can now compose storage domains from different adapters. Use it when you need different databases for different purposes - for example, PostgreSQL for memory and workflows, but a different database for observability.
import { MastraStorage } from '@mastra/core/storage';
import { MemoryPG, WorkflowsPG, ScoresPG } from '@mastra/pg';
import { MemoryLibSQL } from '@mastra/libsql';
// Compose domains from different stores
const storage = new MastraStorage({
id: 'composite',
domains: {
memory: new MemoryLibSQL({ url: 'file:./local.db' }),
workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
},
});
Breaking changes:
- The storage.supports property no longer exists
- The StorageSupports type is no longer exported from @mastra/core/storage
All stores now support the same features. For domain availability, use getStore():
const store = await storage.getStore('memory');
if (store) {
// domain is available
}
Add a cancel() method as an alias for cancelRun() in the Run class. The new method provides a more concise API while maintaining backward compatibility. Includes comprehensive documentation about abort signals and how steps can respond to cancellation. (#11417)
Add onError hook to server configuration for custom error handling. (#11403)
You can now provide a custom error handler through the Mastra server config to catch errors, format responses, or send them to external services like Sentry:
import { Mastra } from '@mastra/core/mastra';
const mastra = new Mastra({
server: {
onError: (err, c) => {
// Send to Sentry
Sentry.captureException(err);
// Return custom formatted response
return c.json(
{
error: err.message,
timestamp: new Date().toISOString(),
},
500,
);
},
},
});
If no onError is provided, the default error handler is used.
Fixes #9610
fix(observability): start MODEL_STEP span at beginning of LLM execution (#11409)
The MODEL_STEP span was being created when the step-start chunk arrived (after the model API call completed), causing the span's startTime to be close to its endTime instead of accurately reflecting when the step began.
This fix ensures MODEL_STEP spans capture the full duration of each LLM execution step, including the API call latency, by starting the span at the beginning of the step execution rather than when the response starts streaming.
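The timing change can be illustrated with a generic wrapper (this is not Mastra's tracer API, just a sketch of why starting the span earlier captures API latency):

```typescript
// Sketch: the span's clock starts before the model call, so the measured
// duration includes the API round trip rather than only the streaming tail.
async function timedStep<T>(fn: () => Promise<T>): Promise<{ result: T; durationMs: number }> {
  const startTime = Date.now(); // span starts at the beginning of step execution
  const result = await fn();    // model API call latency is now inside the span
  const durationMs = Date.now() - startTime;
  return { result, durationMs };
}
```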
Fixes #11271
Add support for instructions field in MCPServer (#11421)
Implements the official MCP specification's instructions field, which allows MCP servers to provide system-wide prompts that are automatically sent to clients during initialization. This eliminates the need for per-project configuration files (like AGENTS.md) by centralizing the system prompt in the server definition.
What's New:
- Adds an optional instructions field to the MCPServerConfig type
- Sends the instructions to clients in the InitializeResult response
Example Usage:
const server = new MCPServer({
name: 'GitHub MCP Server',
version: '1.0.0',
instructions:
'Use the available tools to help users manage GitHub repositories, issues, and pull requests. Always search before creating to avoid duplicates.',
tools: { searchIssues, createIssue, listPRs },
});
Fix various places in core package where we were logging with console.error instead of the mastra logger. (#11425)
fix(workflows): ensure writer.custom() bubbles up from nested workflows and loops (#11422)
Previously, when using writer.custom() in steps within nested sub-workflows or loops (like dountil), the custom data events would not properly bubble up to the top-level workflow stream. This fix ensures that custom events are now correctly propagated through the nested workflow hierarchy without modification, allowing them to be consumed at the top level.
This brings workflows in line with the existing behavior for agents, where custom data chunks properly bubble up through sub-agent execution.
What changed:
- Updated the nestedWatchCb function in workflow event handling to detect and preserve data-* custom events
Example:
const subStep = createStep({
id: 'subStep',
execute: async ({ writer }) => {
await writer.custom({
type: 'custom-progress',
data: { status: 'processing' },
});
return { result: 'done' };
},
});
const subWorkflow = createWorkflow({ id: 'sub' }).then(subStep).commit();
const topWorkflow = createWorkflow({ id: 'top' }).then(subWorkflow).commit();
const run = await topWorkflow.createRun();
const stream = run.stream({ inputData: {} });
// Custom events from subStep now properly appear in the top-level stream
for await (const event of stream) {
if (event.type === 'custom-progress') {
console.log(event.data); // { status: 'processing' }
}
}
Fix missing timezone columns during PostgreSQL spans table migration (#11419)
Fixes issue #11410 where users upgrading to observability beta.7 encountered errors about missing startedAtZ, endedAtZ, createdAtZ, and updatedAtZ columns. The migration now properly adds timezone-aware columns for all timestamp fields when upgrading existing databases, ensuring compatibility with the new observability implementation that requires these columns for batch operations.
Fix workflow observability view broken by invalid entityType parameter (#11427)
The UI workflow observability view was failing with a Zod validation error when trying to filter traces by workflow. The UI was sending entityType=workflow, but the backend's EntityType enum only accepts workflow_run.
Root Cause: The legacy value transformation was happening in the handler (after validation), but Zod validation occurred earlier in the request pipeline, rejecting the request before it could be transformed.
Solution:
- Added z.preprocess() to the query schema to transform workflow → workflow_run before validation
- Used the EntityType.WORKFLOW_RUN enum value for type safety
This maintains backward compatibility with legacy clients while fixing the validation error.
Fixes #11412
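The normalization can be sketched as a plain function (the actual fix wires an equivalent transform into z.preprocess() on the query schema; the function name is illustrative):

```typescript
// Map the legacy UI value onto the enum value the backend accepts,
// leaving every other value untouched for normal validation.
function normalizeEntityType(value: unknown): unknown {
  return value === 'workflow' ? 'workflow_run' : value;
}
```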
Add container queries, adjust the agent chat and use container queries to better display information on the agent sidebar (#11408)
Full Changelog: a82c275
Fix model-level and runtime header support for LLM calls (#11303)
This fixes a bug where custom headers configured on models (like anthropic-beta) were not being passed through to the underlying AI SDK calls. The fix properly handles headers from multiple sources with correct priority:
Header Priority (low to high):
Examples that now work:
// Model config headers
new Agent({
model: {
id: 'anthropic/claude-4-5-sonnet',
headers: { 'anthropic-beta': 'context-1m-2025-08-07' },
},
});
// Runtime headers override config
agent.generate('...', {
modelSettings: { headers: { 'x-custom': 'runtime-value' } },
});
// Provider-level headers preserved
const openai = createOpenAI({ headers: { 'openai-organization': 'org-123' } });
new Agent({ model: openai('gpt-4o-mini') });
Add helpful JSDoc comments to BundlerConfig properties (used with bundler option) (#11300)
Fix telemetry disabled configuration being ignored by decorators (#11267)
The hasActiveTelemetry() function now properly checks the enabled configuration flag before creating spans. Previously, it only checked if a tracer existed (which always returns true in OpenTelemetry), causing decorators to create spans even when telemetry: { enabled: false } was set.
What changed:
- Updated hasActiveTelemetry() to check globalThis.__TELEMETRY__?.isEnabled() before checking for tracer existence
How to use:
// Telemetry disabled at initialization
const mastra = new Mastra({
telemetry: { enabled: false },
});
// Or disable at runtime
Telemetry.setEnabled(false);
Breaking changes: None - this is a bug fix that makes the existing API work as documented.
Allow for bundler.externals: true to be set. (#11300)
With this configuration during mastra build all dependencies (except workspace dependencies) will be treated as "external" and not bundled. Instead they will be added to the .mastra/output/package.json file.
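Assuming the bundler option sits on the Mastra config (per the BundlerConfig entry above), the setting looks roughly like this sketch:

```typescript
import { Mastra } from '@mastra/core/mastra';

// Sketch: mark all non-workspace dependencies as external during `mastra build`;
// they are listed in .mastra/output/package.json instead of being bundled.
const mastra = new Mastra({
  bundler: {
    externals: true,
  },
});
```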
Fix generate system prompt by updating deprecated function call. (#11075)
Full Changelog: fd37787
Full Changelog: aab8fd1
The @withSpan decorator now uses bounded serialization utilities to prevent unbounded memory growth when tracing agents with large inputs like base64 images. (#11231)
Add support for AI SDK v6 (LanguageModelV3) (#11191)
Agents can now use LanguageModelV3 models from AI SDK v6 beta providers like @ai-sdk/openai@^3.0.0-beta.
New features:
- reasoningTokens, cachedInputTokens, and raw data preserved in a raw field
Backward compatible: All existing V1 and V2 models continue to work unchanged.
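A minimal sketch, assuming the beta provider package is installed (the agent fields and model id are illustrative):

```typescript
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai'; // ^3.0.0-beta exposes LanguageModelV3 models

const agent = new Agent({
  name: 'v3-agent',
  instructions: 'You are a helpful assistant.',
  model: openai('gpt-4o-mini'), // accepted alongside existing V1 and V2 models
});
```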
Fix requestContext not being forwarded from middleware in chatRoute and networkRoute (b7b0930)
Previously, when using middleware to set values in requestContext (e.g., extracting agentId and organizationId from the request body), those values were not properly forwarded to agents, tools, and workflows when using chatRoute and networkRoute from the AI SDK.
This fix ensures that requestContext set by middleware is correctly prioritized and forwarded with the following precedence:
Resolves #11192
This change introduces three major breaking changes to the Auth0 authentication provider. These updates make token verification safer, prevent server crashes, and ensure proper authorization checks. (#10632)
- authenticateToken() now fails safely instead of throwing
- authorizeUser() now performs meaningful security checks
These changes improve stability, prevent runtime crashes, and enforce safer authentication & authorization behavior throughout the system.
feat: Add field filtering and nested workflow control to workflow execution result endpoint (#11246)
Adds two optional query parameters to /api/workflows/:workflowId/runs/:runId/execution-result endpoint:
- fields: Request only specific fields (e.g., status, result, error)
- withNestedWorkflows: Control whether to fetch nested workflow data
This significantly reduces response payload size and improves response times for large workflows.
# Get only status (minimal payload - fastest)
GET /api/workflows/:workflowId/runs/:runId/execution-result?fields=status
# Get status and result
GET /api/workflows/:workflowId/runs/:runId/execution-result?fields=status,result
# Get all fields but without nested workflow data (faster)
GET /api/workflows/:workflowId/runs/:runId/execution-result?withNestedWorkflows=false
# Get only specific fields without nested workflow data
GET /api/workflows/:workflowId/runs/:runId/execution-result?fields=status,steps&withNestedWorkflows=false
# Get full data (default behavior)
GET /api/workflows/:workflowId/runs/:runId/execution-result
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:4111' });
const workflow = client.getWorkflow('myWorkflow');
// Get only status (minimal payload - fastest)
const statusOnly = await workflow.runExecutionResult(runId, {
fields: ['status'],
});
console.log(statusOnly.status); // 'success' | 'failed' | 'running' | etc.
// Get status and result
const statusAndResult = await workflow.runExecutionResult(runId, {
fields: ['status', 'result'],
});
// Get all fields but without nested workflow data (faster)
const resultWithoutNested = await workflow.runExecutionResult(runId, {
withNestedWorkflows: false,
});
// Get specific fields without nested workflow data
const optimized = await workflow.runExecutionResult(runId, {
fields: ['status', 'steps'],
withNestedWorkflows: false,
});
// Get full execution result (default behavior)
const fullResult = await workflow.runExecutionResult(runId);
The Workflow.getWorkflowRunExecutionResult method now accepts an options object:
await workflow.getWorkflowRunExecutionResult(runId, {
withNestedWorkflows: false, // default: true, set to false to skip nested workflow data
fields: ['status', 'result'], // optional field filtering
});
The @mastra/inngest package has been updated to use the new options object API. This is a non-breaking internal change - no action required from inngest workflow users.
For workflows with large step outputs:
- status: ~99% reduction in payload size
- status,result,error: ~95% reduction in payload size
- withNestedWorkflows=false: avoids expensive nested workflow data fetching
Fix delayed promises rejecting when stream suspends on tool-call-approval (#11278)
When a stream ends in suspended state (e.g., requiring tool approval), the delayed promises like toolResults, toolCalls, text, etc. now resolve with partial results instead of rejecting with an error. This allows consumers to access data that was produced before the suspension.
Also improves generic type inference for LLMStepResult and related types throughout the streaming infrastructure.
Fixed Convex schema exports to support import in convex/schema.ts files. (#11242)
Previously, importing table definitions from @mastra/convex/server failed in Convex schema files because it transitively imported Node.js runtime modules (crypto, fs, path) that are unavailable in Convex's deploy-time sandbox.
Changes
- New @mastra/convex/schema entry point that provides table definitions without runtime dependencies
- Table definitions moved into a dedicated src/schema.ts file
- @mastra/convex/server re-exports schema definitions from the new location for backward compatibility
Migration
Users should now import schema tables from @mastra/convex/schema instead of @mastra/convex/server in their convex/schema.ts files:
// Before
import { mastraThreadsTable, mastraMessagesTable } from '@mastra/convex/server';
// After
import { mastraThreadsTable, mastraMessagesTable } from '@mastra/convex/schema';
Add support for AI SDK v6 (LanguageModelV3) (#11191)
Agents can now use LanguageModelV3 models from AI SDK v6 beta providers like @ai-sdk/openai@^3.0.0-beta.
New features:
- reasoningTokens, cachedInputTokens, and raw data preserved in a raw field
Backward compatible: All existing V1 and V2 models continue to work unchanged.
Fix model-level and runtime header support for LLM calls (#11275)
This fixes a bug where custom headers configured on models (like anthropic-beta) were not being passed through to the underlying AI SDK calls. The fix properly handles headers from multiple sources with correct priority:
Header Priority (low to high):
Examples that now work:
// Model config headers
new Agent({
model: {
id: 'anthropic/claude-4-5-sonnet',
headers: { 'anthropic-beta': 'context-1m-2025-08-07' },
},
});
// Runtime headers override config
agent.generate('...', {
modelSettings: { headers: { 'x-custom': 'runtime-value' } },
});
// Provider-level headers preserved
const openai = createOpenAI({ headers: { 'openai-organization': 'org-123' } });
new Agent({ model: openai('gpt-4o-mini') });
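Conceptually, the priority merge is a plain object spread where later (higher-priority) sources overwrite earlier ones. This is a simplified illustration, not Mastra's actual implementation; the mergeHeaders helper and its argument names are assumptions:

```typescript
type HeaderMap = Record<string, string>;

// Later sources win: provider-level < model config < runtime modelSettings.
function mergeHeaders(
  providerHeaders: HeaderMap = {},
  modelConfigHeaders: HeaderMap = {},
  runtimeHeaders: HeaderMap = {},
): HeaderMap {
  return { ...providerHeaders, ...modelConfigHeaders, ...runtimeHeaders };
}

const merged = mergeHeaders(
  { 'openai-organization': 'org-123' },          // provider-level
  { 'anthropic-beta': 'context-1m-2025-08-07' }, // model config
  { 'x-custom': 'runtime-value' },               // runtime override
);
```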
Fixed AbortSignal not propagating from parent workflows to nested sub-workflows in the evented workflow engine. (#11142)
Previously, canceling a parent workflow did not stop nested sub-workflows, causing them to continue running and consuming resources after the parent was canceled.
Now, when you cancel a parent workflow, all nested sub-workflows are automatically canceled as well, ensuring clean termination of the entire workflow tree.
Example:
const parentWorkflow = createWorkflow({ id: 'parent-workflow' }).then(someStep).then(nestedChildWorkflow).commit();
const run = await parentWorkflow.createRun();
const resultPromise = run.start({ inputData: { value: 5 } });
// Cancel the parent workflow - nested workflows will also be canceled
await run.cancel();
// or use: run.abortController.abort();
const result = await resultPromise;
// result.status === 'canceled'
// All nested child workflows are also canceled
Related to #11063
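The propagation can be illustrated with plain AbortControllers. This is a sketch of the idea only, not the evented engine's internals, and linkToParent is a hypothetical helper:

```typescript
// Each nested run gets its own AbortController that aborts when the parent does.
function linkToParent(parentSignal: AbortSignal): AbortController {
  const child = new AbortController();
  if (parentSignal.aborted) {
    child.abort(); // parent already canceled: abort the child immediately
  } else {
    parentSignal.addEventListener('abort', () => child.abort(), { once: true });
  }
  return child;
}

const parent = new AbortController();
const nested = linkToParent(parent.signal);
parent.abort(); // canceling the parent also aborts the nested controller
```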
Fix empty overrideScorers causing error instead of skipping scoring (#11257)
When overrideScorers was passed as an empty object {}, the agent would throw a "No scorers found" error. Now an empty object explicitly skips scoring, while undefined continues to use default scorers.
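The decision logic amounts to a three-way distinction, sketched here with illustrative names (resolveScorers is not Mastra's API):

```typescript
type Scorers = Record<string, unknown>;

function resolveScorers(defaults: Scorers, overrideScorers?: Scorers): Scorers {
  if (overrideScorers === undefined) return defaults; // fall back to default scorers
  return overrideScorers; // {} => no scorers => scoring is skipped, no error thrown
}

const defaults = { relevance: {} };
const skipped = resolveScorers(defaults, {}); // scoring explicitly skipped
const usedDefaults = resolveScorers(defaults); // default scorers used
```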
feat: Add field filtering and nested workflow control to workflow execution result endpoint (#11246)
Adds two optional query parameters to /api/workflows/:workflowId/runs/:runId/execution-result endpoint:
- fields: Request only specific fields (e.g., status, result, error)
- withNestedWorkflows: Control whether to fetch nested workflow data
This significantly reduces response payload size and improves response times for large workflows.
# Get only status (minimal payload - fastest)
GET /api/workflows/:workflowId/runs/:runId/execution-result?fields=status
# Get status and result
GET /api/workflows/:workflowId/runs/:runId/execution-result?fields=status,result
# Get all fields but without nested workflow data (faster)
GET /api/workflows/:workflowId/runs/:runId/execution-result?withNestedWorkflows=false
# Get only specific fields without nested workflow data
GET /api/workflows/:workflowId/runs/:runId/execution-result?fields=status,steps&withNestedWorkflows=false
# Get full data (default behavior)
GET /api/workflows/:workflowId/runs/:runId/execution-result
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:4111' });
const workflow = client.getWorkflow('myWorkflow');
// Get only status (minimal payload - fastest)
const statusOnly = await workflow.runExecutionResult(runId, {
fields: ['status'],
});
console.log(statusOnly.status); // 'success' | 'failed' | 'running' | etc.
// Get status and result
const statusAndResult = await workflow.runExecutionResult(runId, {
fields: ['status', 'result'],
});
// Get all fields but without nested workflow data (faster)
const resultWithoutNested = await workflow.runExecutionResult(runId, {
withNestedWorkflows: false,
});
// Get specific fields without nested workflow data
const optimized = await workflow.runExecutionResult(runId, {
fields: ['status', 'steps'],
withNestedWorkflows: false,
});
// Get full execution result (default behavior)
const fullResult = await workflow.runExecutionResult(runId);
The Workflow.getWorkflowRunExecutionResult method now accepts an options object:
await workflow.getWorkflowRunExecutionResult(runId, {
withNestedWorkflows: false, // default: true, set to false to skip nested workflow data
fields: ['status', 'result'], // optional field filtering
});
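Conceptually, the server-side field filtering boils down to picking top-level keys from the full result. A minimal sketch with an assumed pickFields helper:

```typescript
// Keep only the requested top-level fields of an execution result.
function pickFields<T extends Record<string, unknown>>(
  result: T,
  fields?: (keyof T)[],
): Partial<T> {
  if (!fields || fields.length === 0) return result; // default: full payload
  const out: Partial<T> = {};
  for (const f of fields) {
    if (f in result) out[f] = result[f];
  }
  return out;
}

const full = { status: 'success', result: { ok: true }, steps: {} };
const statusOnly = pickFields(full, ['status']); // minimal payload
```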
The @mastra/inngest package has been updated to use the new options object API. This is a non-breaking internal change - no action required from inngest workflow users.
For workflows with large step outputs:
- status: ~99% reduction in payload size
- status,result,error: ~95% reduction in payload size
- withNestedWorkflows=false: Avoids expensive nested workflow data fetching
Removed a debug log that printed large Zod schemas, resulting in cleaner console output when using agents with memory enabled. (#11279)
Set externals: true as the default for mastra build and cloud-deployer to reduce bundle issues with native dependencies. (0dbf199)
Note: If you previously relied on the default bundling behavior (all dependencies bundled), you can explicitly set externals: false in your bundler configuration.
Fixed ReDoS vulnerability in working memory tag parsing. (#11248)
Replaced regex-based parsing with indexOf-based string parsing to prevent denial of service attacks from malicious input. The vulnerable regex /<working_memory>([^]*?)<\/working_memory>/g had O(n²) complexity on pathological inputs - the new implementation maintains O(n) linear time.
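The indexOf-based approach can be sketched as a linear scan; extractWorkingMemory is an illustrative name, not the library's function:

```typescript
// O(n) extraction of <working_memory>…</working_memory> contents.
function extractWorkingMemory(input: string): string[] {
  const open = '<working_memory>';
  const close = '</working_memory>';
  const found: string[] = [];
  let from = 0;
  while (true) {
    const start = input.indexOf(open, from);
    if (start === -1) break;
    const end = input.indexOf(close, start + open.length);
    if (end === -1) break; // unclosed tag: stop scanning
    found.push(input.slice(start + open.length, end));
    from = end + close.length;
  }
  return found;
}

const blocks = extractWorkingMemory(
  'a<working_memory>x</working_memory>b<working_memory>y</working_memory>',
);
// blocks => ['x', 'y']
```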
Change searchbar to search on input with debounce instead of on Enter key press (#11138)
Add execution metadata to A2A message/send responses. The A2A protocol now returns detailed execution information including tool calls, tool results, token usage, and finish reason in the task metadata. This allows clients to inspect which tools were invoked during agent execution and access execution statistics without additional queries. (#11241)
Two smaller quality of life improvements: (#11232)
- create-mastra projects no longer define a LibSQLStore storage for the weather agent memory. They use the root-level storage option now (which is in-memory), so no mastra.db files are created outside of the project
- Running mastra init inside a project that already has git initialized skips the prompt to initialize git
Full Changelog: f6c82ec
Add onFinish and onError lifecycle callbacks to workflow options (#11200)
Workflows now support lifecycle callbacks for server-side handling of workflow completion and errors:
- onFinish: Called when the workflow completes with any status (success, failed, suspended, tripwire)
- onError: Called only when the workflow fails (failed or tripwire status)
const workflow = createWorkflow({
id: 'my-workflow',
inputSchema: z.object({ ... }),
outputSchema: z.object({ ... }),
options: {
onFinish: async (result) => {
// Handle any workflow completion
await updateJobStatus(result.status);
},
onError: async (errorInfo) => {
// Handle workflow failures
await logError(errorInfo.error);
},
},
});
Both callbacks support sync and async functions. Callback errors are caught and logged, not propagated to the workflow result.
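A sketch of how such callbacks might be dispatched (illustrative types and names, not Mastra's internals); note that callback errors are caught rather than rethrown:

```typescript
type WorkflowResult = { status: 'success' | 'failed' | 'suspended' | 'tripwire'; error?: unknown };
type WorkflowOptions = {
  onFinish?: (result: WorkflowResult) => void | Promise<void>;
  onError?: (info: { error: unknown }) => void | Promise<void>;
};

async function runCallbacks(result: WorkflowResult, options: WorkflowOptions): Promise<void> {
  try {
    await options.onFinish?.(result); // fires for every terminal status
    if (result.status === 'failed' || result.status === 'tripwire') {
      await options.onError?.({ error: result.error });
    }
  } catch (err) {
    // swallowed and logged, never propagated to the workflow result
    console.error('workflow callback threw:', err);
  }
}
```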
Full Changelog: 79c7a87
Embed AI types to fix peerdeps mismatches (9650cce)
Improve JSDoc comments for toAISdkV5Messages, toAISdkV4Messages functions (#11119)
Fixed duplicate assistant messages appearing when using useChat with memory enabled. (#11195)
What was happening: When using useChat with chatRoute and memory, assistant messages were being duplicated in storage after multiple conversation turns. This occurred because the backend-generated message ID wasn't being sent back to useChat, causing ID mismatches during deduplication.
What changed:
- useChat uses the same ID as storage
- data-* parts (from writer.custom()) are now preserved when messages contain V5 tool parts
Fixes #11091
add requestContext support to networkRoute (#11164)
Preserve error details when thrown from workflow steps (#10992)
Workflow errors now retain custom properties like statusCode, responseHeaders, and cause chains. This enables error-specific recovery logic in your applications.
Before:
const result = await workflow.execute({ input });
if (result.status === 'failed') {
// Custom error properties were lost
console.log(result.error); // "Step execution failed" (just a string)
}
After:
const result = await workflow.execute({ input });
if (result.status === 'failed') {
// Custom properties are preserved
console.log(result.error.message); // "Step execution failed"
console.log(result.error.statusCode); // 429
console.log(result.error.cause?.name); // "RateLimitError"
}
Type change: WorkflowState.error and WorkflowRunState.error types changed from string | Error to SerializedError.
Other changes:
- UpdateWorkflowStateOptions type for workflow state updates
fix: make getSqlType consistent across storage adapters (#11112)
- Use getSqlType() in createTable instead of toUpperCase()
- Use getSqlType() in createTable; return JSONB for the jsonb type (matches SQLite 3.45+ support)
- Use getSqlType() in createTable instead of the COLUMN_TYPES constant; add missing types (uuid, float, boolean)
- Remove getSqlType() and getDefaultValue() from the MastraStorage base class (all stores use the StoreOperations versions)
Add resourceId to workflow routes (#11166)
Add Run instance to client-js. workflow.createRun returns the Run instance which can be used for the different run methods. (#11207)
With this change, run methods can no longer be called directly on the workflow instance:
- const result = await workflow.stream({ runId: '123', inputData: { ... } });
+ const run = await workflow.createRun({ runId: '123' });
+ const stream = await run.stream({ inputData: { ... } });
Deserialize workflow errors on the client side (#10992)
When workflows fail, the server sends error data as JSON over HTTP. This change deserializes those errors back to proper Error instances on the client.
Before:
const result = await workflow.startAsync({ input });
if (result.status === 'failed') {
// result.error was a plain object, couldn't use instanceof
console.log(result.error.message); // TypeScript error
}
After:
const result = await workflow.startAsync({ input });
if (result.status === 'failed') {
// result.error is now a proper Error instance
if (result.error instanceof MyCustomError) {
console.log(result.error.statusCode); // Works!
}
}
This enables proper error handling and type checking in client applications, allowing developers to implement error-specific recovery logic based on custom error types and properties.
Features:
- instanceof Error checks
- error.message, error.name, error.stack
- Custom properties (statusCode, responseHeaders)
- error.cause
Affected methods:
- startAsync()
- resumeAsync()
- restartAsync()
- timeTravelAsync()
Add missing status parameter to workflow.runs() method (#11095)
The status parameter was supported by the server API but was missing from the TypeScript types in @mastra/client-js.
Now you can filter workflow runs by status:
// Get only running workflows
const runningRuns = await workflow.runs({ status: 'running' });
// Get completed workflows
const completedRuns = await workflow.runs({ status: 'success' });
Remove redundant toolCalls from network agent finalResult (#11189)
The network agent's finalResult was storing toolCalls separately even though all tool call information is already present in the messages array (as tool-call and tool-result type messages). This caused significant token waste since the routing agent reads this data from memory on every iteration.
Before: finalResult: { text, toolCalls, messages }
After: finalResult: { text, messages }
Migration: If you were accessing finalResult.toolCalls, retrieve tool calls from finalResult.messages by filtering for messages with type: 'tool-call'.
Updated @mastra/react to extract tool calls directly from the messages array instead of the removed toolCalls field when resolving initial messages from memory.
Fixes #11059
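The migration can be sketched as a simple filter over the messages array (the message shape here is a simplified assumption):

```typescript
type NetworkMessage = { type: string; toolName?: string; args?: unknown };

// Recover tool calls from finalResult.messages instead of the removed toolCalls field.
function getToolCalls(messages: NetworkMessage[]): NetworkMessage[] {
  return messages.filter(m => m.type === 'tool-call');
}

const finalResult = {
  text: 'done',
  messages: [
    { type: 'text' },
    { type: 'tool-call', toolName: 'weather', args: { city: 'Berlin' } },
    { type: 'tool-result', toolName: 'weather' },
  ],
};
const toolCalls = getToolCalls(finalResult.messages);
```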
Fix "invalid state: Controller is already closed" error (932d63d)
Fixes #11005
Fix HITL (Human-In-The-Loop) tool execution bug when mixing tools with and without execute functions. (#11178)
When an agent called multiple tools simultaneously where some had execute functions and others didn't (HITL tools expecting addToolResult from the frontend), the HITL tools would incorrectly receive result: undefined and be marked as "output-available" instead of "input-available". This caused the agent to continue instead of pausing for user input.
Auto resume suspended tools if autoResumeSuspendedTools: true (#11157)
The flag can be added to defaultAgentOptions when creating the agent or to options in agent.stream or agent.generate
const agent = new Agent({
//...agent information,
defaultAgentOptions: {
autoResumeSuspendedTools: true,
},
});
Preserve error details when thrown from workflow steps (#10992)
- Step errors now preserve the cause chain and custom properties
- New SerializedError type with proper cause chain support
- New SerializedStepResult and SerializedStepFailure types for handling errors loaded from storage
- New addErrorToJSON helper to recursively serialize error cause chains with max depth protection
- New hydrateSerializedStepErrors helper to convert serialized errors back to Error instances
- NonRetriableError.cause is preserved
Move @ai-sdk/azure to devDependencies (#10218)
Refactor internal event system from Emitter to PubSub abstraction for workflow event handling. This change replaces the EventEmitter-based event system with a pluggable PubSub interface, enabling support for distributed workflow execution backends like Inngest. Adds close() method to PubSub implementations for proper cleanup. (#11052)
Add startAsync() method and fix Inngest duplicate workflow execution bug (#11093)
New Feature: startAsync() for fire-and-forget workflow execution
- Added Run.startAsync() to the base workflow class - starts the workflow in the background and returns { runId } immediately
- EventedRun.startAsync() - publishes the workflow start event without subscribing for completion
- InngestRun.startAsync() - sends the Inngest event without polling for the result
Bug Fix: Prevent duplicate Inngest workflow executions
- getRuns() now properly handles rate limits (429), empty responses, and JSON parse errors with retry logic and exponential backoff
- getRunOutput() now throws NonRetriableError when polling fails, preventing Inngest from retrying the parent function and re-triggering the workflow
- Added a timeout to getRunOutput() polling (default 5 minutes) with NonRetriableError on timeout
This fixes a production issue where polling failures after successful workflow completion caused Inngest to retry the parent function, which fired a new workflow event and resulted in duplicate executions (e.g., duplicate Slack messages).
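The polling fix can be illustrated with a generic retry sketch (simplified, not Inngest's API; attempt counts and delays are placeholders):

```typescript
// Throwing a non-retriable error stops the caller from re-running the whole
// workflow - the duplicate-execution bug described above.
class NonRetriableError extends Error {}

async function pollWithRetry<T>(
  fn: () => Promise<T>,
  { maxAttempts = 3, baseDelayMs = 1 }: { maxAttempts?: number; baseDelayMs?: number } = {},
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch {
      // exponential backoff between attempts
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw new NonRetriableError('polling timed out');
}
```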
Fix Zod 4 compatibility issue with structuredOutput in agent.generate() (#11133)
Users with Zod 4 installed would see TypeError: undefined is not an object (evaluating 'def.valueType._zod') when using structuredOutput with agent.generate(). This happened because ProcessorStepSchema contains z.custom() fields that hold user-provided Zod schemas, and the workflow validation was trying to deeply validate these schemas causing version conflicts.
The fix disables input validation for processor workflows since z.custom() fields are meant to pass through arbitrary types without deep validation.
Truncate map config when too long (#11175)
Add helpful JSDoc comments to BundlerConfig properties (used with bundler option) (#10218)
Fixes the .network() method ignoring MASTRA_RESOURCE_ID_KEY from requestContext (4524734)
Fix workflow cancel not updating status when workflow is suspended (#11139)
Run.cancel() now updates the workflow status to 'canceled' in storage, resolving the issue where suspended workflows remained in 'suspended' status after cancellation.
What changed: (#10998)
Support for sequential tool execution was added. Tool call concurrency is now set conditionally, defaulting to 1 when sequential execution is needed (to avoid race conditions that interfere with human-in-the-loop approval during the workflow) rather than the default of 10 when concurrency is acceptable.
How it was changed:
A sequentialExecutionRequired flag is computed based on whether any tool in the returned agentic execution workflow requires approval. If any tool has a suspendSchema property (used for conditionally suspending execution and waiting for human input), or has requireApproval set to true, the concurrency for the toolCallStep is set to 1, forcing sequential execution. Otherwise the old default of 10 remains.
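The decision above can be sketched as follows (ToolDef is a simplified stand-in for the real tool type):

```typescript
type ToolDef = { requireApproval?: boolean; suspendSchema?: unknown };

function toolCallConcurrency(tools: ToolDef[]): number {
  // sequential execution is required when any tool needs human approval
  const sequentialExecutionRequired = tools.some(
    t => t.requireApproval === true || t.suspendSchema !== undefined,
  );
  return sequentialExecutionRequired ? 1 : 10; // 1 = sequential, 10 = old default
}
```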
Remove deprecated playground-only prompt generation handler (functionality moved to @mastra/server) (#11074)
Improve prompt enhancement UX: show toast errors when enhancement fails, disable button when no model has a configured API key, and prevent users from disabling all models in the model list
Add missing /api/agents/:agentId/instructions/enhance endpoint that was referenced by @mastra/client-js and @mastra/playground-ui
Allow for bundler.externals: true to be set. (#10218)
With this configuration during mastra build all dependencies (except workspace dependencies) will be treated as "external" and not bundled. Instead they will be added to the .mastra/output/package.json file.
Preserve error details when thrown from workflow steps (#10992)
Workflow errors now retain custom properties like statusCode, responseHeaders, and cause chains. This enables error-specific recovery logic in your applications.
Before:
const result = await workflow.execute({ input });
if (result.status === 'failed') {
// Custom error properties were lost
console.log(result.error); // "Step execution failed" (just a string)
}
After:
const result = await workflow.execute({ input });
if (result.status === 'failed') {
// Custom properties are preserved
console.log(result.error.message); // "Step execution failed"
console.log(result.error.statusCode); // 429
console.log(result.error.cause?.name); // "RateLimitError"
}
Type change: WorkflowState.error and WorkflowRunState.error types changed from string | Error to SerializedError.
Other changes:
- Added UpdateWorkflowStateOptions type for workflow state updates (9650cce)
- Errors now preserve their cause chain and custom properties
- New SerializedError type with proper cause chain support
- New SerializedStepResult and SerializedStepFailure types for handling errors loaded from storage
- Added addErrorToJSON to recursively serialize error cause chains with max depth protection
- Added hydrateSerializedStepErrors to convert serialized errors back to Error instances
- NonRetriableError.cause is preserved

Refactor internal event system from Emitter to PubSub abstraction for workflow event handling. This change replaces the EventEmitter-based event system with a pluggable PubSub interface, enabling support for distributed workflow execution backends like Inngest. Adds close() method to PubSub implementations for proper cleanup. (#11052)
Add startAsync() method and fix Inngest duplicate workflow execution bug (#11093)
New Feature: startAsync() for fire-and-forget workflow execution
- Added Run.startAsync() to the base workflow class - starts the workflow in the background and returns { runId } immediately
- EventedRun.startAsync() - publishes the workflow start event without subscribing for completion
- InngestRun.startAsync() - sends the Inngest event without polling for the result

Bug Fix: Prevent duplicate Inngest workflow executions
- Updated getRuns() to properly handle rate limits (429), empty responses, and JSON parse errors with retry logic and exponential backoff
- Updated getRunOutput() to throw NonRetriableError when polling fails, preventing Inngest from retrying the parent function and re-triggering the workflow
- Added a timeout to getRunOutput() polling (default 5 minutes) with NonRetriableError on timeout

This fixes a production issue where polling failures after successful workflow completion caused Inngest to retry the parent function, which fired a new workflow event and resulted in duplicate executions (e.g., duplicate Slack messages).
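The retry-with-exponential-backoff behavior described for getRuns() follows a common pattern, sketched below in self-contained form. This is purely illustrative, not the actual Inngest client code:

```typescript
// Retry transient failures (e.g. 429s, parse errors) with growing delays,
// and give up after maxAttempts by rethrowing the last error.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // 100ms, 200ms, 400ms, ... between attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Example: a call that succeeds on the third attempt.
let calls = 0;
const result = await withBackoff(async () => {
  calls++;
  if (calls < 3) throw new Error('429 Too Many Requests');
  return 'ok';
});
console.log(result, calls); // "ok" 3
```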
fix: make getSqlType consistent across storage adapters (#11112)
- Use getSqlType() in createTable instead of toUpperCase()
- Use getSqlType() in createTable, return JSONB for the jsonb type (matches SQLite 3.45+ support)
- Use getSqlType() in createTable instead of the COLUMN_TYPES constant, add missing types (uuid, float, boolean)
- Remove getSqlType() and getDefaultValue() from the MastraStorage base class (all stores use the StoreOperations versions)

Embed AI types to fix peerdeps mismatches (9650cce)
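A single shared type mapping, rather than per-adapter toUpperCase()/COLUMN_TYPES logic, might look like the sketch below. The column type names mirror those mentioned above (uuid, float, boolean, jsonb); the overall shape is an assumption, not Mastra's actual implementation:

```typescript
// Illustrative getSqlType(): one consistent mapping used by createTable.
type ColumnType =
  | 'text' | 'integer' | 'bigint' | 'timestamp'
  | 'uuid' | 'float' | 'boolean' | 'jsonb';

function getSqlType(type: ColumnType): string {
  switch (type) {
    case 'text': return 'TEXT';
    case 'integer': return 'INTEGER';
    case 'bigint': return 'BIGINT';
    case 'timestamp': return 'TIMESTAMP';
    case 'uuid': return 'UUID';
    case 'float': return 'FLOAT';
    case 'boolean': return 'BOOLEAN';
    case 'jsonb': return 'JSONB'; // also supported by SQLite 3.45+
  }
  // Exhaustiveness guard for future column types.
  throw new Error(`unknown column type: ${type}`);
}

console.log(getSqlType('jsonb')); // "JSONB"
```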
Fix crash in updateMessageToHideWorkingMemoryV2 when message.content is not a V2 object. Added defensive type guards before spreading content to handle legacy or malformed message formats. (#11180)
Moved zod from dependencies to devDependencies as users should install it themselves to avoid version conflicts. (#11114)
Adds tool/workflow error being surfaced to the side panel in the playground (#11099)
Auto resume suspended tools if autoResumeSuspendedTools: true (#11157)
The flag can be added to defaultAgentOptions when creating the agent or to options in agent.stream or agent.generate
const agent = new Agent({
//...agent information,
defaultAgentOptions: {
autoResumeSuspendedTools: true,
},
});
Fix trace-span-usage component to handle object values in token usage data. Usage objects can contain nested inputDetails and outputDetails properties which are objects, not numbers. The component now properly type-checks values and renders object properties as nested key-value pairs. (#11141)
Add Run instance to client-js. workflow.createRun returns the Run instance which can be used for the different run methods. (#11207)
With this change, run methods can no longer be called directly on the workflow instance:
- const result = await workflow.stream({ runId: '123', inputData: { ... } });
+ const run = await workflow.createRun({ runId: '123' });
+ const stream = await run.stream({ inputData: { ... } });
Focus the textarea when clicking anywhere in the entire chat prompt input box (#11160)
Removes redundant "Working Memory" section from the memory config panel (already displayed in the dedicated working memory component) (#11104). Fixes badge rendering for falsy values by using ?? instead of || (e.g., false was incorrectly displayed as an empty string). Adds a tooltip on the disabled "Edit Working Memory" button explaining that working memory becomes available after the agent calls updateWorkingMemory.
Workflow step detail panel improvements (#11134)
Fix agent default settings not being applied in playground (#11107)
- Map maxOutputTokens (AI SDK v5) to maxTokens for UI compatibility
- Add seed parameter support to model settings
- Apply defaultOptions.modelSettings on load

fix isTopLevelSpan value definition on SpanScoring to properly recognize lack of span?.parentSpanId value (null or empty string) (#11083)
Remove redundant toolCalls from network agent finalResult (#11189)
The network agent's finalResult was storing toolCalls separately even though all tool call information is already present in the messages array (as tool-call and tool-result type messages). This caused significant token waste since the routing agent reads this data from memory on every iteration.
Before: finalResult: { text, toolCalls, messages }
After: finalResult: { text, messages }
Migration: If you were accessing finalResult.toolCalls, retrieve tool calls from finalResult.messages by filtering for messages with type: 'tool-call'.
Updated @mastra/react to extract tool calls directly from the messages array instead of the removed toolCalls field when resolving initial messages from memory.
Fixes #11059
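The migration above amounts to a simple filter. The NetworkMessage shape here is a minimal assumption based only on the 'tool-call' type mentioned in the migration note:

```typescript
// Hypothetical sketch: extract tool calls from finalResult.messages now that
// finalResult.toolCalls has been removed.
interface NetworkMessage {
  type: 'text' | 'tool-call' | 'tool-result';
  [key: string]: unknown;
}

function getToolCalls(messages: NetworkMessage[]): NetworkMessage[] {
  return messages.filter(m => m.type === 'tool-call');
}

const messages: NetworkMessage[] = [
  { type: 'text', content: 'Looking that up...' },
  { type: 'tool-call', toolName: 'weather' },
  { type: 'tool-result', toolName: 'weather' },
];
console.log(getToolCalls(messages).length); // 1
```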
Add resourceId to workflow routes (#11166)
Fix MastraServer.tools typing to accept tools with input schemas (5118f38)
Fixes issue #11185 where MastraServer.tools was rejecting tools created with createTool({ inputSchema }). Changed the tools property type from Record<string, Tool> to ToolsInput to accept tools with any schema types (input, output, or none), as well as Vercel AI SDK tools and provider-defined tools.
Tools created with createTool({ inputSchema: z.object(...) }) now work without TypeScript errors.
Fix the development experience of the studio. It was not able to resolve the running instance because the index.html variables were not replaced in the vite dev standalone config (#11085)
Support new Workflow tripwire run status. Tripwires that are thrown from within a workflow will now bubble up and return a graceful state with information about tripwires. (#10947)
When a workflow contains an agent step that triggers a tripwire, the workflow returns with status: 'tripwire' and includes tripwire details:
const run = await workflow.createRun();
const result = await run.start({ inputData: { message: 'Hello' } });
if (result.status === 'tripwire') {
console.log('Workflow terminated by tripwire:', result.tripwire?.reason);
console.log('Processor ID:', result.tripwire?.processorId);
console.log('Retry requested:', result.tripwire?.retry);
}
Adds new UI state for tripwire in agent chat and workflow UI.
This is distinct from status: 'failed' which indicates an unexpected error. A tripwire status means a processor intentionally stopped execution (e.g., for content moderation).
Fix type safety for message ordering - restrict orderBy to only accept 'createdAt' field (#11069)
Messages don't have an updatedAt field, but the previous type allowed ordering by it, which would return empty results. This change adds compile-time type safety by making StorageOrderBy generic and restricting StorageListMessagesInput.orderBy to only accept 'createdAt'. The API validation schemas have also been updated to reject invalid orderBy values at runtime.
Loosen tools types in processInputStep / prepareStep. (#11071)
Added the ability to provide a base path for Mastra Studio. (#10441)
import { Mastra } from '@mastra/core';
export const mastra = new Mastra({
server: {
studioBase: '/my-mastra-studio',
},
});
This will make Mastra Studio available at http://localhost:4111/my-mastra-studio.
Expand processInputStep processor method and integrate prepareStep as a processor (#10774)
New Features:
- The prepareStep callback now runs through the standard processInputStep pipeline
- processInputStep can now modify model, tools, toolChoice, activeTools, messages, systemMessages, providerOptions, modelSettings, and structuredOutput

Breaking Change:
- The prepareStep messages format changed from AI SDK v5 model messages to MastraDBMessage format
- Use messageList.get.all.aiV5.model() if you need the old format

Multiple Processor improvements including: (#10947)
- processOutputStep added, which runs after every step.

What's new:
1. Retry mechanism with LLM feedback - Processors can now request retries with feedback that gets sent back to the LLM:
processOutputStep: async ({ text, abort, retryCount }) => {
if (isLowQuality(text)) {
abort('Response quality too low', { retry: true, metadata: { score: 0.6 } });
}
return [];
};
Configure with maxProcessorRetries (default: 3). Rejected steps are preserved in result.steps[n].tripwire. Retries are only available in processOutputStep and processInputStep. It will replay the step with additional context added.
2. Workflow orchestration for processors - Processors can now be composed using workflow primitives:
import { createStep, createWorkflow } from '@mastra/core/workflows';
import {
ProcessorStepSchema,
} from '@mastra/core/processors';
const moderationWorkflow = createWorkflow({ id: 'moderation', inputSchema: ProcessorStepSchema, outputSchema: ProcessorStepSchema })
.then(createStep(new lengthValidator({...})))
.parallel([createStep(new piiDetector({...})), createStep(new toxicityChecker({...}))])
.commit();
const agent = new Agent({ inputProcessors: [moderationWorkflow] });
Every processor array that gets passed to an agent gets added as a workflow.
3. Extended tripwire API - abort() now accepts options for retry control and typed metadata:
abort('reason', { retry: true, metadata: { score: 0.8, category: 'quality' } });
4. New processOutputStep method - Per-step output processing with access to step number, finish reason, tool calls, and retry count.
5. Workflow tripwire status - Workflows now have a 'tripwire' status distinct from 'failed', properly bubbling up processor rejections.
Allow running mastra studio from anywhere in the file system, not necessarily inside a mastra project (#11067)
Make sure to verify that a mastra instance is running on server.port, or 4111 by default (#11066)
Internal changes to enable a custom base path for Mastra Studio (#10441)
Add delete workflow run API (#10991)
await workflow.deleteWorkflowRunById(runId);
Add support for typed structured output in agent workflow steps (#11014)
When wrapping an agent with createStep() and providing a structuredOutput.schema, the step's outputSchema is now correctly inferred from the provided schema instead of defaulting to { text: string }.
This enables type-safe chaining of agent steps with structured output to subsequent steps:
const articleSchema = z.object({
title: z.string(),
summary: z.string(),
tags: z.array(z.string()),
});
// Agent step with structured output - outputSchema is now articleSchema
const agentStep = createStep(agent, {
structuredOutput: { schema: articleSchema },
});
// Next step can receive the structured output directly
const processStep = createStep({
id: 'process',
inputSchema: articleSchema, // Matches agent's outputSchema
outputSchema: z.object({ tagCount: z.number() }),
execute: async ({ inputData }) => ({
tagCount: inputData.tags.length, // Fully typed!
}),
});
workflow.then(agentStep).then(processStep).commit();
When structuredOutput is not provided, the agent step continues to use the default { text: string } output schema.
Fixed a bug where multiple tools streaming output simultaneously could fail with "WritableStreamDefaultWriter is locked" errors. Tool streaming now works reliably during concurrent tool executions. (#10830)
Fixed CachedToken tracking in all Observability Exporters. Also fixed TimeToFirstToken in Langfuse, Braintrust, PostHog exporters. Fixed trace formatting in Posthog Exporter. (#11029)
fix: persist data-* chunks from writer.custom() to memory storage (#10884)
- Persist custom data chunks (data-* parts) emitted via writer.custom() in tools
- Updated @assistant-ui/react to v0.11.47 with native DataMessagePart support
- Convert data-* parts to the DataMessagePart format ({ type: 'data', name: string, data: T })
- Updated @assistant-ui/* packages for compatibility

Fixed double validation bug that prevented Zod transforms from working correctly in tool schemas. (#11025)
When tools with Zod .transform() or .pipe() in their outputSchema were executed through the Agent pipeline, validation was happening twice - once in Tool.execute() (correct) and again in CoreToolBuilder (incorrect). The second validation received already-transformed data but expected pre-transform data, causing validation errors.
This fix enables proper use of Zod transforms in both inputSchema (for normalizing/cleaning input data) and outputSchema (for transforming output data to be LLM-friendly).
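The failure mode is easy to see without zod at all: a schema whose parse step also transforms its input will reject its own output. This minimal sketch mimics a transform-carrying schema; it is illustrative, not Mastra's validation code:

```typescript
// A "schema" that validates a numeric string and transforms it to a number,
// mimicking z.string().regex(/^\d+$/).transform(Number).
const numericString = {
  parse(value: unknown): number {
    if (typeof value !== 'string' || !/^\d+$/.test(value)) {
      throw new Error('expected a numeric string');
    }
    return Number(value); // the "transform" step
  },
};

// First validation (what Tool.execute() correctly does): succeeds.
const once = numericString.parse('42');
console.log(once); // 42

// Second validation (what CoreToolBuilder incorrectly did): it receives the
// already-transformed number, which no longer matches the pre-transform
// shape, so it throws.
try {
  numericString.parse(once);
} catch (err) {
  console.log((err as Error).message); // "expected a numeric string"
}
```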
- Renamed the RuntimeContext type to ServerContext to avoid confusion with the user-facing RequestContext (previously called RuntimeContext)
- Removed the playground and isDev options from server adapter constructors - these only set context variables without any actual functionality

@mastra/server
- Renamed the RuntimeContext type to ServerContext in route handler types
- Renamed createTestRuntimeContext to createTestServerContext in test utilities
- Renamed the isPlayground parameter to isStudio in the formatAgent function

@mastra/hono
- Removed playground and isDev from the HonoVariables type
- Removed the playground and isDev context variables in middleware

@mastra/express
- Removed playground and isDev from the Express.Locals interface
- Removed playground and isDev in response locals
Internal code refactoring (#10830)
Add halfvec type support for large dimension embeddings (#11002)
Adds vectorType option to createIndex() for choosing between full precision (vector) and half precision (halfvec) storage. halfvec uses 2 bytes per dimension instead of 4, enabling indexes on embeddings up to 4000 dimensions.
await pgVector.createIndex({
indexName: 'large-embeddings',
dimension: 3072, // text-embedding-3-large
metric: 'cosine',
vectorType: 'halfvec',
});
Requires pgvector >= 0.7.0 for halfvec support. Docker compose files updated to use pgvector 0.8.0.
Replace useMessagePart hook with useAssistantState (#11039)
Remove console.log in playground-ui (#11004)
Use the hash-based stringification mechanism of TanStack Query to ensure key ordering (and to keep the caching key valid and consistent) (#11008)
Add the ability to pass options down to the TanStack Query client. The goal is to make requests cacheable in cloud, which isn't possible today because context resolution differs (#11026)
fix: persist data-* chunks from writer.custom() to memory storage (#10884)
- Persist custom data chunks (data-* parts) emitted via writer.custom() in tools
- Update @assistant-ui/react to v0.11.47 with native DataMessagePart support
- Convert data-* parts to the DataMessagePart format ({ type: 'data', name: string, data: T })
- Update @assistant-ui/* packages for compatibility

Renamed the RuntimeContext type to ServerContext to avoid confusion with the user-facing RequestContext (previously called RuntimeContext). Removed the playground and isDev options from server adapter constructors; these only set context variables without any actual functionality.
Add output format support to ElevenLabs voice integration (#11027)
The speak() method now supports specifying audio output formats via the outputFormat option. This enables telephony and VoIP use cases that require specific audio formats like μ-law (ulaw_8000) or PCM formats.
import { ElevenLabsVoice } from '@mastra/voice-elevenlabs';
const voice = new ElevenLabsVoice();
// Generate speech with telephony format (μ-law 8kHz)
const stream = await voice.speak('Hello from Mastra!', {
outputFormat: 'ulaw_8000',
});
// Generate speech with PCM format
const pcmStream = await voice.speak('Hello from Mastra!', {
outputFormat: 'pcm_16000',
});
Supported formats include:
- MP3: mp3_22050_32, mp3_44100_32, mp3_44100_64, mp3_44100_96, mp3_44100_128, mp3_44100_192
- PCM: pcm_8000, pcm_16000, pcm_22050, pcm_24000, pcm_44100
- μ-law/A-law: ulaw_8000, alaw_8000 (8kHz for VoIP/telephony)
- WAV: wav, wav_8000, wav_16000

If outputFormat is not specified, the method defaults to ElevenLabs' default format (typically mp3_44100_128).
Return NetworkDataPart on each agent-execution-event and workflow-execution-event in network streams (#10979)
Fixed tool-call-suspended chunks being dropped in workflow-step-output when using AI SDK. Previously, when an agent inside a workflow step called a tool that got suspended, the tool-call-suspended chunk was not received on the frontend even though tool-input-available chunks were correctly received. (#10988)
The issue occurred because tool-call-suspended was not included in the isMastraTextStreamChunk list, causing it to be filtered out in transformWorkflow. Now tool-call-suspended, tool-call-approval, object, and tripwire chunks are properly included in the text stream chunk list and will be transformed and passed through correctly.
Fixes #10978
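The allow-list change can be pictured with a small sketch. Only the four added chunk types are named in this entry; the other set members and the function name below are illustrative placeholders, not Mastra's actual list:

```typescript
// Illustrative allow-list: a chunk passes through to the text stream only
// if its type is listed. The four types under the comment are the ones this
// fix adds; the earlier entries are placeholders for the pre-existing list.
const textStreamChunkTypes = new Set<string>([
  'text-delta', // placeholder for pre-existing entries
  'tool-input-available',
  // Added by this fix:
  'tool-call-suspended',
  'tool-call-approval',
  'object',
  'tripwire',
]);

const isTextStreamChunk = (chunk: { type: string }): boolean =>
  textStreamChunkTypes.has(chunk.type);

// Before the fix, a suspended tool call would have been filtered out here.
const suspendedPassesThrough = isTextStreamChunk({ type: 'tool-call-suspended' });
```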
Fixed bundling to correctly exclude subpath imports of external packages. Previously, when a package like lodash was marked as external, subpath imports such as lodash/merge were still being bundled incorrectly. Now all subpaths are properly excluded. (#10596)
Fixes #10055
Improved error messages when bundling fails during deployment. (#10997)
Fix MCPClient automatic reconnection when session becomes invalid (#10993)
When an MCP server restarts, the session ID becomes invalid causing "Bad Request: No valid session ID provided" errors. The MCPClient now automatically detects session-related errors, reconnects to the server, and retries the tool call.
This fix addresses issue #7675 where MCPClient would fail to reconnect after an MCP server went offline and came back online.
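The reconnect-and-retry behavior can be sketched as follows. Names are illustrative and the flow is shown synchronously for brevity; the real client is asynchronous:

```typescript
// Hypothetical sketch: detect a session-related error message, reconnect,
// and retry the call once. Not MCPClient's actual implementation.
function withSessionRetry<T>(call: () => T, reconnect: () => void): T {
  try {
    return call();
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    if (/no valid session id|not connected/i.test(message)) {
      reconnect(); // re-establish the session, then retry once
      return call();
    }
    throw err; // unrelated errors still propagate
  }
}

// Simulated server restart: the call fails until we "reconnect".
let sessionValid = false;
const callTool = () => {
  if (!sessionValid) throw new Error('Bad Request: No valid session ID provided');
  return 'tool result';
};
const result = withSessionRetry(callTool, () => { sessionValid = true; });
```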
Return NetworkDataPart on each agent-execution-event and workflow-execution-event in network streams (#10982)
Fixed tool-call-suspended chunks being dropped in workflow-step-output when using AI SDK. Previously, when an agent inside a workflow step called a tool that got suspended, the tool-call-suspended chunk was not received on the frontend even though tool-input-available chunks were correctly received. (#10987)
The issue occurred because tool-call-suspended was not included in the isMastraTextStreamChunk list, causing it to be filtered out in transformWorkflow. Now tool-call-suspended, tool-call-approval, object, and tripwire chunks are properly included in the text stream chunk list and will be transformed and passed through correctly.
Fixes #10978
Adds withMastra() for wrapping AI SDK models with Mastra processors and memory. (#10911)
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
const model = withMastra(openai('gpt-4o'), {
inputProcessors: [myGuardProcessor],
outputProcessors: [myLoggingProcessor],
memory: {
storage,
threadId: 'thread-123',
resourceId: 'user-123',
lastMessages: 10,
},
});
const { text } = await generateText({ model, prompt: 'Hello!' });
Works with generateText, streamText, generateObject, and streamObject.
Fixed the saveMessageToMemory return type to match the API response. The method now correctly returns { messages: (MastraMessageV1 | MastraDBMessage)[] } instead of (MastraMessageV1 | MastraDBMessage)[] to align with the server endpoint response schema. (#10996)
Fixed an issue where [native code] was incorrectly added to the output (#10971)
Add stored agents support (#10953)
Agents can now be stored in the database and loaded at runtime. This lets you persist agent configurations and dynamically create executable Agent instances from storage.
import { Mastra } from '@mastra/core';
import { LibSQLStore } from '@mastra/libsql';
const mastra = new Mastra({
storage: new LibSQLStore({ url: ':memory:' }),
tools: { myTool },
scorers: { myScorer },
});
// Create agent in storage via API or directly
await mastra.getStorage().createAgent({
agent: {
id: 'my-agent',
name: 'My Agent',
instructions: 'You are helpful',
model: { provider: 'openai', name: 'gpt-4' },
tools: { myTool: {} },
scorers: { myScorer: { sampling: { type: 'ratio', rate: 0.5 } } },
},
});
// Load and use the agent
const agent = await mastra.getStoredAgentById('my-agent');
const response = await agent.generate({ messages: 'Hello!' });
// List all stored agents with pagination
const { agents, total, hasMore } = await mastra.listStoredAgents({
page: 0,
perPage: 10,
});
Also adds a memory registry to Mastra so stored agents can reference memory instances by key.
Add agentId and agentName attributes to MODEL_GENERATION spans. This allows users to correlate gen_ai.usage metrics with specific agents when analyzing LLM operation spans. The attributes are exported as gen_ai.agent.id and gen_ai.agent.name in the OtelExporter. (#10984)
Fix JSON parsing errors when LLMs output unescaped newlines in structured output strings (#10965)
Some LLMs (particularly when not using native JSON mode) output actual newline characters inside JSON string values instead of properly escaped \n sequences. This breaks JSON parsing and causes structured output to fail.
This change adds preprocessing to escape unescaped control characters (\n, \r, \t) within JSON string values before parsing, making structured output more robust across different LLM providers.
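The preprocessing idea can be sketched like this; the function name and exact handling are illustrative, not Mastra's internal implementation:

```typescript
// Hypothetical sketch of the preprocessing step: walk the raw text, track
// whether we are inside a JSON string literal, and escape raw control
// characters found there before handing off to JSON.parse.
function escapeControlCharsInJsonStrings(raw: string): string {
  let out = '';
  let inString = false;
  let escaped = false;
  for (const ch of raw) {
    if (!inString) {
      if (ch === '"') inString = true;
      out += ch;
      continue;
    }
    if (escaped) { out += ch; escaped = false; continue; }
    if (ch === '\\') { out += ch; escaped = true; continue; }
    if (ch === '"') { out += ch; inString = false; continue; }
    if (ch === '\n') { out += '\\n'; continue; } // escape raw newline
    if (ch === '\r') { out += '\\r'; continue; }
    if (ch === '\t') { out += '\\t'; continue; }
    out += ch;
  }
  return out;
}

// An LLM emitted a real newline inside a JSON string value:
const broken = '{"text": "line one\nline two"}';
const parsed = JSON.parse(escapeControlCharsInJsonStrings(broken));
```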
Fix toolCallId propagation in agent network tool execution. The toolCallId property was undefined at runtime despite being required by TypeScript type definitions in AgentToolExecutionContext. Now properly passes the toolCallId through to the tool's context during network tool execution. (#10951)
Exports convertFullStreamChunkToMastra from the stream module for AI SDK stream chunk transformations. (#10911)
file:// URLs (#10960)
Add HonoApp interface to eliminate the as any cast when passing a Hono app to MastraServer. Users can now pass typed Hono apps directly without casting. (#10846)
Fix example type issues in server-adapters
Add "Not connected" error detection to MCP auto-reconnection (#10994)
Enhanced the MCPClient auto-reconnection feature to also detect and handle "Not connected" protocol errors. When the MCP SDK's transport layer throws this error (typically when the connection is in a disconnected state), the client will now automatically reconnect and retry the operation.
Fix default value showing on workflow form after user submits (#10983)
Move useScorers down to trace page to trigger it once for all trace spans (#10985)
Update Observability Trace Spans list UI, so a user can expand/collapse span children/descendants and can filter the list by span type or name (#10378)
Add UI to match with the mastra studio command (#10283)
Fix workflow trigger form overflow (#10986)
Add mastra studio CLI command to serve the built playground as a static server (#10283)
Move to @posthog/react, which is the proper way to use PostHog in React. It also fixes (#10967)
The client-js package had its own simpler zodToJsonSchema implementation that was missing critical features from schema-compat. This could cause issues when users pass Zod schemas with z.record() or z.date() through the MastraClient. (#10925)
Now the client uses the same implementation as the rest of the codebase, which includes the Zod v4 z.record() bug fix, date-time format conversion for z.date(), and proper handling of unrepresentable types.
Also removes the now-unused zod-to-json-schema dependency from client-js.
Fix writer.custom not working during workflow resume operations (#10921)
When a workflow step is resumed, the writer parameter was not being properly passed through, causing writer.custom() calls to fail. This fix ensures the writableStream parameter is correctly passed to both run.resume() and run.start() calls in the workflow execution engine, allowing custom events to be emitted properly during resume operations.
Fix traceMap overwrite when multiple root spans share the same traceId (#10903)
Previously, when multiple root spans shared the same traceId (e.g., multiple agent.stream calls in the same trace), the trace data would be overwritten instead of reused. This could cause spans to be orphaned or lost.
Now both exporters check if a trace already exists before creating a new one, matching the behavior of the Langfuse and PostHog exporters.
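The check-before-create pattern amounts to the following; the data shapes are illustrative stand-ins for the exporters' internal trace records:

```typescript
// Sketch: reuse an existing trace entry for a traceId instead of
// overwriting it when a second root span arrives in the same trace.
interface TraceData { traceId: string; spanIds: string[] }
const traceMap = new Map<string, TraceData>();

function registerRootSpan(traceId: string, spanId: string): TraceData {
  let trace = traceMap.get(traceId);
  if (!trace) {
    trace = { traceId, spanIds: [] };
    traceMap.set(traceId, trace); // create only when missing
  }
  trace.spanIds.push(spanId); // later root spans reuse the same record
  return trace;
}

// Two root spans in one trace (e.g. two agent.stream calls) now share
// one record instead of the second overwriting the first.
const first = registerRootSpan('trace-1', 'root-span-a');
const second = registerRootSpan('trace-1', 'root-span-b');
```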
Fix saveScore not persisting ID correctly, breaking getScoreById retrieval (#10915)
Impact: Previously, calling getScoreById after saveScore would return null because the generated ID wasn't persisted to the database. This is now fixed across all store implementations, ensuring consistent behavior and data integrity.
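The shape of the fix can be sketched with an in-memory stand-in for a store; the real change touches each store implementation's insert path:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative in-memory store: the ID generated in saveScore is the ID
// that gets persisted, so a later getScoreById lookup finds the row.
interface Score { id: string; value: number }
const table = new Map<string, Score>();

function saveScore(input: { value: number }): Score {
  const score: Score = { id: randomUUID(), value: input.value };
  table.set(score.id, score); // persist the generated ID, not a fresh one
  return score;
}

function getScoreById(id: string): Score | null {
  return table.get(id) ?? null;
}

const saved = saveScore({ value: 0.92 });
const fetched = getScoreById(saved.id); // no longer null
```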
setState is now async (#10944)
- setState must now be awaited: await setState({ key: value })
- State updates are validated against stateSchema when validateInputs is enabled (default: true)

Add human-in-the-loop support for workflows used in agents (#10871)
Fix tsconfig.json parsing when file contains JSONC comments (#10952)
The hasPaths() function now uses strip-json-comments to properly parse tsconfig.json files that contain comments. Previously, JSON.parse() would fail silently on JSONC comments, causing path aliases like @src/* to be incorrectly treated as npm scoped packages.
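The failure mode and fix can be illustrated with a naive line-comment stripper; the real fix uses the strip-json-comments package, which also handles block comments and edge cases:

```typescript
// Minimal illustration of why JSON.parse fails on JSONC and how stripping
// comments first fixes it. Naive: removes // comments outside of strings,
// which is enough for this sketch.
function stripLineComments(jsonc: string): string {
  return jsonc
    .split('\n')
    .map(line => {
      let inString = false;
      for (let i = 0; i < line.length; i++) {
        if (line[i] === '"' && line[i - 1] !== '\\') inString = !inString;
        if (!inString && line[i] === '/' && line[i + 1] === '/') {
          return line.slice(0, i); // drop the comment tail
        }
      }
      return line;
    })
    .join('\n');
}

const tsconfig = `{
  // path aliases
  "compilerOptions": { "paths": { "@src/*": ["./src/*"] } }
}`;
const parsed = JSON.parse(stripLineComments(tsconfig));
```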
Add redact option to PinoLogger for PII protection (#10919)
Exposes Pino's native redact option in PinoLogger, allowing sensitive data to be automatically redacted from logs.
import { PinoLogger } from '@mastra/loggers';
const logger = new PinoLogger({
name: 'MyApp',
redact: {
paths: ['*.password', '*.token', '*.apiKey', '*.email'],
censor: '[REDACTED]',
},
});
logger.info('User login', { username: 'john', password: 'secret123' });
// Output: { username: "john", password: "[REDACTED]", msg: "User login" }
PostgresStore was setting this.stores = {} in the constructor and only populating it in the async init() method. This broke Memory because it checks storage.stores.memory synchronously in getInputProcessors() before init() is called. (#10943)
The fix moves domain instance creation to the constructor. This is safe because pg-promise creates database connections lazily when queries are executed.
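The timing issue and fix can be sketched as follows; the classes are illustrative stand-ins for PostgresStore and pg-promise's lazily connecting clients:

```typescript
// A stand-in for pg-promise's lazy client: no connection is opened until
// the first query actually runs.
class LazyConnection {
  connected = false;
  query(sql: string): string {
    this.connected = true; // connection established on first query
    return `ran: ${sql}`;
  }
}

class Store {
  stores: { memory?: { db: LazyConnection } };
  constructor() {
    // Create domain instances eagerly in the constructor, so synchronous
    // checks like storage.stores.memory work before init() is called.
    this.stores = { memory: { db: new LazyConnection() } };
  }
}

const storage = new Store();
// Synchronous check succeeds even though no connection has been opened yet.
const hasMemory = storage.stores.memory !== undefined;
```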
Updated the OtelExporter, Bridge, and Arize packages to better implement the GenAI v1.38.0 OTel Semantic Conventions (#10591). See: https://github.com/open-telemetry/semantic-conventions/blob/v1.38.0/docs/gen-ai/README.md
feat(observability): Add tags support to OtelExporter, OtelBridge, and ArizeExporter (#10843)
This change adds support for the tracingOptions.tags feature to the OpenTelemetry-based exporters and bridge. Tags are now included as span attributes when present on root spans, following the same pattern as Braintrust and Langfuse exporters.
Changes:
- Added the mastra.tags span attribute for root spans
- mastra.tags follows the tag.tags semantic convention
Usage:
const result = await agent.generate({
messages: [{ role: 'user', content: 'Hello' }],
tracingOptions: {
tags: ['production', 'experiment-v2'],
},
});
Fixes #10771
Add time-to-first-token (TTFT) support for Braintrust integration (#10840)
Adds time_to_first_token metric to Braintrust spans, populated from the completionStartTime attribute captured when the first streaming chunk arrives.
// time_to_first_token is now automatically sent to Braintrust
// as part of span metrics during streaming
const result = await agent.stream('Hello');
Fix Braintrust Thread view not displaying LLM messages correctly (#10794)
Transforms LLM input/output format to match Braintrust's expected format for Thread view. Input is unwrapped from { messages: [...] } to direct array format, and output is unwrapped from { content: '...' } to direct string format.
Fixed Braintrust span nesting so root spans correctly show as the trace name. (#10876)
Standardize error IDs across all storage and vector stores using centralized helper functions (createStorageErrorId and createVectorErrorId). This ensures consistent error ID patterns (MASTRA_STORAGE_{STORE}_{OPERATION}_{STATUS} and MASTRA_VECTOR_{STORE}_{OPERATION}_{STATUS}) across the codebase for better error tracking and debugging. (#10913)
Update the chroma dependency (#10883)
Add disableInit option to all storage adapters (#10851)
Adds a new disableInit config option to all storage providers that allows users to disable automatic table creation/migrations at runtime. This is useful for CI/CD pipelines where you want to run migrations during deployment with elevated credentials, then run the application with disableInit: true so it doesn't attempt schema changes at runtime.
// CI/CD script - run migrations
const storage = new PostgresStore({
connectionString: DATABASE_URL,
id: 'pg-storage',
});
await storage.init();
// Runtime - skip auto-init
const storage = new PostgresStore({
connectionString: DATABASE_URL,
id: 'pg-storage',
disableInit: true,
});
fix: standardize pagination params to page/perPage with backwards compatibility for limit/offset (#10790)
- Accept both page/perPage and legacy limit/offset params for workflow runs and MCP server listing endpoints
- Add a createCombinedPaginationSchema helper for endpoints needing backwards compatibility
- Mark limit and offset as deprecated in client types

feat: Add partial response support for agent and workflow list endpoints (#10886)
Add optional partial query parameter to /api/agents and /api/workflows endpoints to return minimal data without schemas, reducing payload size for list views:
- With partial=true, tool schemas (inputSchema, outputSchema) are omitted
- With partial=true, workflow steps are replaced with a stepCount integer
- With partial=true, workflow root schemas (inputSchema, outputSchema) are omitted

# Get partial agent data (no tool schemas)
GET /api/agents?partial=true
# Get full agent data (default behavior)
GET /api/agents
# Get partial workflow data (stepCount instead of steps, no schemas)
GET /api/workflows?partial=true
# Get full workflow data (default behavior)
GET /api/workflows
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:4111' });
// Get partial agent list (smaller payload)
const partialAgents = await client.listAgents({ partial: true });
// Get full agent list with tool schemas
const fullAgents = await client.listAgents();
// Get partial workflow list (smaller payload)
const partialWorkflows = await client.listWorkflows({ partial: true });
// Get full workflow list with steps and schemas
const fullWorkflows = await client.listWorkflows();
Add a v1/workflow-stream-vnext codemod. This codemod renames streamVNext(), resumeStreamVNext(), and observeStreamVNext() to their "non-VNext" counterparts. (#10802)
Add time-to-first-token (TTFT) support for Langfuse integration (#10781)
Adds completionStartTime to model generation spans, which Langfuse uses to calculate TTFT metrics. The timestamp is automatically captured when the first content chunk arrives during streaming.
// completionStartTime is now automatically captured and sent to Langfuse
// enabling TTFT metrics in your Langfuse dashboard
const result = await agent.stream('Hello');
fix: generate unique text IDs for Anthropic/Google providers (#10740)
Workaround for duplicate text-start/text-end IDs in multi-step agentic flows.
The @ai-sdk/anthropic and @ai-sdk/google providers use numeric indices ("0", "1", etc.) for text block IDs that reset for each LLM call. This caused duplicate IDs when an agent does TEXT → TOOL → TEXT, breaking message ordering and storage.
The fix replaces numeric IDs with UUIDs, maintaining a map per step so text-start, text-delta, and text-end chunks for the same block share the same UUID. OpenAI's UUIDs pass through unchanged.
Related: #9909
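The per-step remapping can be sketched like this; the function names are illustrative, not the fix's actual code:

```typescript
import { randomUUID } from 'node:crypto';

// One map per step: chunks that share a provider-local index ("0", "1", ...)
// within a step get the same UUID, while a new step gets a fresh map, so a
// reused index "0" no longer collides with the previous text block.
function createIdRemapper() {
  const map = new Map<string, string>();
  return (providerId: string): string => {
    let id = map.get(providerId);
    if (!id) {
      id = randomUUID();
      map.set(providerId, id);
    }
    return id;
  };
}

const stepOne = createIdRemapper();
const startId = stepOne('0'); // text-start for block "0"
const deltaId = stepOne('0'); // text-delta for the same block

const stepTwo = createIdRemapper(); // next LLM call resets provider indices
const nextStepId = stepTwo('0'); // same index, different UUID
```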
Fix sub-agent requestContext propagation in listAgentTools (#10844)
Sub-agents with dynamic model configurations were broken because requestContext was not being passed to getModel() when creating agent tools. This caused sub-agents using function-based model configurations to receive an empty context instead of the parent's context.
No code changes required for consumers - this fix restores expected behavior for dynamic model configurations in sub-agents.
Fix ToolStream type error when piping streams with different types (#10845)
Changes ToolStream to extend WritableStream<unknown> instead of WritableStream<T>. This fixes the TypeScript error when piping objectStream or fullStream to writer in workflow steps.
Before:
// TypeError: ToolStream<ChunkType> is not assignable to WritableStream<Partial<StoryPlan>>
await response.objectStream.pipeTo(writer);
After:
// Works without type errors
await response.objectStream.pipeTo(writer);
feat: add native Perplexity provider support (#10885)
When sending the first message to a new thread with PostgresStore, users would get a "Thread not found" error. This happened because the thread was created in memory but not persisted to the database before the MessageHistory output processor tried to save messages. (#10881)
Before:
threadObject = await memory.createThread({
// ...
saveThread: false, // thread not in DB yet
});
// Later: MessageHistory calls saveMessages() -> PostgresStore throws "Thread not found"
After:
threadObject = await memory.createThread({
// ...
saveThread: true, // thread persisted immediately
});
// MessageHistory can now save messages without error
Emit error chunk and call onError when agent workflow step fails (#10907)
When a workflow step fails (e.g., tool not found), the error is now properly emitted as an error chunk to the stream and the onError callback is called. This fixes the issue where agent.generate() would throw "promise 'text' was not resolved or rejected" instead of the actual error message.
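The behavior can be sketched like this (chunk shape and `failStep` are illustrative assumptions, not Mastra's actual stream internals):

```typescript
// Hedged sketch: on step failure, push an error chunk into the stream and
// invoke onError, instead of leaving the text promise unresolved.
type Chunk =
  | { type: "text-delta"; text: string }
  | { type: "error"; error: unknown };

function failStep(
  enqueue: (c: Chunk) => void,
  onError: ((e: unknown) => void) | undefined,
  err: unknown,
): void {
  enqueue({ type: "error", error: err }); // surface the error in the stream
  onError?.(err); // and notify the callback
}

const chunks: Chunk[] = [];
let seen: unknown;
failStep(c => chunks.push(c), e => (seen = e), new Error("tool not found"));
console.log(chunks[0].type); // error
console.log((seen as Error).message); // tool not found
```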
fix(core): use agent description when converting agent to tool (#10879)
Adds native @ai-sdk/deepseek provider support instead of using the OpenAI-compatible fallback. (#10822)
const agent = new Agent({
model: 'deepseek/deepseek-reasoner',
});
// With provider options for reasoning
const response = await agent.generate('Solve this problem', {
providerOptions: {
deepseek: {
thinking: { type: 'enabled' },
},
},
});
Also updates the doc generation scripts so DeepSeek provider options show up in the generated docs.
Return state too if includeState: true is in outputOptions and workflow run is not successful (#10806)
feat: Add partial response support for agent and workflow list endpoints (#10886)
Add optional partial query parameter to /api/agents and /api/workflows endpoints to return minimal data without schemas, reducing payload size for list views:
- partial=true: tool schemas (inputSchema, outputSchema) are omitted
- partial=true: workflow steps are replaced with a stepCount integer
- partial=true: workflow root schemas (inputSchema, outputSchema) are omitted
# Get partial agent data (no tool schemas)
GET /api/agents?partial=true
# Get full agent data (default behavior)
GET /api/agents
# Get partial workflow data (stepCount instead of steps, no schemas)
GET /api/workflows?partial=true
# Get full workflow data (default behavior)
GET /api/workflows
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:4111' });
// Get partial agent list (smaller payload)
const partialAgents = await client.listAgents({ partial: true });
// Get full agent list with tool schemas
const fullAgents = await client.listAgents();
// Get partial workflow list (smaller payload)
const partialWorkflows = await client.listWorkflows({ partial: true });
// Get full workflow list with steps and schemas
const fullWorkflows = await client.listWorkflows();
Fix processInputStep so it runs correctly. (#10909)
Remove cast as any from MastraServer in deployer (#10796)
Fixed a bug where ESM shims were incorrectly injected even when the user had already declared __filename or __dirname (#10809)
Add a simple virtual check for the tsconfig-paths plugin, which misbehaves on CI (#10832)
Add disableInit option to all storage adapters (#10851)
Adds a new disableInit config option to all storage providers that allows users to disable automatic table creation/migrations at runtime. This is useful for CI/CD pipelines where you want to run migrations during deployment with elevated credentials, then run the application with disableInit: true so it doesn't attempt schema changes at runtime.
// CI/CD script - run migrations
const storage = new PostgresStore({
connectionString: DATABASE_URL,
id: 'pg-storage',
});
await storage.init();
// Runtime - skip auto-init
const storage = new PostgresStore({
connectionString: DATABASE_URL,
id: 'pg-storage',
disableInit: true,
});
Fix README and ChangeLog for hono and express adapters (#10873)
Fix abort signal being prematurely aborted in Express adapter (#10901)
The abort signal was being triggered when the request body was parsed by express.json() middleware, causing agent.generate() to return empty responses with 0 tokens. Changed from req.on('close') to res.on('close') to properly detect client disconnection instead of body consumption.
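The wiring can be illustrated with plain event emitters (a sketch, not the adapter's code; `makeAbortSignal` is a hypothetical helper):

```typescript
import { EventEmitter } from "node:events";

// Hedged sketch: tie the AbortController to the *response* close event, not
// the request. req emits 'close' as soon as its body stream is fully consumed
// (e.g. by express.json()), while res emits 'close' on client disconnect.
function makeAbortSignal(res: EventEmitter): AbortSignal {
  const controller = new AbortController();
  res.on("close", () => controller.abort());
  return controller.signal;
}

// Simulated request/response emitters for illustration
const req = new EventEmitter();
const res = new EventEmitter();
const signal = makeAbortSignal(res);

req.emit("close"); // body consumed — must NOT abort
console.log(signal.aborted); // false

res.emit("close"); // client disconnected — now abort
console.log(signal.aborted); // true
```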
Fix Langfuse exporter to reuse existing traces when multiple root spans share the same traceId. This resolves an issue where multiple agent.stream() calls with client-side tools would create separate traces in Langfuse instead of grouping them under a single trace. The exporter now checks if a trace already exists before creating a new one, allowing proper trace consolidation for conversations with multiple agent interactions. (#10838)
Fixes #8830
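The check-before-create logic can be sketched as a small registry (an illustration under assumed names, not the Langfuse exporter's actual code):

```typescript
// Hedged sketch: remember traceIds that were already exported so multiple
// root spans sharing a traceId reuse one trace instead of creating new ones.
class TraceRegistry {
  private seen = new Map<string, { id: string }>();

  getOrCreate(traceId: string): { trace: { id: string }; created: boolean } {
    const existing = this.seen.get(traceId);
    if (existing) return { trace: existing, created: false };
    const trace = { id: traceId };
    this.seen.set(traceId, trace);
    return { trace, created: true };
  }
}

const reg = new TraceRegistry();
console.log(reg.getOrCreate("t1").created); // true  — first root span creates the trace
console.log(reg.getOrCreate("t1").created); // false — second root span reuses it
```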
Link Langfuse prompts and helper functions (#10738)
Add support for RequestOptions in elicitation requests to allow custom timeouts and request cancellation. (#10849)
You can now pass RequestOptions when sending elicitation requests:
// Within a tool's execute function
const result = await options.mcp.elicitation.sendRequest(
{
message: 'Please provide your email',
requestedSchema: {
type: 'object',
properties: { email: { type: 'string' } },
},
},
{ timeout: 120000 }, // Custom 2-minute timeout
);
The RequestOptions parameter supports:
- timeout: Custom timeout in milliseconds (default: 60000ms)
- signal: AbortSignal for request cancellation
Fixes #10834
Fix HTTP SSE fallback to only trigger for 400/404/405 per MCP spec (#10803)
With @modelcontextprotocol/sdk 1.24.0+, SSE fallback now only occurs for HTTP status codes 400, 404, and 405. Other errors (like 401 Unauthorized) are re-thrown for proper handling.
Older SDK versions maintain the existing behavior (always fallback to SSE).
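The decision rule is simple enough to state directly (a sketch; `shouldFallbackToSse` is a hypothetical helper name):

```typescript
// Hedged sketch of the fallback rule: per the MCP spec, fall back from
// Streamable HTTP to SSE only for 400, 404, and 405 responses.
const SSE_FALLBACK_STATUSES = new Set([400, 404, 405]);

function shouldFallbackToSse(status: number): boolean {
  return SSE_FALLBACK_STATUSES.has(status);
}

console.log(shouldFallbackToSse(405)); // true
console.log(shouldFallbackToSse(401)); // false — 401 is re-thrown for auth handling
```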
Add injectable fetch into mcp client integration. Follows from modelcontextprotocol/sdk implementation of Streamable HTTP and SSE transports. (#10780)
Add time-to-first-token (TTFT) support for Braintrust integration (#10840)
Adds time_to_first_token metric to Braintrust spans, populated from the completionStartTime attribute captured when the first streaming chunk arrives.
// time_to_first_token is now automatically sent to Braintrust
// as part of span metrics during streaming
const result = await agent.stream('Hello');
Consolidated tool-output chunks from nested agents into single tool-result spans (#10836)
feat(observability): Add tags support to OtelExporter, OtelBridge, and ArizeExporter (#10843)
This change adds support for the tracingOptions.tags feature to the OpenTelemetry-based exporters and bridge. Tags are now included as span attributes when present on root spans, following the same pattern as Braintrust and Langfuse exporters.
Changes:
- mastra.tags span attribute for root spans
- mastra.tags mapped to the tag.tags semantic convention
Usage:
const result = await agent.generate({
messages: [{ role: 'user', content: 'Hello' }],
tracingOptions: {
tags: ['production', 'experiment-v2'],
},
});
Fixes #10771
Add StudioConfig and associated context to manage the headers and base URL of a given Mastra instance. (#10804)
This also introduces a header form available from the side bar to edit those headers.
Fix select options overflow when list is long by adding maximum height (#10813)
Removed unneeded calls to the message endpoint when the user is on a new thread (#10872)
Add prominent warning banner in observability UI when token limits are exceeded (finishReason: 'length'). (#10835)
When a model stops generating due to token limits, the span details now display a warning, helping developers quickly identify and debug token limit issues in the observability page.
Fixes #8828
Add a specific page for the studio settings and moved the headers configuration form here instead of in a dialog (#10812)
Add tags support to PostHog exporter (#10785)
Include tracingOptions.tags in PostHog event properties as $ai_tags for root spans, enabling filtering and segmentation in PostHog.
const result = await agent.generate({
messages: [{ role: 'user', content: 'Hello' }],
tracingOptions: {
tags: ['production', 'experiment-v2'],
},
});
// PostHog event now includes: { $ai_tags: ["production", "experiment-v2"] }
fix: standardize pagination params to page/perPage with backwards compatibility for limit/offset (#10790)
- Accept page/perPage and legacy limit/offset params for workflow runs and MCP server listing endpoints
- Add a createCombinedPaginationSchema helper for endpoints needing backwards compatibility
- Mark limit and offset as deprecated in client types
Now when you run npx mastra@beta init, the CLI detects it's running from the beta dist-tag and installs the correct versions. (#10821)
Add braintrustLogger as a parameter to BraintrustExporter, allowing developers to pass in their own Braintrust logger. (#10698)
Add support for custom fetch function in MastraClient to enable environments like Tauri that require custom fetch implementations to avoid timeout errors. (#10679)
You can now pass a custom fetch function when creating a MastraClient:
import { MastraClient } from '@mastra/client-js';
// Before: Only global fetch was available
const client = new MastraClient({
baseUrl: 'http://your-api-url',
});
// After: Custom fetch can be passed
const client = new MastraClient({
baseUrl: 'http://your-api-url',
fetch: customFetch, // Your custom fetch implementation
});
If no custom fetch is provided, it falls back to the global fetch function, maintaining backward compatibility.
Fixes #10673
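The fallback behavior can be sketched in one line (`resolveFetch` is an illustrative helper, not the client's internal name):

```typescript
// Hedged sketch: prefer the user-supplied fetch, otherwise fall back to the
// global fetch, preserving backward compatibility.
type FetchFn = typeof fetch;

function resolveFetch(custom?: FetchFn): FetchFn {
  return custom ?? globalThis.fetch;
}

const customFetch = (() => {
  throw new Error("illustrative stub");
}) as unknown as FetchFn;

console.log(resolveFetch(customFetch) === customFetch); // true
console.log(resolveFetch() === globalThis.fetch); // true
```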
Add timeTravel APIs and add timeTravel feature to studio (#10757)
feat: Add partial response support for agent and workflow list endpoints (#10906)
Add optional partial query parameter to /api/agents and /api/workflows endpoints to return minimal data without schemas, reducing payload size for list views:
- partial=true: tool schemas (inputSchema, outputSchema) are omitted
- partial=true: workflow steps are replaced with a stepCount integer
- partial=true: workflow root schemas (inputSchema, outputSchema) are omitted
# Get partial agent data (no tool schemas)
GET /api/agents?partial=true
# Get full agent data (default behavior)
GET /api/agents
# Get partial workflow data (stepCount instead of steps, no schemas)
GET /api/workflows?partial=true
# Get full workflow data (default behavior)
GET /api/workflows
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:4111' });
// Get partial agent list (smaller payload)
const partialAgents = await client.listAgents({ partial: true });
// Get full agent list with tool schemas
const fullAgents = await client.listAgents();
// Get partial workflow list (smaller payload)
const partialWorkflows = await client.listWorkflows({ partial: true });
// Get full workflow list with steps and schemas
const fullWorkflows = await client.listWorkflows();
Handle unexpected JSON parse issues: log the error but don't fail (#10640)
Emit error chunk and call onError when agent workflow step fails (#10905)
When a workflow step fails (e.g., tool not found), the error is now properly emitted as an error chunk to the stream and the onError callback is called. This fixes the issue where agent.generate() would throw "promise 'text' was not resolved or rejected" instead of the actual error message.
Improved typing for workflow.then to allow the provided step's inputSchema to be a subset of the previous step's outputSchema. It now errors if the provided step's inputSchema is a superset of the previous step's outputSchema. (#10775)
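The real check happens at the type level; a runtime sketch of the same subset rule (`isSubset` is a hypothetical helper for illustration):

```typescript
// Hedged sketch: a step's inputSchema keys must be a subset of the previous
// step's outputSchema keys, or workflow.then reports a type error.
function isSubset(inputKeys: string[], prevOutputKeys: string[]): boolean {
  return inputKeys.every(k => prevOutputKeys.includes(k));
}

console.log(isSubset(["city"], ["city", "country"])); // true  — allowed
console.log(isSubset(["city", "zip"], ["city"])); // false — superset, rejected
```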
Fix backport (#10599)
Fix type issue with workflow .parallel() when passing multiple steps, one or more of which has a resumeSchema provided. (#10712)
Handle state update and bailing in foreach steps (#10826)
Fix discriminatedUnion schema information lost when json schema is converted to zod (#10764)
Add restart method to workflow run that allows restarting an active workflow run (#10703)
Add status filter to getWorkflowRuns
Add automatic restart to restart active workflow runs when server starts
Add timeTravel to workflows. This makes it possible to start a workflow run from a particular step in the workflow (#10717)
Example code:
const result = await run.timeTravel({
step: 'step2',
inputData: {
value: 'input',
},
});
Fixed OpenAI reasoning message merging so distinct reasoning items are no longer dropped when they share a message ID. Prevents downstream errors where a function call is missing its required "reasoning" item. See #9005. (#10729)
Commit registered uncommitted workflows automatically (#10829)
Safe stringify objects in telemetry (#10918)
Improve nested ts-config paths resolution for NX users (#10766)
Fix dev playground auth to allow non-protected paths to bypass authentication when MASTRA_DEV=true, while still requiring the x-mastra-dev-playground header for protected endpoints (#10723)
Fixed a bug where ESM shims were incorrectly injected even when the user had already declared __filename or __dirname (#10823)
Fix installing external peer deps for cloud deployer (#10787)
Adds --force and --legacy-peer-deps=false flags to npm install command to ensure peer dependencies for external packages are properly installed in the mastra output directory. The --legacy-peer-deps=false flag overrides package manager settings (like pnpm's default of true) to ensure consistent behavior.
Fix select options overflow when list is long by adding maximum height (#10833)
Fixed module not found errors during production builds by skipping transitive dependency validation. Production builds now only bundle direct dependencies, which also results in faster deployment times. (#10589)
Fixes #10116 Fixes #10055 Fixes #9951
Add framework-agnostic stream handlers for use outside of Hono/Mastra server (#10628)
- handleChatStream: Standalone handler for streaming agent chat in AI SDK format
- handleWorkflowStream: Standalone handler for streaming workflow execution in AI SDK format
- handleNetworkStream: Standalone handler for streaming agent network execution in AI SDK format
These functions accept all arguments explicitly and return a ReadableStream, making them usable in any framework (Next.js App Router, Express, etc.) without depending on Hono context.
Example usage:
import { handleChatStream } from '@mastra/ai-sdk';
import { createUIMessageStreamResponse } from 'ai';
export async function POST(req: Request) {
const params = await req.json();
const stream = await handleChatStream({
mastra,
agentId: 'weatherAgent',
params,
});
return createUIMessageStreamResponse({ stream });
}
New exports:
Support streaming agent text chunks from workflow-step-output (#10540)
Adds support for streaming text and tool call chunks from agents running inside workflows via the workflow-step-output event. When you pipe an agent's stream into a workflow step's writer, the text chunks, tool calls, and other streaming events are automatically included in the workflow stream and converted to UI messages.
Features:
- includeTextStreamParts option to WorkflowStreamToAISDKTransformer (defaults to true)
- isMastraTextStreamChunk type guard to identify Mastra chunks with text streaming data
- Text chunks: text-start, text-delta, text-end
- Tool chunks: tool-call, tool-result
- Tests in transformers.test.ts
- Supported by workflowRoute()
Example:
const planActivities = createStep({
execute: async ({ mastra, writer }) => {
const agent = mastra?.getAgent('weatherAgent');
const response = await agent.stream('Plan activities');
await response.fullStream.pipeTo(writer);
return { activities: await response.text };
},
});
When served via workflowRoute(), the UI receives incremental text updates as the agent generates its response, providing a smooth streaming experience.
Added support for resuming agent streams in the chat route. You can now pass resumeData in the request body to continue a previous agent stream, enabling long-running conversations and multi-step agent workflows. (#10448)
Add Better Auth authentication provider (#10658)
Adds a new authentication provider for Better Auth, a self-hosted, open-source authentication framework.
import { betterAuth } from 'better-auth';
import { MastraAuthBetterAuth } from '@mastra/auth-better-auth';
import { Mastra } from '@mastra/core';
// Create your Better Auth instance
const auth = betterAuth({
database: {
provider: 'postgresql',
url: process.env.DATABASE_URL!,
},
emailAndPassword: {
enabled: true,
},
});
// Create the Mastra auth provider
const mastraAuth = new MastraAuthBetterAuth({
auth,
});
// Use with Mastra
const mastra = new Mastra({
server: {
auth: mastraAuth,
},
});
feat(storage): support querying messages from multiple threads (#10663)
- threadId: string | string[] was being passed to places expecting Scalar type
- listMessages across all adapters when threadId is an array
- _getIncludedMessages to look up message threadId by ID (since message IDs are globally unique)
- msg-idx:{messageId} index for O(1) message lookups (backwards compatible with fallback to scan for old messages, with automatic backfill)
fix: ensure score responses match saved payloads for Mastra Stores. (#10557)
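The threadId union described above can be handled by normalizing it to an array before building the query. A minimal sketch, with hypothetical helper names (not Mastra's actual internals):

```typescript
// A storage adapter can accept either a single thread ID or an array.
type ThreadId = string | string[];

// Normalize the union to an array so downstream query code has one shape.
function normalizeThreadIds(threadId: ThreadId): string[] {
  return Array.isArray(threadId) ? threadId : [threadId];
}

// Build a SQL-ish IN clause from the normalized list.
function buildThreadFilter(threadId: ThreadId): { sql: string; params: string[] } {
  const ids = normalizeThreadIds(threadId);
  const placeholders = ids.map(() => '?').join(', ');
  return { sql: `thread_id IN (${placeholders})`, params: ids };
}
```

This keeps the single-thread call sites unchanged while letting adapters query across threads in one pass.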
Unify transformScoreRow functions across storage adapters (#10648)
Added a unified transformScoreRow function in @mastra/core/storage that provides schema-driven row transformation for score data. This eliminates code duplication across 10 storage adapters while maintaining store-specific behavior through configurable options:
- preferredTimestampFields: Preferred source fields for timestamps (PostgreSQL, Cloudflare D1)
- convertTimestamps: Convert timestamp strings to Date objects (MSSQL, MongoDB, ClickHouse)
- nullValuePattern: Skip values matching pattern (ClickHouse's '_null_')
- fieldMappings: Map source column names to schema fields (LibSQL's additionalLLMContext)
Each store adapter now uses the unified function with appropriate options, reducing ~200 lines of duplicate transformation logic while ensuring consistent behavior across all storage backends.
Add support for custom fetch function in MastraClient to enable environments like Tauri that require custom fetch implementations to avoid timeout errors. (#10677)
You can now pass a custom fetch function when creating a MastraClient:
import { MastraClient } from '@mastra/client-js';
// Before: Only global fetch was available
const client = new MastraClient({
baseUrl: 'http://your-api-url',
});
// After: Custom fetch can be passed
const client = new MastraClient({
baseUrl: 'http://your-api-url',
fetch: customFetch, // Your custom fetch implementation
});
If no custom fetch is provided, it falls back to the global fetch function, maintaining backward compatibility.
Fixes #10673
The client-js package had its own simpler zodToJsonSchema implementation that was missing critical features from schema-compat. This could cause issues when users pass Zod schemas with z.record() or z.date() through the MastraClient. (#10730)
Now the client uses the same implementation as the rest of the codebase, which includes the Zod v4 z.record() bug fix, date-time format conversion for z.date(), and proper handling of unrepresentable types.
Also removes the now-unused zod-to-json-schema dependency from client-js.
Fix wrong arguments type in list workflow runs (#10755)
Adjust the generate / stream types to accept tracingOptions (#10742)
fix(a2a): fix streaming and memory support for A2A protocol (#10653)
Client (@mastra/client-js):
- sendStreamingMessage to properly return a streaming response instead of attempting to parse it as JSON
Server (@mastra/server):
- contextId as threadId for memory persistence across conversations
- resourceId via params.metadata.resourceId or message.metadata.resourceId, falling back to agentId
Convex storage and vector adapter improvements: (#10421)
- id field for Mastra record ID and by_record_id index for efficient lookups
- saveMessages and updateMessages to automatically update thread updatedAt timestamps
- listMessages to properly fetch messages from different threads when using include
- saveResource to preserve undefined metadata instead of converting to empty object
- ConvexAdminClient to use Convex HTTP API directly with proper admin authentication
- @mastra/convex/server for easy schema setup
Changed .branch() result schema to make all branch output fields optional. (#10693)
Breaking change: Branch outputs are now optional since only one branch executes at runtime. Update your workflow schemas to handle optional branch results.
Before:
const workflow = createWorkflow({...})
.branch([
[condition1, stepA], // outputSchema: { result: z.string() }
[condition2, stepB], // outputSchema: { data: z.number() }
])
.map({
finalResult: { step: stepA, path: 'result' } // Expected non-optional
});
After:
const workflow = createWorkflow({...})
.branch([
[condition1, stepA],
[condition2, stepB],
])
.map({
finalResult: {
step: stepA,
path: 'result' // Now optional - provide fallback
}
});
Why: Branch conditionals execute only one path, so non-executed branches don't produce outputs. The type system now correctly reflects this runtime behavior.
Related issue: https://github.com/mastra-ai/mastra/issues/10642
Memory system now uses processors. Memory processors (MessageHistory, SemanticRecall, WorkingMemory) are now exported from @mastra/memory/processors and automatically added to the agent pipeline based on your memory config. Core processors (ToolCallFilter, TokenLimiter) remain in @mastra/core/processors. (#9254)
Add reserved keys in RequestContext for secure resourceId/threadId setting from middleware (#10657)
This allows middleware to securely set resourceId and threadId via reserved keys in RequestContext (MASTRA_RESOURCE_ID_KEY and MASTRA_THREAD_ID_KEY), which take precedence over client-provided values for security.
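The precedence rule can be sketched as follows. The reserved key names come from the changelog; their string values and the resolver itself are illustrative, not Mastra's actual implementation:

```typescript
// Reserved keys middleware uses to set identity values securely.
const MASTRA_RESOURCE_ID_KEY = '__mastra_resourceId'; // hypothetical value

// Values set by middleware under the reserved key take precedence over
// anything the client provided in the request body.
function resolveResourceId(
  requestContext: Map<string, string>,
  clientValue?: string,
): string | undefined {
  return requestContext.get(MASTRA_RESOURCE_ID_KEY) ?? clientValue;
}
```

The same pattern applies to threadId via MASTRA_THREAD_ID_KEY, which prevents clients from impersonating another user's resource or thread.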
feat(workflows): add suspendData parameter to step execute function (#10734)
Adds a new suspendData parameter to workflow step execute functions that provides access to the data originally passed to suspend() when the step was suspended. This enables steps to access context about why they were suspended when they are later resumed.
New Features:
- suspendData parameter automatically populated in step execute function when resuming
- suspendData is typed by the step's suspendSchema
Example:
const step = createStep({
suspendSchema: z.object({ reason: z.string() }),
resumeSchema: z.object({ approved: z.boolean() }),
execute: async ({ suspend, suspendData, resumeData }) => {
if (!resumeData?.approved) {
return await suspend({ reason: 'Approval required' });
}
// Access original suspend data when resuming
console.log(`Resuming after: ${suspendData?.reason}`);
return { result: 'Approved' };
},
});
Adds trace tagging support to the BrainTrust and Langfuse tracing exporters. (#10765)
Add messageList parameter to processOutputStream for accessing remembered messages during streaming (#10608)
Unexpected JSON parse issue: log the error but don't fail (#10241)
Fixed a bug in agent networks where sometimes the task name was empty (#10629)
Adds tool-result and tool-error chunks to the processor.processOutputStream path. Processors now have access to these two chunks. (#10645)
Include .input in workflow results for both engines and remove the option to omit them from Inngest workflows. (#10688)
getSpeakers endpoint returns an empty array if voice is not configured on the agent, and getListeners endpoint returns { enabled: false } if voice is not configured on the agent. (#10560)
When no voice is set on an agent, don't throw an error; voice now defaults to undefined rather than DefaultVoice, which throws when accessed.
SimpleAuth and improved CloudAuth (#10490)
When LLMs like Claude Sonnet 4.5 and Gemini 2.4 call tools with all-optional parameters, they send args: undefined instead of args: {}. This caused validation to fail with "root: Required". (#10728)
The fix normalizes undefined/null to {} for object schemas and [] for array schemas before validation.
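The normalization described above can be sketched like this (illustrative, not the exact Mastra code):

```typescript
// Coerce undefined/null tool args to an empty container before schema
// validation, so all-optional object or array schemas pass.
type SchemaKind = 'object' | 'array';

function normalizeToolArgs(args: unknown, kind: SchemaKind): unknown {
  if (args === undefined || args === null) {
    return kind === 'object' ? {} : [];
  }
  return args; // real values pass through untouched
}
```

With this in place, a model sending args: undefined for an all-optional schema validates the same as args: {}.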
Fixed tool validation error messages so logs show Zod validation errors directly instead of hiding them inside structured JSON. (#10579)
Fix error when spreading config objects in Mastra constructor (#10718)
Adds validation guards to handle undefined/null values that can occur when config objects are spread ({ ...config }). Previously, if getters or non-enumerable properties resulted in undefined values during spread, the constructor would throw cryptic errors when accessing .id or .name on undefined objects.
Fix GPT-5/o3 reasoning models failing with "required reasoning item" errors when using memory with tools. Empty reasoning is now stored with providerMetadata to preserve OpenAI's item_reference. (#10585)
Fix generateTitle model type to accept AI SDK LanguageModelV2 (#10541)
Updated the generateTitle.model config option to accept MastraModelConfig instead of MastraLanguageModel. This allows users to pass raw AI SDK LanguageModelV2 models (e.g., anthropic.languageModel('claude-3-5-haiku-20241022')) directly without type errors.
Previously, passing a standard LanguageModelV2 would fail because MastraLanguageModelV2 has different doGenerate/doStream return types. Now MastraModelConfig is used consistently across:
- memory/types.ts - generateTitle.model config
- agent.ts - genTitle, generateTitleFromUserMessage, resolveTitleGenerationConfig
- agent-legacy.ts - AgentLegacyCapabilities interface
Fix message ordering when using toAISdkV5Messages or prepareStep (#10686)
Messages without createdAt timestamps were getting shuffled because they all received identical timestamps during conversion. Now messages are assigned monotonically increasing timestamps via generateCreatedAt(), preserving input order.
Before:
Input: [user: "hello", assistant: "Hi!", user: "bye"]
Output: [user: "bye", assistant: "Hi!", user: "hello"] // shuffled!
After:
Input: [user: "hello", assistant: "Hi!", user: "bye"]
Output: [user: "hello", assistant: "Hi!", user: "bye"] // correct order
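A simplified sketch of the monotonic timestamp assignment (generateCreatedAt in the changelog is the real helper; this is an illustration of the idea):

```typescript
interface Msg {
  role: string;
  content: string;
  createdAt?: Date;
}

type StampedMsg = Msg & { createdAt: Date };

// Assign strictly increasing timestamps to messages missing createdAt,
// so later sorts by timestamp preserve the original input order.
function assignCreatedAt(messages: Msg[], start = Date.now()): StampedMsg[] {
  let last = start;
  return messages.map(m => {
    if (m.createdAt) {
      last = Math.max(last, m.createdAt.getTime());
      return { ...m, createdAt: m.createdAt };
    }
    last += 1; // strictly increasing, never identical
    return { ...m, createdAt: new Date(last) };
  });
}
```

Because every assigned timestamp is strictly greater than the previous one, a stable or unstable sort by createdAt can no longer shuffle the messages.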
Fix Scorer not using custom gateways registered with Mastra (#10778)
Scorers now have access to custom gateways when resolving models. Previously, calling resolveModelConfig in the scorer didn't pass the Mastra instance, so custom gateways were never available.
Fix workflow run status not being updated from storage snapshot in createRun (#10664)
When createRun is called with an existing runId, it now correctly updates the run's status from the storage snapshot. This fixes the issue where different workflow instances (e.g., different API requests) would get a run with 'pending' status instead of the correct status from storage (e.g., 'suspended').
Pass resourceId and threadId to network agent's subAgent when it has its own memory (#10592)
use agent.getMemory to fetch the memory instance on the Agent class to make sure that storage gets set if memory doesn't set it itself. (#10556)
Built-in processors that use internal agents (PromptInjectionDetector, ModerationProcessor, PIIDetector, LanguageDetector, StructuredOutputProcessor) now accept providerOptions to control model behavior. (#10651)
This lets you pass provider-specific settings like reasoningEffort for OpenAI thinking models:
const processor = new PromptInjectionDetector({
model: 'openai/o1-mini',
threshold: 0.7,
strategy: 'block',
providerOptions: {
openai: {
reasoningEffort: 'low',
},
},
});
Improved typing for workflow.then to allow the provided steps inputSchema to be a subset of the previous steps outputSchema. Also errors if the provided steps inputSchema is a superset of the previous steps outputSchema. (#10763)
Fix type issue with workflow .parallel() when passing multiple steps, one or more of which has a resumeSchema provided. (#10708)
Adds bidirectional integration with otel tracing via a new @mastra/otel-bridge package. (#10482)
Adds processInputStep method to the Processor interface. Unlike processInput which runs once at the start, this runs at each step of the agentic loop (including tool call continuations). (#10650)
const processor: Processor = {
id: 'my-processor',
processInputStep: async ({ messages, messageList, stepNumber, systemMessages }) => {
// Transform messages at each step before LLM call
return messageList;
},
};
When using output processors with agent.generate(), result.text was returning the unprocessed LLM response instead of the processed text. (#10735)
Before:
const result = await agent.generate('hello');
result.text; // "hello world" (unprocessed)
result.response.messages[0].content[0].text; // "HELLO WORLD" (processed)
After:
const result = await agent.generate('hello');
result.text; // "HELLO WORLD" (processed)
The bug was caused by the text delayed promise being resolved twice - first correctly with the processed text, then overwritten with the unprocessed buffered text.
Refactored default engine to fit durable execution better, and the inngest engine to match. (#10627) Also fixes requestContext persistence by relying on inngest step memoization.
Unifies some of the stepResults and error formats in both engines.
Allow direct access to the server app handle from the Mastra instance. (#10598)
// Before: HTTP request to localhost
const response = await fetch(`http://localhost:5000/api/tools`);
// After: Direct call via app.fetch()
const app = mastra.getServerApp<Hono>();
const response = await app.fetch(new Request('http://internal/api/tools'));
- mastra.getServerApp<T>() to access the underlying Hono/Express app
- mastra.getMastraServer() and mastra.setMastraServer() for adapter access
- MastraServerBase class in @mastra/core/server for adapter implementations
Fix network agent not getting text-delta from subAgent when .stream is used (#10533)
Fix discriminatedUnion schema information lost when json schema is converted to zod (#10500)
Fix writer.custom not working during workflow resume operations (#10720)
When a workflow step is resumed, the writer parameter was not being properly passed through, causing writer.custom() calls to fail. This fix ensures the writableStream parameter is correctly passed to both run.resume() and run.start() calls in the workflow execution engine, allowing custom events to be emitted properly during resume operations.
Fix corrupted provider-registry.json file in global cache and regenerate corrupted files (#10606)
Fix TypeScript error when using Zod schemas in defaultOptions.structuredOutput (#10710)
Previously, defining structuredOutput.schema in defaultOptions would cause a TypeScript error because the type only accepted undefined. Now any valid OutputSchema is correctly accepted.
Add support for providerOptions when defining tools. This allows developers to specify provider-specific configurations (like Anthropic's cacheControl) per tool. (#10649)
createTool({
id: 'my-tool',
providerOptions: {
anthropic: { cacheControl: { type: 'ephemeral' } },
},
// ...
});
Fixed OpenAI reasoning message merging so distinct reasoning items are no longer dropped when they share a message ID. Prevents downstream errors where a function call is missing its required "reasoning" item. See #9005. (#10614)
Improve nested ts-config paths resolution for NX users (#6243)
Fix dev playground auth to allow non-protected paths to bypass authentication when MASTRA_DEV=true, while still requiring the x-mastra-dev-playground header for protected endpoints (#10722)
Unified MastraServer API with MCP transport routes (#10644)
Breaking Changes:
- HonoServerAdapter to MastraServer in @mastra/hono
- ExpressServerAdapter to MastraServer in @mastra/express
- ServerAdapter to MastraServerBase in @mastra/server
New Features:
- /api/mcp/:serverId/mcp (HTTP) and /api/mcp/:serverId/sse (SSE)
- express.json() middleware compatibility for MCP routes
- @mastra/server/auth
Testing:
- @internal/server-adapter-test-utils
Fixed module not found errors during production builds by skipping transitive dependency validation. Production builds now only bundle direct dependencies, which also results in faster deployment times. (#10587)
Fixes #10116 Fixes #10055 Fixes #9951
Allow direct access to server app handle directly from Mastra instance. (#10598)
// Before: HTTP request to localhost
const response = await fetch(`http://localhost:5000/api/tools`);
// After: Direct call via app.fetch()
const app = mastra.getServerApp<Hono>();
const response = await app.fetch(new Request('http://internal/api/tools'));
mastra.getServerApp<T>() to access the underlying Hono/Express appmastra.getMastraServer() and mastra.setMastraServer() for adapter accessMastraServerBase class in @mastra/core/server for adapter implementationsFixed bundling to correctly exclude subpath imports of external packages. Previously, when a package like lodash was marked as external, subpath imports such as lodash/merge were still being bundled incorrectly. Now all subpaths are properly excluded. (#10588)
Fixes #10055
Improved error messages when bundling fails during deployment. (#10756)
What changed:
Fix installing external peer deps for cloud deployer (#10783)
Adds --force and --legacy-peer-deps=false flags to npm install command to ensure peer dependencies for external packages are properly installed in the mastra output directory. The --legacy-peer-deps=false flag overrides package manager settings (like pnpm's default of true) to ensure consistent behavior.
.netlify/v1/config.json file to not let Netlify bundle the functions (since Mastra already bundles its output). Also, re-enable ESM shimming. (#10405)
Add DuckDB vector store implementation (#10760)
Adds DuckDB as a vector store provider for Mastra, enabling embedded high-performance vector storage without requiring an external server.
import { DuckDBVector } from '@mastra/duckdb';
const vectorStore = new DuckDBVector({
id: 'my-store',
path: ':memory:', // or './vectors.duckdb' for persistence
});
await vectorStore.createIndex({
indexName: 'docs',
dimension: 1536,
metric: 'cosine',
});
await vectorStore.upsert({
indexName: 'docs',
vectors: [[0.1, 0.2, ...]],
metadata: [{ text: 'hello world' }],
});
const results = await vectorStore.query({
indexName: 'docs',
queryVector: [0.1, 0.2, ...],
topK: 10,
filter: { text: 'hello world' },
});
Add ElasticSearch vector store support (#10741)
New @mastra/elasticsearch package providing vector similarity search using ElasticSearch 8.x+ with dense_vector fields.
import { ElasticSearchVector } from '@mastra/elasticsearch';
const vectorDB = new ElasticSearchVector({
url: 'http://localhost:9200',
id: 'my-vectors',
});
await vectorDB.createIndex({
indexName: 'embeddings',
dimension: 1536,
metric: 'cosine',
});
await vectorDB.upsert({
indexName: 'embeddings',
vectors: [embedding],
metadata: [{ source: 'doc.pdf' }],
});
const results = await vectorDB.query({
indexName: 'embeddings',
queryVector: queryEmbedding,
topK: 10,
filter: { source: 'doc.pdf' },
});
getReasoningFromRunOutput utility function for extracting reasoning text from scorer run outputs. This enables scorers to access chain-of-thought reasoning from models like deepseek-reasoner in preprocess functions. (#10684)
Using createStep with a nested Inngest workflow now returns the workflow itself, maintaining the correct .invoke() execution flow Inngest workflows need to operate. (#10689)
Miscellaneous bug fixes and test fixes: (#10515)
Emit workflow-step-result and workflow-step-finish when step fails in inngest workflow (#10555)
Add projectName config option for LangSmith exporter (#10762)
You can now specify which LangSmith project to send traces to via the projectName config option. This overrides the LANGCHAIN_PROJECT environment variable.
new LangSmithExporter({
apiKey: process.env.LANGSMITH_API_KEY,
projectName: 'my-custom-project',
});
Fix MCPClient automatic reconnection when session becomes invalid (#10660)
When an MCP server restarts, the session ID becomes invalid causing "Bad Request: No valid session ID provided" errors. The MCPClient now automatically detects session-related errors, reconnects to the server, and retries the tool call.
This fix addresses issue #7675 where MCPClient would fail to reconnect after an MCP server went offline and came back online.
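The detect-reconnect-retry pattern can be sketched as below. The helper names are hypothetical; the real logic lives inside MCPClient:

```typescript
// Detect session-related failures by inspecting the error message.
function isSessionError(err: unknown): boolean {
  return err instanceof Error && /session id/i.test(err.message);
}

// Run a tool call; on a session error, reconnect once and retry.
async function callWithReconnect<T>(
  call: () => Promise<T>,
  reconnect: () => Promise<void>,
): Promise<T> {
  try {
    return await call();
  } catch (err) {
    if (!isSessionError(err)) throw err; // unrelated errors propagate
    await reconnect(); // re-establish the MCP session
    return await call(); // retry the original call once
  }
}
```

Limiting the retry to a single attempt avoids infinite loops when the server is genuinely unavailable.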
Populate RequestContext from options.extra for workflows and agents (#10655)
Workflows and agents exposed as MCP tools now receive all keys from options.extra directly on the RequestContext. This allows workflows and agents to access authentication information (authInfo, sessionId, requestId, etc.) via requestContext.get('key') when exposed via MCPServer.
Fix timeout parameter position in listTools call (#10609)
The MCP SDK's listTools signature is listTools(params?, options?). Timeout was incorrectly passed as params (1st arg) instead of options (2nd arg), causing timeouts to not be applied to requests.
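A generic stand-in with the same (params?, options?) argument shape illustrates why the position matters (listToolsLike is a made-up demo function, not the SDK's):

```typescript
interface RequestOptions {
  timeout?: number;
}

// Same two-argument shape as the MCP SDK's listTools(params?, options?).
function listToolsLike(
  params?: Record<string, unknown>,
  options?: RequestOptions,
): number | undefined {
  // Only the second argument carries the request timeout.
  return options?.timeout;
}

// Wrong: timeout in the 1st argument (params) is silently ignored.
const ignored = listToolsLike({ timeout: 5000 });
// Right: timeout in the 2nd argument (options) is applied.
const applied = listToolsLike(undefined, { timeout: 5000 });
```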
Add MCP Roots capability support (fixes #8660) (#10646)
This adds support for the MCP Roots capability, allowing clients to expose filesystem roots to MCP servers like @modelcontextprotocol/server-filesystem.
New Features:
- roots option to server configuration for specifying allowed directories
- roots capability when roots are configured
- roots/list requests from servers per MCP spec
- client.roots getter to access configured roots
- client.setRoots() to dynamically update roots
- client.sendRootsListChanged() to notify servers of root changes
Usage:
const client = new MCPClient({
servers: {
filesystem: {
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
roots: [
{ uri: 'file:///tmp', name: 'Temp Directory' },
{ uri: 'file:///home/user/projects', name: 'Projects' },
],
},
},
});
Before this fix, the filesystem server would log:
"Client does not support MCP Roots, using allowed directories set from server args"
After this fix, the server properly receives roots from the client:
"Updated allowed directories from MCP roots: 2 valid directories"
Add MCP progress notification support and refactor client structure (#10637)
- enableProgressTracking: true in server config and use client.progress.onUpdate(handler) to receive progress updates during long-running tool operations
- ProgressClientActions class for handling progress notifications
- actions/ directory (elicitation, prompt, resource, progress)
- types.ts file for better code organization
Schema-based working memory now uses merge semantics instead of replace semantics. (#10659)
Before: Each working memory update replaced the entire memory, causing data loss across conversation turns.
After: For schema-based working memory:
- Setting a field to null deletes it

Template-based (Markdown) working memory retains the existing replace semantics.
This fixes issue #7775 where users building profile-like schemas would lose information from previous turns.
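The merge-with-null-deletion behavior described above can be sketched as a deep merge. This is an illustration of the semantics, not Mastra's actual implementation; all names are hypothetical:

```typescript
// Sketch of merge semantics for schema-based working memory: new fields are
// merged in, and an explicit null deletes a field. Illustrative only.
type Memory = Record<string, unknown>;

function mergeWorkingMemory(existing: Memory, update: Memory): Memory {
  const result: Memory = { ...existing };
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      // Explicit null deletes the field
      delete result[key];
    } else if (
      typeof value === 'object' && !Array.isArray(value) &&
      typeof result[key] === 'object' && result[key] !== null && !Array.isArray(result[key])
    ) {
      // Nested objects are merged recursively instead of replaced
      result[key] = mergeWorkingMemory(result[key] as Memory, value as Memory);
    } else {
      result[key] = value;
    }
  }
  return result;
}

// Turn 1 stores the name; turn 2 adds a city without losing the name
const turn1 = mergeWorkingMemory({}, { profile: { name: 'Ada' } });
const turn2 = mergeWorkingMemory(turn1, { profile: { city: 'London' } });
```

Under the old replace semantics, turn 2 would have dropped the name captured in turn 1.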
feat(storage): support querying messages from multiple threads (#10663)
- threadId: string | string[] was being passed to places expecting a scalar type (now fixed)
- listMessages works across all adapters when threadId is an array
- _getIncludedMessages looks up a message's threadId by ID (since message IDs are globally unique)
- msg-idx:{messageId} index for O(1) message lookups (backwards compatible with fallback to scan for old messages, with automatic backfill)

Fix recall() to return newest messages when using lastMessages config (#10543)
When using lastMessages: N config without an explicit orderBy, the recall() function was returning the OLDEST N messages instead of the NEWEST N messages. This completely breaks conversation history for any thread that grows beyond the lastMessages limit.
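The corrected behavior amounts to: select the newest N messages, then present them oldest-to-newest. A hedged sketch (the real logic lives in the storage adapters; these shapes are illustrative):

```typescript
// Sketch of the fixed lastMessages behavior: take the NEWEST n messages,
// then return them in chronological order for the prompt. Illustrative only.
interface Msg { id: string; createdAt: number; }

function recallLastMessages(messages: Msg[], n: number): Msg[] {
  return [...messages]
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(0, n)                               // keep the newest N
    .reverse();                                // restore chronological order
}

const thread: Msg[] = [
  { id: 'm1', createdAt: 1 },
  { id: 'm2', createdAt: 2 },
  { id: 'm3', createdAt: 3 },
  { id: 'm4', createdAt: 4 },
];
const recalled = recallLastMessages(thread, 2);
```

The bug was equivalent to skipping the initial descending sort, which returns the oldest N instead.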
fix: ensure score responses match saved payloads for Mastra Stores. (#10557)
Unify transformScoreRow functions across storage adapters (#10648)
Added a unified transformScoreRow function in @mastra/core/storage that provides schema-driven row transformation for score data. This eliminates code duplication across 10 storage adapters while maintaining store-specific behavior through configurable options:
- preferredTimestampFields: Preferred source fields for timestamps (PostgreSQL, Cloudflare D1)
- convertTimestamps: Convert timestamp strings to Date objects (MSSQL, MongoDB, ClickHouse)
- nullValuePattern: Skip values matching pattern (ClickHouse's '_null_')
- fieldMappings: Map source column names to schema fields (LibSQL's additionalLLMContext)

Each store adapter now uses the unified function with appropriate options, reducing ~200 lines of duplicate transformation logic while ensuring consistent behavior across all storage backends.
Adds trace tagging support to the BrainTrust and Langfuse tracing exporters. (#10765)
Adds bidirectional integration with otel tracing via a new @mastra/otel-bridge package. (#10482)
Allow direct access to server app handle directly from Mastra instance. (#10598)
// Before: HTTP request to localhost
const response = await fetch(`http://localhost:5000/api/tools`);
// After: Direct call via app.fetch()
const app = mastra.getServerApp<Hono>();
const response = await app.fetch(new Request('http://internal/api/tools'));
- mastra.getServerApp<T>() to access the underlying Hono/Express app
- mastra.getMastraServer() and mastra.setMastraServer() for adapter access
- MastraServerBase class in @mastra/core/server for adapter implementations

Add stream data redaction to prevent sensitive information leaks in agent stream API responses. (#10705)
The stream API endpoint now automatically redacts request data from stream chunks (step-start, step-finish, finish) which could contain system prompts, tool definitions, and API keys. Redaction is enabled by default and can be disabled for debugging/internal services via streamOptions.redact.
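The redaction rule described above can be sketched as: for the affected chunk types, drop the request payload before it reaches the client. Chunk shapes and the `request` field name are assumptions for illustration, not Mastra's exact types:

```typescript
// Sketch of redacting request data from stream chunks. The chunk shape and
// `request` field name are hypothetical, used only to illustrate the rule.
interface StreamChunk {
  type: string;
  request?: unknown;
  [key: string]: unknown;
}

const REDACTED_TYPES = new Set(['step-start', 'step-finish', 'finish']);

function redactChunk(chunk: StreamChunk, redact = true): StreamChunk {
  if (!redact || !REDACTED_TYPES.has(chunk.type)) return chunk;
  // Drop request payloads (system prompts, tool definitions, API keys)
  const { request, ...rest } = chunk;
  return request === undefined ? chunk : { ...rest };
}

const chunk = { type: 'step-start', request: { apiKey: 'sk-secret' }, messageId: '1' };
const safe = redactChunk(chunk);
```

Passing `redact = false` corresponds to disabling redaction via streamOptions.redact for debugging or trusted internal services.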
fix(pg): qualify vector type to fix schema lookup (#10786)
Fix wrong hook arg type when retrieving workflow runs (#10755)
Fix discriminatedUnion schema information lost when json schema is converted to zod (#10500)
Hide time travel on map steps in Studio (#10631)
Added tracing options to workflow runs and agent generate / stream / network. You can now configure tracing options, custom request context keys, and parent trace/span IDs through a new "Tracing options" tab in the workflow/agent UI. (#10742)
Usage:
The workflow settings are now accessible via the new useTracingSettings hook and TracingSettingsProvider:
import { TracingSettingsProvider, useTracingSettings } from '@mastra/playground-ui';
// Wrap your workflow components with the provider
<TracingSettingsProvider entityId="my-workflow" entityType="workflow">
<YourWorkflowUI />
</TracingSettingsProvider>;
// Access settings in child components
const { settings, setSettings } = useTracingSettings();
// Configure tracing options
setSettings({
tracingOptions: {
metadata: { userId: '123' },
requestContextKeys: ['user.email'],
traceId: 'abc123',
},
});
Tracing options are persisted per workflow/agent in localStorage and automatically applied to all workflow/agent executions.
Add maxSize support for HTML chunking strategies (#10654)
Added support for the maxSize option in HTML chunking strategies (headers and sections), allowing users to control the maximum chunk size when chunking HTML documents. Previously, HTML chunks could be excessively large when sections contained substantial content.
Changes:
- maxSize support to headers strategy - applies RecursiveCharacterTransformer after header-based splitting
- maxSize support to sections strategy - applies RecursiveCharacterTransformer after section-based splitting
- splitHtmlByHeaders content extraction bug - changed from broken nextElementSibling to working parentNode.childNodes approach

Usage:
import { MDocument } from '@mastra/rag';
const doc = MDocument.fromHTML(htmlContent);
const chunks = await doc.chunk({
strategy: 'html',
headers: [
['h1', 'Header 1'],
['h2', 'Header 2'],
['h3', 'Header 3'],
],
maxSize: 512, // Control chunk size
overlap: 50, // Optional overlap for context
});
Results from real arXiv paper test:
Fixes #7942
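The size capping applied after header-based splitting can be sketched as a generic character splitter with overlap. This is a simplified illustration of the idea, not @mastra/rag's RecursiveCharacterTransformer:

```typescript
// Generic sketch of size-capped splitting with overlap, as applied to HTML
// chunks after header/section splitting. Not @mastra/rag's actual code.
function splitWithMaxSize(text: string, maxSize: number, overlap = 0): string[] {
  if (text.length <= maxSize) return [text];
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + maxSize));
    if (start + maxSize >= text.length) break;
    // Step forward by less than maxSize so consecutive chunks share context
    start += maxSize - overlap;
  }
  return chunks;
}

// A 1200-char section is split into chunks of at most 512 chars
const chunks = splitWithMaxSize('a'.repeat(1200), 512, 50);
```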
fix(a2a): fix streaming and memory support for A2A protocol (#10653)
Client (@mastra/client-js):
- sendStreamingMessage now properly returns a streaming response instead of attempting to parse it as JSON

Server (@mastra/server):
- contextId used as threadId for memory persistence across conversations
- resourceId resolved via params.metadata.resourceId or message.metadata.resourceId, falling back to agentId
Add Vertex AI support to GoogleVoice provider (#10661)
- vertexAI, project, and location configuration options
- isUsingVertexAI(), getProject(), getLocation()
- GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION environment variables

This makes @mastra/voice-google consistent with @mastra/voice-google-gemini-live and enables enterprise deployments using Google Cloud project-based authentication instead of API keys.
During npm create-mastra you can now optionally initialize a git repository in the newly created project. The setup wizard will prompt for this option. (#9792)
Fixed tool list empty state when there are no agents so the page renders correctly. (#10711)
Agent responses now stream live through workflows and networks, with complete execution metadata flowing to your UI.
In workflows, pipe agent streams directly through steps:
const planActivities = createStep({
execute: async ({ mastra, writer }) => {
const agent = mastra?.getAgent('weatherAgent');
const response = await agent.stream('Plan activities');
await response.fullStream.pipeTo(writer);
return { activities: await response.text };
}
});
In networks, each step now tracks properly—unique IDs, iteration counts, task info, and agent handoffs all flow through with correct sequencing. No more duplicated steps or missing metadata.
Both surface text chunks, tool calls, and results as they happen, so users see progress in real time instead of waiting for the full response.
CompositeVoice now accepts AI SDK voice models directly—use OpenAI for transcription, ElevenLabs for speech, or any combination you want.
import { CompositeVoice } from "@mastra/core/voice";
import { openai } from "@ai-sdk/openai";
import { elevenlabs } from "@ai-sdk/elevenlabs";
const voice = new CompositeVoice({
input: openai.transcription('whisper-1'),
output: elevenlabs.speech('eleven_turbo_v2'),
});
const audio = await voice.speak("Hello from AI SDK!");
const transcript = await voice.listen(audio);
Works with OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and more. AI SDK models are automatically wrapped, so you can swap providers without changing your code.
Support streaming agent text chunks from workflow-step-output
Adds support for streaming text and tool call chunks from agents running inside workflows via the workflow-step-output event. When you pipe an agent's stream into a workflow step's writer, the text chunks, tool calls, and other streaming events are automatically included in the workflow stream and converted to UI messages.
Features:
- includeTextStreamParts option to WorkflowStreamToAISDKTransformer (defaults to true)
- isMastraTextStreamChunk type guard to identify Mastra chunks with text streaming data
- Handles text-start, text-delta, text-end and tool-call, tool-result chunks
- Tests added in transformers.test.ts
- Works with workflowRoute()

Example:
const planActivities = createStep({
execute: async ({ mastra, writer }) => {
const agent = mastra?.getAgent('weatherAgent');
const response = await agent.stream('Plan activities');
await response.fullStream.pipeTo(writer);
return { activities: await response.text };
}
});
When served via workflowRoute(), the UI receives incremental text updates as the agent generates its response, providing a smooth streaming experience. (#10568)
Fix chat route to use agent ID instead of agent name for resolution. The /chat/:agentId endpoint now correctly resolves agents by their ID property (e.g., weather-agent) instead of requiring the camelCase variable name (e.g., weatherAgent). This fixes issue #10469 where URLs like /chat/weather-agent would return 404 errors. (#10565)
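The resolution change amounts to matching agents by their id property instead of only the registration key. A minimal sketch, assuming a registry keyed by variable name (shapes are illustrative, not @mastra/core internals):

```typescript
// Sketch of agent resolution by id property. The registry shape and lookup
// order are assumptions for illustration, not Mastra's exact internals.
interface AgentLike { id: string; name: string; }

function resolveAgent(agents: Record<string, AgentLike>, agentId: string): AgentLike | undefined {
  // Previously only the registration key (e.g. `weatherAgent`) matched;
  // now the agent's own id (e.g. `weather-agent`) resolves as well.
  if (agents[agentId]) return agents[agentId];
  return Object.values(agents).find(a => a.id === agentId);
}

const registry = {
  weatherAgent: { id: 'weather-agent', name: 'Weather Agent' },
};
const byId = resolveAgent(registry, 'weather-agent');
```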
Fixes propagation of custom data chunks from nested workflows in branches to the root stream when using toAISdkV5Stream with {from: 'workflow'}.
Previously, when a nested workflow within a branch used writer.custom() to write data-* chunks, those chunks were wrapped in workflow-step-output events and not extracted, causing them to be dropped from the root stream.
Changes:
- workflow-step-output chunks are now handled in transformWorkflow() to extract and propagate data-* chunks
- When a workflow-step-output chunk contains a data-* chunk in its payload.output, the transformer now extracts it and returns it directly to the root stream
Added comprehensive test coverage for nested workflows with branches and custom data propagation
This ensures that custom data chunks written via writer.custom() in nested workflows (especially those within branches) are properly propagated to the root stream, allowing consumers to receive progress updates, metrics, and other custom data from nested workflow steps. (#10447)
Fix network data step formatting in AI SDK stream transformation
Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.
Changes:
- AgentNetworkToAISDKTransformer refactored to properly maintain step state throughout the execution lifecycle
- Steps are now identified by unique IDs and updated in place rather than creating duplicates
Added proper iteration and task metadata to each step in the network execution flow
Fixed agent, workflow, and tool execution events to correctly populate step data
Updated network stream event types to include networkId, workflowId, and consistent runId tracking
Added test coverage for network custom data chunks with comprehensive validation
This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
[0.x] Make workflowRoute includeTextStreamParts option default to false (#10574)
Add support for tool-call-approval and tool-call-suspended events in chatRoute (#10205)
Backports the messageMetadata and onError support from PR #10313 to the 0.x branch, adding these features to toAISdkFormat function.
- messageMetadata parameter added to toAISdkFormat options; metadata is included in start and finish chunks when provided
- onError parameter added to toAISdkFormat options; falls back to the safeParseErrorObject utility when not provided
- safeParseErrorObject utility function added for error parsing
Updated AgentStreamToAISDKTransformer to accept and use messageMetadata and onError
Updated JSDoc documentation with parameter descriptions and usage examples
Added comprehensive test suite for messageMetadata functionality (6 test cases)
Fixed existing test file to use toAISdkFormat instead of removed toAISdkV5Stream
New tests verify:
messageMetadata is called with correct part structure
Metadata is included in start and finish chunks
Proper handling when messageMetadata is not provided or returns null/undefined
Function is called for each relevant part in the stream
Uses UIMessageStreamOptions<UIMessage>['messageMetadata'] and UIMessageStreamOptions<UIMessage>['onError'] types from AI SDK v5 for full type compatibility
Backport of: https://github.com/mastra-ai/mastra/pull/10313 (#10314)
Fixed workflow routes to properly receive request context from middleware. This aligns the behavior of workflowRoute with chatRoute, ensuring that context set in middleware is consistently forwarded to workflows.
When both middleware and request body provide a request context, the middleware value now takes precedence, and a warning is emitted to help identify potential conflicts.
Fix base64 encoded images with threads - issue #10480
Fixed "Invalid URL" error when using base64 encoded images (without data: prefix) in agent calls with threads and resources. Raw base64 strings are now automatically converted to proper data URIs before being processed.
Changes:
Updated attachments-to-parts.ts to detect and convert raw base64 strings to data URIs
Fixed MessageList image processing to handle raw base64 in two locations:
- aiV4CoreMessageToV1PromptMessage
- mastraDBMessageToAIV4UIMessage

Added comprehensive tests for base64 images, data URIs, and HTTP URLs with threads
Breaking Change: None - this is a bug fix that maintains backward compatibility while adding support for raw base64 strings. (#10483)
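The normalization described above can be sketched as a small helper that leaves data URIs and regular URLs alone and wraps raw base64 in a data URI. The default MIME type and detection heuristic are assumptions for illustration:

```typescript
// Sketch of normalizing raw base64 image strings into data URIs so URL
// parsing succeeds. MIME default and detection are illustrative assumptions.
function toDataUri(image: string, mimeType = 'image/png'): string {
  if (image.startsWith('data:') || /^https?:\/\//.test(image)) {
    return image; // already a data URI or a regular URL - leave as-is
  }
  // Otherwise treat it as raw base64 and wrap it in a proper data URI
  return `data:${mimeType};base64,${image}`;
}

const raw = 'iVBORw0KGgoAAAANSUhEUg==';
const uri = toDataUri(raw);
```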
SimpleAuth and improved CloudAuth (#10569)
Fixed OpenAI schema compatibility when using agent.generate() or agent.stream() with structuredOutput.
Optional field handling: .optional() fields are converted to .nullable() with a transform that converts null → undefined, preserving optional semantics while satisfying OpenAI's strict mode requirements
Preserves nullable fields: Intentionally .nullable() fields remain unchanged
Deep transformation: Handles .optional() fields at any nesting level (objects, arrays, unions, etc.)
JSON Schema objects: Not transformed, only Zod schemas
const agent = new Agent({
name: 'data-extractor',
model: { provider: 'openai', modelId: 'gpt-4o' },
instructions: 'Extract user information',
});
const schema = z.object({
name: z.string(),
age: z.number().optional(),
deletedAt: z.date().nullable(),
});
// Schema is automatically transformed for OpenAI compatibility
const result = await agent.generate('Extract: John, deleted yesterday', {
structuredOutput: { schema },
});
// Result: { name: 'John', age: undefined, deletedAt: null }
(#10454)
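At the JSON Schema level, the transformation amounts to making every property required while letting formerly-optional properties accept null. The sketch below illustrates that idea on a plain schema object; the actual fix operates on Zod schemas, and these shapes are simplified:

```typescript
// JSON-Schema-level sketch of OpenAI strict-mode compatibility: every
// property becomes required, and formerly-optional ones become nullable.
// Simplified illustration; the real fix transforms Zod schemas.
interface JsonSchema {
  type: string | string[];
  properties?: Record<string, JsonSchema>;
  required?: string[];
}

function makeStrictCompatible(schema: JsonSchema): JsonSchema {
  if (schema.type !== 'object' || !schema.properties) return schema;
  const required = new Set(schema.required ?? []);
  const properties: Record<string, JsonSchema> = {};
  for (const [key, prop] of Object.entries(schema.properties)) {
    properties[key] = required.has(key)
      ? prop
      // Optional field: allow null so OpenAI can still list it as required
      : { ...prop, type: Array.isArray(prop.type) ? prop.type : [prop.type, 'null'] };
  }
  // OpenAI strict mode demands that every property appear in `required`
  return { ...schema, properties, required: Object.keys(schema.properties) };
}

const strict = makeStrictCompatible({
  type: 'object',
  properties: { name: { type: 'string' }, age: { type: 'number' } },
  required: ['name'],
});
```

The null-to-undefined transform mentioned above then restores optional semantics on the way back out.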
deleteVectors, deleteFilter when upserting, updateVector filter (#10244)
Fix generateTitle model type to accept AI SDK LanguageModelV2
Updated the generateTitle.model config option to accept MastraModelConfig instead of MastraLanguageModel. This allows users to pass raw AI SDK LanguageModelV2 models (e.g., anthropic.languageModel('claude-3-5-haiku-20241022')) directly without type errors.
Previously, passing a standard LanguageModelV2 would fail because MastraLanguageModelV2 has different doGenerate/doStream return types. Now MastraModelConfig is used consistently across:
memory/types.ts - generateTitle.model config
agent.ts - genTitle, generateTitleFromUserMessage, resolveTitleGenerationConfig
agent-legacy.ts - AgentLegacyCapabilities interface (#10567)
Fix message metadata not persisting when using simple message format. Previously, custom metadata passed in messages (e.g., {role: 'user', content: 'text', metadata: {userId: '123'}}) was not being saved to the database. This occurred because the CoreMessage conversion path didn't preserve metadata fields.
Now metadata is properly preserved for all message input formats:
Simple CoreMessage format: {role, content, metadata}
Full UIMessage format: {role, content, parts, metadata}
AI SDK v5 ModelMessage format with metadata
Fixes #8556 (#10488)
feat: Composite auth implementation (#10359)
Fix requireApproval property being ignored for tools passed via toolsets, clientTools, and memoryTools parameters. The requireApproval flag now correctly propagates through all tool conversion paths, ensuring tools requiring approval will properly request user approval before execution. (#10562)
Fix Azure Foundry rate limit handling for -1 values (#10409)
Fix model headers not being passed through gateway system
Previously, custom headers specified in MastraModelConfig were not being passed through the gateway system to model providers. This affected:
OpenRouter (preventing activity tracking with HTTP-Referer and X-Title)
Custom providers using custom URLs (headers not passed to createOpenAICompatible)
Custom gateway implementations (headers not available in resolveLanguageModel)
Now headers are correctly passed through the entire gateway system:
Base MastraModelGateway interface updated to accept headers
ModelRouterLanguageModel passes headers from config to all gateways
OpenRouter receives headers for activity tracking
Custom URL providers receive headers via createOpenAICompatible
Custom gateways can access headers in their resolveLanguageModel implementation
Example usage:
// Works with OpenRouter
const agent = new Agent({
name: 'my-agent',
instructions: 'You are a helpful assistant.',
model: {
id: 'openrouter/anthropic/claude-3-5-sonnet',
headers: {
'HTTP-Referer': 'https://myapp.com',
'X-Title': 'My Application',
},
},
});
// Also works with custom providers
const customAgent = new Agent({
name: 'custom-agent',
instructions: 'You are a helpful assistant.',
model: {
id: 'custom-provider/model',
url: 'https://api.custom.com/v1',
apiKey: 'key',
headers: {
'X-Custom-Header': 'custom-value',
},
},
});
Fixes https://github.com/mastra-ai/mastra/issues/9760 (#10564)
fix(agent): persist messages before tool suspension
Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.
Backend changes (@mastra/core):
Add assistant messages to messageList immediately after LLM execution
Flush messages synchronously before suspension to persist state
Create thread if it doesn't exist before flushing
Add metadata helpers to persist and remove tool approval state
Pass saveQueueManager and memory context through workflow for immediate persistence
Frontend changes (@mastra/react):
Extract runId from pending approvals to enable resumption after refresh
Convert pendingToolApprovals (DB format) to requireApprovalMetadata (runtime format)
Handle both dynamic-tool and tool-{NAME} part types for approval state
Change runId from hardcoded agentId to unique uuid()
UI changes (@mastra/playground-ui):
Handle tool calls awaiting approval in message initialization
Convert approval metadata format when loading initial messages
Fixes #9745, #9906 (#10369)
Fix race condition in parallel tool stream writes
Introduces a write queue to ToolStream to serialize access to the underlying stream, preventing writer locked errors (#10463)
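A write queue of this kind can be sketched as a promise chain: each write waits for the previous one, so parallel callers never touch the stream concurrently. Illustrative only, not ToolStream's actual code:

```typescript
// Sketch of serializing writes with a promise chain so parallel tool calls
// never hit a locked stream writer. Illustrative, not ToolStream's code.
class WriteQueue {
  private tail: Promise<void> = Promise.resolve();

  // Each task starts only after the previous one settles (even on failure).
  write(task: () => Promise<void>): Promise<void> {
    this.tail = this.tail.then(task, task);
    return this.tail;
  }
}

const order: number[] = [];
const queue = new WriteQueue();
const delay = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

// Fired in parallel, but executed strictly in submission order
const writes = Promise.all([
  queue.write(async () => { await delay(30); order.push(1); }),
  queue.write(async () => { await delay(10); order.push(2); }),
  queue.write(async () => { order.push(3); }),
]);
```

Without the queue, the second write (with the shorter delay) would finish first and the order would interleave.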
Remove unneeded console warning when flushing messages and no threadId or saveQueueManager is found. (#10369)
Fixes GPT-5 reasoning which was failing on subsequent tool calls with the error:
Item 'fc_xxx' of type 'function_call' was provided without its required 'reasoning' item: 'rs_xxx'
(#10489)
Add optional includeRawChunks parameter to agent execution options, allowing users to include raw chunks in stream output where supported by the model provider. (#10456)
When mastra dev runs, multiple processes can write to provider-registry.json concurrently (auto-refresh, syncGateways, syncGlobalCacheToLocal). This causes file corruption where the end of the JSON appears twice, making it unparseable.
The fix uses atomic writes via the write-to-temp-then-rename pattern. Instead of:
fs.writeFileSync(filePath, content, 'utf-8');
We now do:
const tempPath = `${filePath}.${process.pid}.${Date.now()}.${randomSuffix}.tmp`;
fs.writeFileSync(tempPath, content, 'utf-8');
fs.renameSync(tempPath, filePath); // atomic on POSIX
fs.rename() is atomic on POSIX systems when both paths are on the same filesystem, so concurrent writes will each complete fully rather than interleaving. (#10529)
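The pattern above can be packaged as a small helper. This sketch mirrors the described fix using only Node built-ins (the exact temp-name format in Mastra's code may differ):

```typescript
import { writeFileSync, renameSync, readFileSync, mkdtempSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { randomBytes } from 'node:crypto';

// Write-to-temp-then-rename helper, as described above. On POSIX, rename()
// is atomic when source and destination are on the same filesystem, so
// readers see either the old file or the new one, never a partial write.
function atomicWriteSync(filePath: string, content: string): void {
  const tempPath = `${filePath}.${process.pid}.${Date.now()}.${randomBytes(4).toString('hex')}.tmp`;
  writeFileSync(tempPath, content, 'utf-8');
  renameSync(tempPath, filePath);
}

// Demo against a throwaway temp directory
const dir = mkdtempSync(join(tmpdir(), 'registry-'));
const target = join(dir, 'provider-registry.json');
atomicWriteSync(target, JSON.stringify({ providers: [] }));
```

The unique pid/timestamp/random suffix ensures concurrent processes never write to the same temp file before renaming.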
Ensures that data chunks written via writer.custom() always bubble up directly to the top-level stream, even when nested in sub-agents. This allows tools to emit custom progress updates, metrics, and other data that can be consumed at any level of the agent hierarchy.
Added bubbling logic in sub-agent execution: When sub-agents execute, data chunks (chunks with type starting with data-) are detected and written via writer.custom() instead of writer.write(), ensuring they bubble up directly without being wrapped in tool-output chunks.
Added comprehensive tests:
Test for writer.custom() with direct tool execution
Test for writer.custom() with sub-agent tools (nested execution)
Test for mixed usage of writer.write() and writer.custom() in the same tool
When a sub-agent's tool uses writer.custom() to write data chunks, those chunks appear in the sub-agent's stream. The parent agent's execution logic now detects these chunks and uses writer.custom() to bubble them up directly, preserving their structure and making them accessible at the top level.
This ensures that:
Data chunks bubble up correctly through nested agent hierarchies
Regular chunks continue to be wrapped in tool-output as expected (#10309)
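The routing rule above can be sketched in a few lines: chunks whose type starts with data- bubble up via the custom writer, everything else stays wrapped. Shapes and writer names are illustrative, not @mastra/core's exact API:

```typescript
// Sketch of the bubbling rule for sub-agent chunks: `data-*` chunks are
// forwarded unwrapped via writer.custom(), all others go through
// writer.write() and get wrapped as tool-output. Shapes are hypothetical.
interface Chunk { type: string; payload?: unknown; }

interface Writer {
  write(chunk: Chunk): void;   // wraps the chunk in tool-output
  custom(chunk: Chunk): void;  // bubbles directly to the top-level stream
}

function forwardSubAgentChunk(chunk: Chunk, writer: Writer): void {
  if (chunk.type.startsWith('data-')) {
    writer.custom(chunk); // progress/metrics chunks bubble up unwrapped
  } else {
    writer.write(chunk);  // regular chunks stay wrapped as tool-output
  }
}

const bubbled: string[] = [];
const wrapped: string[] = [];
const writer: Writer = {
  write: c => wrapped.push(c.type),
  custom: c => bubbled.push(c.type),
};
forwardSubAgentChunk({ type: 'data-progress', payload: { pct: 50 } }, writer);
forwardSubAgentChunk({ type: 'text-delta' }, writer);
```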
Adds ability to create custom MastraModelGateway's that can be added to the Mastra class instance under the gateways property. Giving you typescript autocompletion in any model picker string.
import { MastraModelGateway, type ProviderConfig } from '@mastra/core/llm';
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import type { LanguageModelV2 } from '@ai-sdk/provider';
class MyCustomGateway extends MastraModelGateway {
readonly id = 'custom';
readonly name = 'My Custom Gateway';
async fetchProviders(): Promise<Record<string, ProviderConfig>> {
return {
'my-provider': {
name: 'My Provider',
models: ['model-1', 'model-2'],
apiKeyEnvVar: 'MY_API_KEY',
gateway: this.id,
},
};
}
buildUrl(modelId: string, envVars?: Record<string, string>): string {
return 'https://api.my-provider.com/v1';
}
async getApiKey(modelId: string): Promise<string> {
const apiKey = process.env.MY_API_KEY;
if (!apiKey) throw new Error('MY_API_KEY not set');
return apiKey;
}
async resolveLanguageModel({
modelId,
providerId,
apiKey,
}: {
modelId: string;
providerId: string;
apiKey: string;
}): Promise<LanguageModelV2> {
const baseURL = this.buildUrl(`${providerId}/${modelId}`);
return createOpenAICompatible({
name: providerId,
apiKey,
baseURL,
}).chatModel(modelId);
}
}
new Mastra({
gateways: {
myGateway: new MyCustomGateway(),
},
});
(#10535)
Support AI SDK voice models
Mastra now supports AI SDK's transcription and speech models directly in CompositeVoice, enabling seamless integration with a wide range of voice providers through the AI SDK ecosystem. This allows you to use models from OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and many more for both speech-to-text (transcription) and text-to-speech capabilities.
AI SDK models are automatically wrapped when passed to CompositeVoice, so you can mix and match AI SDK models with existing Mastra voice providers for maximum flexibility.
import { CompositeVoice } from "@mastra/core/voice";
import { openai } from "@ai-sdk/openai";
import { elevenlabs } from "@ai-sdk/elevenlabs";
// Use AI SDK models directly with CompositeVoice
const voice = new CompositeVoice({
input: openai.transcription('whisper-1'), // AI SDK transcription model
output: elevenlabs.speech('eleven_turbo_v2'), // AI SDK speech model
});
// Convert text to speech
const audioStream = await voice.speak("Hello from AI SDK!");
// Convert speech to text
const transcript = await voice.listen(audioStream);
console.log(transcript);
Fixes #9947 (#10558)
Fix network data step formatting in AI SDK stream transformation
Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.
Changes:
Updated AgentNetworkToAISDKTransformer to properly maintain step state throughout the execution lifecycle
Steps are now identified by unique IDs and updated in place rather than creating duplicates
Added proper iteration and task metadata to each step in the network execution flow
Fixed agent, workflow, and tool execution events to correctly populate step data
Updated network stream event types to include networkId, workflowId, and consistent runId tracking
Added test coverage for network custom data chunks with comprehensive validation
This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata. (#10432)
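The core of the fix is the "update in place by unique ID" pattern. A minimal sketch of that pattern (the `StepTracker` name and `NetworkStep` shape are illustrative, not Mastra APIs):

```typescript
// Hypothetical sketch: steps are keyed by a unique id and merged in place,
// instead of being appended again on every stream event.
type NetworkStep = {
  id: string;
  iteration: number;
  task?: string;
  status: 'running' | 'finished';
};

class StepTracker {
  private steps = new Map<string, NetworkStep>();

  // Create the step on first sight, merge later events into the same entry.
  upsert(update: NetworkStep): void {
    const existing = this.steps.get(update.id);
    this.steps.set(update.id, existing ? { ...existing, ...update } : update);
  }

  list(): NetworkStep[] {
    return Array.from(this.steps.values());
  }
}

const tracker = new StepTracker();
tracker.upsert({ id: 'step-1', iteration: 0, status: 'running' });
tracker.upsert({ id: 'step-1', iteration: 0, status: 'finished', task: 'route' });
// One step, updated in place — not two duplicates.
```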
Fix generating provider-registry.json (#10535)
Fix message-list conversion issues when persisting messages before tool suspension: filter internal metadata fields (__originalContent) from UI messages, keep reasoning field empty for consistent cache keys during message deduplication, and only include providerMetadata on parts when defined. (#10552)
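Two of the cleanups above can be sketched in isolation. Assuming a simplified part shape (`sanitizeMetadata` and `toPart` are illustrative names, not Mastra APIs):

```typescript
type UIMessagePart = {
  type: string;
  text: string;
  providerMetadata?: Record<string, unknown>;
};

// Drop internal fields such as __originalContent before persisting.
function sanitizeMetadata(
  metadata: Record<string, unknown>,
): Record<string, unknown> {
  const { __originalContent, ...rest } = metadata;
  return rest;
}

// Only attach providerMetadata when it is actually defined, so the key
// is absent (not undefined) on parts that have none.
function toPart(
  text: string,
  providerMetadata?: Record<string, unknown>,
): UIMessagePart {
  return {
    type: 'text',
    text,
    ...(providerMetadata !== undefined ? { providerMetadata } : {}),
  };
}
```

Keeping the key absent rather than set to `undefined` matters for deduplication: two otherwise-equal parts serialize to the same cache key.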
Fix agent.generate() to use model's doGenerate method instead of doStream
When calling agent.generate(), the model's doGenerate method is now correctly invoked instead of always using doStream. This aligns the non-streaming generation path with the intended behavior where providers can implement optimized non-streaming responses. (#10572)
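A minimal sketch of the dispatch change, assuming a model interface with both methods (names mirror the AI SDK shape but are simplified here):

```typescript
interface ModelLike {
  doGenerate(prompt: string): Promise<string>;
  doStream(prompt: string): AsyncIterable<string>;
}

// After the fix, the non-streaming path calls doGenerate directly instead of
// draining doStream and concatenating chunks, letting providers use any
// optimized non-streaming implementation they have.
async function generateText(model: ModelLike, prompt: string): Promise<string> {
  return model.doGenerate(prompt);
}
```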
fix(agent): persist messages before tool suspension
Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.
Backend changes (@mastra/core):
Add assistant messages to messageList immediately after LLM execution
Flush messages synchronously before suspension to persist state
Create thread if it doesn't exist before flushing
Add metadata helpers to persist and remove tool approval state
Pass saveQueueManager and memory context through workflow for immediate persistence
Frontend changes (@mastra/react):
Extract runId from pending approvals to enable resumption after refresh
Convert pendingToolApprovals (DB format) to requireApprovalMetadata (runtime format)
Handle both dynamic-tool and tool-{NAME} part types for approval state
Change runId from hardcoded agentId to unique uuid()
UI changes (@mastra/playground-ui):
Handle tool calls awaiting approval in message initialization
Convert approval metadata format when loading initial messages
Fixes #9745, #9906 (#10369)
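The backend changes above boil down to an ordering guarantee. A hedged sketch of that ordering (the dependency names are illustrative, not the actual @mastra/core internals):

```typescript
// Persist everything before handing control back for approval:
// 1. ensure the thread exists, 2. flush pending messages, 3. only then suspend.
async function persistThenSuspend(deps: {
  ensureThread(): Promise<void>;
  flushMessages(): Promise<void>;
  suspend(): Promise<void>;
}): Promise<void> {
  await deps.ensureThread();   // create thread if it doesn't exist
  await deps.flushMessages();  // flush synchronously before suspension
  await deps.suspend();        // conversation history is now safe across refresh
}
```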
Configurable resourceId in react useChat (#10561)
Fixed OpenAI schema compatibility when using agent.generate() or agent.stream() with structuredOutput.
Optional field handling: .optional() fields are converted to .nullable() with a transform that converts null → undefined, preserving optional semantics while satisfying OpenAI's strict mode requirements
Preserves nullable fields: Intentionally .nullable() fields remain unchanged
Deep transformation: Handles .optional() fields at any nesting level (objects, arrays, unions, etc.)
JSON Schema objects: left untouched; only Zod schemas are transformed
import { Agent } from '@mastra/core/agent';
import { z } from 'zod';
const agent = new Agent({
name: 'data-extractor',
model: { provider: 'openai', modelId: 'gpt-4o' },
instructions: 'Extract user information',
});
const schema = z.object({
name: z.string(),
age: z.number().optional(),
deletedAt: z.date().nullable(),
});
// Schema is automatically transformed for OpenAI compatibility
const result = await agent.generate('Extract: John, deleted yesterday', {
structuredOutput: { schema },
});
// Result: { name: 'John', age: undefined, deletedAt: null }
(#10454)