RAG ingestion and query operations are now visible in Mastra tracing with new span types (e.g., RAG_INGESTION, RAG_EMBEDDING, RAG_VECTOR_OPERATION, RAG_ACTION, GRAPH_ACTION) plus helpers like startRagIngestion() / withRagIngestion(). Instrumentation is opt-in via an observabilityContext, and @mastra/rag automatically threads context from agent TOOL_CALL spans into vector and graph tools.
@mastra/observability CloudExporter can now batch and upload logs, metrics, scores, and feedback in addition to tracing spans, enabling a single exporter path to Mastra Cloud for all signals. This also changes the endpoint configuration to use a base collector URL and derive publish paths automatically.
excludeSpanTypes and spanFilter were added to ObservabilityInstanceConfig in both @mastra/core and @mastra/observability, allowing you to drop entire span categories (e.g., MODEL_CHUNK) or apply predicate-based filtering before export—useful for pay-per-span backends.
@mastra/core MessageList can now accept AI SDK v6 UI/model messages and project stored messages via messageList.get.all.aiV6.ui(), supporting v6 approval request/response flows. @mastra/ai-sdk adds toAISdkMessages() to load stored Mastra messages into AI SDK v5 or v6 chat UIs.
Observability log correlation is fixed so logs inside agent runs carry the active span correlation fields (restoring trace↔log linking), and deepClean() now applies to all signals and better preserves Map/Set/Error detail. MCP tool discovery now retries after reconnectable errors and the MCP server returns spec-correct 404s for stale sessions; memory recall gains more precise browsing (partType, toolName, threadId: "current", anchor paging), and message parts now include createdAt timestamps for accurate part-level timing.
Breaking: pass a base endpoint URL (publish paths are derived automatically) when using CloudExporter for Mastra Cloud uploads.

Added excludeSpanTypes and spanFilter options to ObservabilityInstanceConfig for selectively filtering spans before export. Use excludeSpanTypes to drop entire categories of spans by type (e.g., MODEL_CHUNK, MODEL_STEP) or spanFilter for fine-grained predicate-based filtering by attributes, metadata, entity, or any combination. Both options help reduce noise and costs in observability platforms that charge per-span. (#15131)
excludeSpanTypes example:
excludeSpanTypes: [SpanType.MODEL_CHUNK, SpanType.MODEL_STEP, SpanType.WORKFLOW_SLEEP];
spanFilter example:
spanFilter: span => {
if (span.type === SpanType.MODEL_CHUNK) return false;
if (span.type === SpanType.TOOL_CALL && span.attributes?.success) return false;
return true;
};
Add RAG observability (#10898) (#15137)
Surfaces RAG ingestion and query operations in Mastra's AI tracing.
New span types in @mastra/core/observability:
- RAG_INGESTION (root) — wraps an ingestion pipeline run
- RAG_EMBEDDING — embedding call (used by ingestion and query)
- RAG_VECTOR_OPERATION — vector store I/O (query/upsert/delete/fetch)
- RAG_ACTION — chunk / extract_metadata / rerank
- GRAPH_ACTION — non-RAG graph build / traverse / update / prune

New helpers exported from @mastra/core/observability:
- startRagIngestion(opts) — manual: returns { span, observabilityContext }
- withRagIngestion(opts, fn) — scoped: runs fn(observabilityContext), attaches the return value as the span's output, routes thrown errors to span.error(...)

Wired in @mastra/rag:
- vectorQuerySearch emits RAG_EMBEDDING (mode: query) and RAG_VECTOR_OPERATION (operation: query)
- rerank / rerankWithScorer emit RAG_ACTION (action: rerank)
- MDocument.chunk emits RAG_ACTION (action: chunk) and RAG_ACTION (action: extract_metadata)
- createGraphRAGTool emits GRAPH_ACTION (action: build / traverse)
- createVectorQueryTool and createGraphRAGTool thread observabilityContext from the agent's TOOL_CALL span automatically

All new instrumentation is opt-in: functions accept an optional observabilityContext and no-op when absent, so existing callers are unaffected.
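The scoped helper follows a common pattern: run a callback with a context, attach its return value as the span's output, and route thrown errors to the span. A minimal self-contained sketch of that pattern (simplified shapes, not Mastra's actual API; the real helper also creates and exports real tracing spans):

```typescript
// Simplified span: just enough state to show what the scoped helper records.
type Span = { output?: unknown; error?: unknown; ended: boolean };

// withX-style scoped helper: the callback's return value becomes the span
// output; a thrown error is recorded on the span and re-thrown; the span is
// always ended, even on error.
function withScopedSpan<T>(fn: (ctx: { span: Span }) => T): T {
  const span: Span = { ended: false };
  try {
    const result = fn({ span });
    span.output = result;
    return result;
  } catch (err) {
    span.error = err;
    throw err;
  } finally {
    span.ended = true;
  }
}

// The helper is transparent to the caller: it returns whatever fn returns.
const value = withScopedSpan(() => ({ chunks: 3 }));
```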
Update provider registry and model documentation with latest models and providers (8db7663)
Added AI SDK v6 UI message support to MessageList in @mastra/core. (#14592)
MessageList can now accept AI SDK v6 UI and model messages in add(...), and project stored messages with messageList.get.all.aiV6.ui(). This adds first-class handling for v6 approval request and response message flows.
Fix observability log correlation: logs emitted from inside an agent run were being persisted with entityId, runId, traceId, and the other correlation fields set to null, breaking trace ↔ log linking in Mastra Studio and downstream observability tools. Logs now correctly carry the active span's correlation context end to end. (#15148)
Added createdAt timestamps to message parts in message history. (#15121)
Message parts now keep their own creation timestamps so downstream code can preserve part-level timing instead of relying only on the parent message timestamp.
After:
{ type: 'text', text: 'hello', createdAt: 1712534400000 }
Added toAISdkMessages() for loading stored Mastra messages into AI SDK v5 or v6 chat UIs. (#14592)
Use the default v5 behavior or pass { version: 'v6' } when your app is typed against AI SDK v6 useChat() message types.
Fixed Datadog LLM Observability span mapping:
- Per-call model spans are now reported as llm spans in Datadog (previously the per-call spans were reported as task, so Datadog's "Model Calls" count was wrong and per-call inputs/outputs were not rendered as messages).
- The wrapping generation span is now reported as a workflow span instead of llm, so it no longer looks like an extra LLM call.
- Token usage is attributed to the per-call llm spans, so Datadog no longer double-counts tokens against the wrapper.
- Per-call llm spans inherit modelName and modelProvider from their parent generation, so the model is still attached in the Datadog UI.

Improved MCP tool discovery to retry once after reconnectable connection errors like Connection closed during tools/list. (#15141)
MCPClient.listToolsets(), listToolsetsWithErrors(), and listTools() now attempt a reconnect before treating transient discovery failures as missing tools.
Fixed MCP server to return HTTP 404 (instead of 400) when a client sends a stale or unknown session ID. Per the MCP spec, this tells clients to re-initialize with a new session, which fixes broken tool calls after server redeploys. (#15160)
Updated the recall tool to support more precise message browsing for agents. (#15116)
Agents using recall can now pass partType and toolName to narrow message results to specific parts, such as tool calls or tool results for one tool. This change also adds threadId: "current" support across recall modes and anchor: "start" | "end" for no-cursor message paging, making it easier to inspect recent thread activity and past tool usage.
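The narrowing described above can be pictured as a filter over message parts. The sketch below is illustrative only (local types and a hypothetical filterParts helper, not the actual recall implementation):

```typescript
// Simplified message-part shapes for illustration.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "tool-call"; toolName: string; args: unknown }
  | { type: "tool-result"; toolName: string; result: unknown };

// Hypothetical narrowing: keep only parts matching partType and, when given,
// toolName (only tool parts carry a toolName).
function filterParts(
  parts: MessagePart[],
  opts: { partType?: MessagePart["type"]; toolName?: string },
): MessagePart[] {
  return parts.filter(p => {
    if (opts.partType && p.type !== opts.partType) return false;
    if (opts.toolName && "toolName" in p && p.toolName !== opts.toolName) return false;
    return true;
  });
}

const parts: MessagePart[] = [
  { type: "text", text: "hello" },
  { type: "tool-call", toolName: "search", args: { q: "docs" } },
  { type: "tool-result", toolName: "search", result: { hits: 3 } },
];

// Narrow to tool results produced by the "search" tool.
const results = filterParts(parts, { partType: "tool-result", toolName: "search" });
```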
Fixed reflection threshold not respecting per-record overrides set via the PATCH API. Previously, lowering the reflection threshold for a specific record had no effect on the actual reflection trigger — only the default 40k threshold was used. Now per-record overrides are correctly applied in both sync and async reflection paths. (#15170)
Improved observational memory formatting to use part timestamps when rendering dates and times. (#15121)
Observer history now follows part-level timing more closely, so the rendered memory context is more accurate when messages contain parts created at different times.
Fixed message history doubling when using Observational Memory with the Mastra gateway. The local ObservationalMemoryProcessor now detects when the agent's model is routed through the Mastra gateway and skips its input/output processing, since the gateway handles OM server-side. (#15161)
Added CloudExporter support for Mastra Observability logs, metrics, scores, and feedback. (#15124)
CloudExporter now batches and uploads all Mastra Observability signals to Mastra Cloud, not just tracing spans.
This includes a breaking change to the CloudExporter endpoint format. We now pass a base endpoint URL and let the exporter derive the standard publish paths automatically.
import { CloudExporter, Observability } from '@mastra/observability';
const observability = new Observability({
configs: {
default: {
serviceName: 'my-app',
exporters: [
new CloudExporter({
endpoint: 'https://collector.example.com',
}),
],
},
},
});
// Traces, logs, metrics, scores, and feedback now all publish through CloudExporter.
After updating the endpoint config, the exporter continues to work for traces and now also publishes structured logs, auto-extracted metrics, scores, and feedback records.
ObservabilityBus now honors per-instance serializationOptions (maxStringLength, maxDepth, maxArrayLength, maxObjectKeys) when deep-cleaning log/metric/score/feedback payloads, matching the behavior of tracing spans. Previously these signals always used the built-in defaults regardless of user configuration. (#15138)
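For reference, a config sketch showing where these serialization bounds live (field names as described above; the surrounding config shape is abbreviated):

```typescript
// Per-instance serialization limits, now applied to logs, metrics, scores,
// and feedback payloads as well as tracing spans.
const config = {
  serviceName: "my-app",
  serializationOptions: {
    maxStringLength: 2_000, // truncate long strings
    maxDepth: 6,            // cap nested object depth
    maxArrayLength: 100,    // cap array entries kept
    maxObjectKeys: 100,     // cap object keys kept
  },
};
```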
Apply deepClean() to all observability signals (logs, metrics, scores, feedback) before fanning out to exporters and bridges. Previously only tracing spans were deep-cleaned at construction time, leaving free-form payload fields on other signals (e.g. log.data, log.metadata, metric.metadata, metric.costContext.costMetadata, score.metadata, feedback.metadata) susceptible to circular references, oversized strings, and other non-serializable values. Sanitization now happens centrally in ObservabilityBus.emit() so every signal leaving the bus is bounded and JSON-safe. (#15135)
deepClean() now preserves data for Map, Set, and richer Error objects. Previously Maps and Sets were serialized as empty {} (entries silently dropped) and Errors only kept name/message. Maps are now converted to plain objects of entries, Sets to arrays (both respecting maxObjectKeys/maxArrayLength and cycle detection), and Errors additionally preserve stack and recursively cleaned cause. (#15136)
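A minimal sketch of the described conversions (not Mastra's actual deepClean; cycle detection and the maxObjectKeys/maxArrayLength limits are omitted here for brevity):

```typescript
// Maps become plain objects of entries, Sets become arrays, and Errors keep
// name/message/stack plus a recursively cleaned cause.
function clean(value: unknown): unknown {
  if (value instanceof Map) {
    const obj: Record<string, unknown> = {};
    for (const [k, v] of value) obj[String(k)] = clean(v);
    return obj;
  }
  if (value instanceof Set) return [...value].map(clean);
  if (value instanceof Error) {
    const cause = (value as { cause?: unknown }).cause;
    return {
      name: value.name,
      message: value.message,
      stack: value.stack,
      cause: cause === undefined ? undefined : clean(cause),
    };
  }
  if (Array.isArray(value)) return value.map(clean);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(Object.entries(value).map(([k, v]) => [k, clean(v)]));
  }
  return value;
}

// Previously this would have serialized as {} with the entries dropped.
const cleaned = clean(new Map([["tags", new Set(["a", "b"])]]));
```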
Search input can now be collapsed into a compact icon button with tooltip and auto-focuses when expanded (#15130)
Added DataKeysAndValues component — a compound component for displaying key-value pairs in a grid layout with support for single or two-column modes and section headers (#15126)
Added DateTimeRangePicker component — a date range selector with preset options (last 24h, 7d, 30d, etc.) and a custom range mode with dual calendar and time pickers (#15128)
Added DataCodeSection component — a read-only code viewer with JSON syntax highlighting, search, multiline toggle, and an expandable fullscreen dialog (#15125)
Added DataPanel compound component — a container for detail panels with header, navigation, close button, loading, and empty states (#15127)
New inline Traces page replacing the old dialog-based Observability page. Trace, span, and score details now open in stacked side panels instead of full-screen dialogs. URL deep-linking supports traceId, spanId, tab, and scoreId params. Includes new TracesDataList, DataList.Pagination, DataList.Subheader components, and Evaluate Trace / Save as Dataset Item actions. (#15139)
Fixed publishing older agent versions (#15154)
Fixed agent editor to allow publishing older read-only versions. Previously, the Publish button was disabled when viewing a previous version. Now a "Publish This Version" button appears, enabling users to set any older version as the published version.
Fixed Publish button being clickable without a saved draft
The Publish button is now disabled until a draft version is saved. Previously, making edits would enable the Publish button even without a saved draft, which caused an error when clicked.
Eliminated spurious 404 error logs for code-only agents
The agent versions endpoint now checks both code-registered and stored agents before returning 404, and the frontend conditionally fetches stored agent details only when versions exist. This prevents noisy error logs when navigating to the editor for agents that haven't been published yet.
Changed editor sections to be collapsed by default
The System Prompt, Tools, and Variables sections in the agent editor are now collapsed by default when navigating to the editor page.
Fixed the Responses API to use the agent default model when create requests omit model. (#15140)
The following packages were updated with dependency changes only:
@mastra/memory now lets you route observer and reflector calls to different models based on input size using ModelByInputTokens. Short inputs can go to a fast, cheap model while longer ones get sent to a more capable one -- all configured declaratively with token thresholds. Tracing shows which model was selected and why.
@mastra/mongodb now stores versioned datasets with full item history and time-travel queries, plus experiment results and CRUD. If you're already using MongoDBStore, this works automatically with no extra setup.
New @mastra/auth-okta package brings SSO authentication and role-based access control via Okta. Map Okta groups to Mastra permissions, verify JWTs against Okta's JWKS endpoint, and manage sessions -- or pair Okta RBAC with a different auth provider like Auth0 or Clerk.
Added dataset-agent association and experiment status tracking for the Evaluate workflow. (#14470)
- Added targetType and targetIds fields to datasets, enabling association with agents, scorers, or workflows. Datasets can now be linked to multiple entities.
- Added a status field to experiment results ('needs-review', 'reviewed', 'complete') for the review queue workflow.

Added agent version support for experiments. When triggering an experiment, you can now pass an agentVersion parameter to pin which agent version to use. The agent version is stored with the experiment and returned in experiment responses. (#14562)
const client = new MastraClient();
await client.triggerDatasetExperiment({
datasetId: "my-dataset",
targetType: "agent",
targetId: "my-agent",
version: 3, // pin to dataset version 3
agentVersion: "ver_abc123" // pin to a specific agent version
});
Added tool suspension handling to the Harness. (#14611)
When a tool calls suspend() during execution, the harness now emits a tool_suspended event, reports agent_end with reason 'suspended', and exposes respondToToolSuspension() to resume execution with user-provided data.
harness.subscribe((event) => {
if (event.type === "tool_suspended") {
// event.toolName, event.suspendPayload, event.resumeSchema
}
});
// Resume after collecting user input
await harness.respondToToolSuspension({ resumeData: { confirmed: true } });
Added agentId to the agent tool execution context. Tools executed by an agent can now access context.agent.agentId to identify which agent is calling them. This enables tools to look up agent metadata, share workspace configuration with sub-agents, or customize behavior per agent. (#14502)
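The per-agent customization mentioned above can be sketched as follows (the context shape here is simplified for illustration; real Mastra tool signatures carry more fields):

```typescript
// Simplified execution context: only the piece this example uses.
type ToolContext = { agent?: { agentId: string } };

// A tool helper that picks a workspace directory based on the calling agent,
// falling back to a shared default when no agent context is present.
function workspaceFor(context: ToolContext): string {
  return context.agent ? `/workspaces/${context.agent.agentId}` : "/workspaces/default";
}

workspaceFor({ agent: { agentId: "support-agent" } }); // per-agent workspace
workspaceFor({});                                      // shared default
```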
Improved observability metrics and logs storage support. (#14607)
Add optional ?path= query param to workspace skill routes for disambiguating same-named skills. (#14430)
Skill routes continue to use :skillName in the URL path (no breaking change). When two skills share the same name (e.g. from different directories), pass the optional ?path= query parameter to select the exact skill:
GET /workspaces/:workspaceId/skills/:skillName?path=skills/brand-guidelines
SkillMetadata now includes a path field, and the list() method returns all same-named skills for disambiguation. The client SDK's getSkill() accepts an optional skillPath parameter for disambiguation.
Added ModelByInputTokens in @mastra/memory for token-threshold-based model selection in Observational Memory. (#14614)
When configured, OM automatically selects different observer or reflector models based on the actual input token count at the time the OM call runs.
Example usage:
import { Memory, ModelByInputTokens } from "@mastra/memory";
const memory = new Memory({
options: {
observationalMemory: {
model: new ModelByInputTokens({
upTo: {
10_000: "google/gemini-2.5-flash",
40_000: "openai/gpt-4o",
1_000_000: "openai/gpt-4.5"
}
})
}
}
});
The upTo keys are inclusive upper bounds. OM resolves the matching tier directly at the observer or reflector call site. If the input exceeds the largest configured threshold, OM throws an error.
Improved Observational Memory tracing so traces show the observer and reflector spans and make it easier to see which resolved model was used at runtime.
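The inclusive-upper-bound resolution described above can be sketched as a small lookup (illustrative only, not the library's implementation; resolveModel is a hypothetical name):

```typescript
// Walk thresholds in ascending order and return the first tier whose
// inclusive upper bound covers the input; throw when nothing does.
function resolveModel(upTo: Record<number, string>, inputTokens: number): string {
  const thresholds = Object.keys(upTo)
    .map(Number)
    .sort((a, b) => a - b);
  for (const t of thresholds) {
    if (inputTokens <= t) return upTo[t];
  }
  throw new Error(`Input of ${inputTokens} tokens exceeds the largest configured threshold`);
}

const tiers = { 10_000: "google/gemini-2.5-flash", 40_000: "openai/gpt-4o" };
resolveModel(tiers, 9_500);  // small inputs resolve to the fast model
resolveModel(tiers, 10_000); // bounds are inclusive
resolveModel(tiers, 25_000); // larger inputs fall through to the next tier
```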
Update provider registry and model documentation with latest models and providers (68ed4e9)
Fixed Harness.destroy() to properly clean up heartbeats and workspace on teardown. (#14568)
Fixed null detection in tool input validation to check actual values at failing paths instead of relying on error message string matching. This ensures null values from LLMs are correctly handled even when validators produce error messages that don't contain the word "null" (e.g., "must be string"). Fixes #14476. (#14496)
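The idea behind the fix can be sketched as checking the actual value at the validator's failing path instead of string-matching the error message (helper names here are illustrative, not the actual implementation):

```typescript
// Walk a path like ["query", "filter"] into the raw input; missing segments
// resolve to undefined rather than throwing.
function valueAtPath(input: unknown, path: (string | number)[]): unknown {
  return path.reduce<unknown>(
    (acc, key) => (acc != null ? (acc as Record<string | number, unknown>)[key] : undefined),
    input,
  );
}

// Robust null detection: inspect the value itself, so a message like
// "must be string" (with no mention of null) is still handled correctly.
function isNullAtFailingPath(input: unknown, path: (string | number)[]): boolean {
  return valueAtPath(input, path) === null;
}

const toolInput = { query: { filter: null } };
isNullAtFailingPath(toolInput, ["query", "filter"]); // detects the null directly
```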
Fixed missing tool lists in agent traces for streaming runs. Exporters like Datadog LLM Observability now receive the tools available to the agent. (#14550)
Fix consecutive tool-only loop iterations being merged into a single assistant message block. When the agentic loop runs multiple iterations that each produce only tool calls, the LLM would misinterpret them as parallel calls from a single turn. A step-start boundary is now inserted between iterations to ensure they are treated as sequential steps. (#14652)
Improved custom OpenAI-compatible model configuration guidance in the models docs. (#14594)
Added client/server body schemas for feedback and scores that omit the timestamp field, allowing it to be set server-side (#14470)
Workspace skills now surface all same-named skills for disambiguation. (#14430)
When multiple skills share the same name (e.g., a local brand-guidelines skill and one from node_modules), list() now returns all of them instead of only the tie-break winner. This lets agents and UIs see every available skill, along with its path and source type.
Tie-breaking behavior:
- get(name) still returns a single skill using source-type priority: local > managed > external
- When two same-named skills also share a source type, get(name) throws an error — rename one or move it to a different source type
- get(path) bypasses tie-breaking entirely and returns the exact skill

Agents and UIs now receive all same-named skills with their paths, which improves disambiguation in prompts and tool calls.
const skills = await workspace.skills.list();
// Returns both local and external "brand-guidelines" skills
const exact = await workspace.skills.get("node_modules/@myorg/skills/brand-guidelines");
// Fetches the external copy directly by path
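The tie-breaking rules can be sketched as a small resolver (illustrative types and names, not the workspace implementation):

```typescript
type Skill = { name: string; path: string; sourceType: "local" | "managed" | "external" };

// Lower number wins: local beats managed beats external.
const PRIORITY: Record<Skill["sourceType"], number> = { local: 0, managed: 1, external: 2 };

function resolveByName(skills: Skill[], name: string): Skill {
  const matches = skills
    .filter(s => s.name === name)
    .sort((a, b) => PRIORITY[a.sourceType] - PRIORITY[b.sourceType]);
  if (matches.length === 0) throw new Error(`No skill named ${name}`);
  // An unbreakable tie (same name, same source type) is an error.
  if (matches.length > 1 && matches[0].sourceType === matches[1].sourceType) {
    throw new Error(`Ambiguous skill ${name}: rename one or change its source type`);
  }
  return matches[0];
}

// The local copy wins over the external one from node_modules.
const winner = resolveByName(
  [
    { name: "brand-guidelines", path: "skills/brand-guidelines", sourceType: "local" },
    { name: "brand-guidelines", path: "node_modules/@myorg/skills/brand-guidelines", sourceType: "external" },
  ],
  "brand-guidelines",
);
```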
Fixed Anthropic 'tool_use ids were found without tool_result blocks immediately after' error. When client tools (e.g. execute_command) and provider tools (e.g. web_search) are called in parallel, the tool ordering in message history could cause Anthropic to reject subsequent requests, making the thread unrecoverable. Tool blocks are now correctly split to satisfy Anthropic's ordering requirements. (#14648)
Fix Zod v3 and Zod v4 compatibility across public structured-output APIs. (#14464)
Mastra agent and client APIs accept schemas from either zod/v3 or zod/v4, matching the documented peer dependency range and preserving TypeScript compatibility for both Zod versions.
Okta auth fixes and documentation updates:
- Errors from fetchGroupsFromOkta now propagate so the outer .catch evicts the entry and retries on the next request
- Added id_token_hint to the logout URL (required by Okta)
- Documented the required settings (OKTA_CLIENT_SECRET, OKTA_REDIRECT_URI, OKTA_COOKIE_PASSWORD) in README and examples
- Expanded MastraAuthOktaOptions docs to include all fields (session config, scopes, etc.)
- Fixed the getUserId cross-provider lookup path

Added client SDK methods for dataset experiments and item generation. (#14470)
- Added a triggerExperiment() method to dataset resources for running experiments with configurable target type and ID
- Added a generateItems() method for LLM-powered test data generation
- Added a clusterFailures() method for analyzing experiment failures

Added agent version support for experiments. When triggering an experiment, you can now pass an agentVersion parameter to pin which agent version to use. The agent version is stored with the experiment and returned in experiment responses. (#14562)
const client = new MastraClient();
await client.triggerDatasetExperiment({
datasetId: "my-dataset",
targetType: "agent",
targetId: "my-agent",
version: 3, // pin to dataset version 3
agentVersion: "ver_abc123" // pin to a specific agent version
});
Updated skill search result types and query parameters to use skillName/skillNames instead of skillPath/skillPaths for consistency with the name-based public API. (#14430)
Added storage type detection to the Metrics Dashboard. The /system/packages endpoint now returns observabilityStorageType, identifying the observability storage backend. The dashboard shows an empty state when the storage does not support metrics (e.g. PostgreSQL, LibSQL), and displays a warning when using in-memory storage since metrics are not persisted across server restarts. Also added a docs link button to the Metrics page header. (#14620)
import { MastraClient } from "@mastra/client-js";
const client = new MastraClient();
const system = await client.getSystemPackages();
// system.observabilityStorageType contains the class name of the observability store:
// - 'ObservabilityInMemory' → metrics work but are not persisted across restarts
// - 'ObservabilityPG', 'ObservabilityLibSQL', etc. → metrics not supported
if (system.observabilityStorageType === "ObservabilityInMemory") {
console.warn("Metrics are not persisted — data will be lost on server restart.");
}
const SUPPORTED = new Set(["ObservabilityInMemory"]);
if (!SUPPORTED.has(system.observabilityStorageType ?? "")) {
console.error("Metrics require in-memory observability storage.");
}
Added adapter auth middleware helpers for raw framework routes. (#14458)
Use createAuthMiddleware({ mastra }) when you mount routes directly on a Hono, Express, Fastify, or Koa app and still want Mastra auth to run. Set requiresAuth: false when you need to reuse the same helper chain on a public route.
app.get("/custom/protected", createAuthMiddleware({ mastra }), handler);
Added storage support for dataset targeting and experiment status fields. (#14470)
- Added targetType (text) and targetIds (jsonb) columns to the datasets table for entity association
- Added a tags (jsonb) column to the datasets table for tag vocabulary
- Added a status column to experiment results for review workflow tracking

Added agent version support for experiments. When triggering an experiment, you can now pass an agentVersion parameter to pin which agent version to use. The agent version is stored with the experiment and returned in experiment responses. (#14562)
const client = new MastraClient();
await client.triggerDatasetExperiment({
datasetId: "my-dataset",
targetType: "agent",
targetId: "my-agent",
version: 3, // pin to dataset version 3
agentVersion: "ver_abc123" // pin to a specific agent version
});
Tuned reflection compression for google/gemini-2.5-flash by using stronger compression guidance and starting it at a higher compression level during reflection. google/gemini-2.5-flash is unusually good at generating long, faithful outputs, which made reflection retries more likely to preserve too much detail and miss the compression target, wasting tokens in the process. (#14612)

Added datasets and experiments storage support to the MongoDB store. (#14556)
Datasets — Full dataset management with versioned items. Create, update, and delete datasets and their items with automatic version tracking. Supports batch insert/delete operations, time-travel queries to retrieve items at any past version, and item history tracking.
Experiments — Run and track experiments against datasets. Full CRUD for experiments and per-item experiment results, with pagination, filtering, and cascade deletion.
Both domains are automatically available when using MongoDBStore — no additional configuration needed.
const store = new MongoDBStore({ uri: "mongodb://localhost:27017", dbName: "my-app" });
// Datasets
const dataset = await store.getStorage("datasets").createDataset({ name: "my-dataset" });
await store.getStorage("datasets").addItem({ datasetId: dataset.id, input: { prompt: "hello" } });
// Experiments
const experiment = await store.getStorage("experiments").createExperiment({ name: "run-1", datasetId: dataset.id });
Added Evaluate tab to the agent playground with full dataset management, scorer editing, experiment execution, and review workflow. (#14470)
Evaluate tab — A new sidebar-driven tab for managing datasets, scorers, and experiments within the agent playground. Key features:
Review tab — A dedicated review workflow for experiment results:
Added dataset and agent version selectors to the experiment evaluate tab. You can now choose which dataset version and agent version to use when running an experiment. Version information is displayed in the experiment sidebar, results panel header, and Past Runs list. Added a copy button next to the agent version selector to easily copy version IDs. (#14562)
Added EntityList.NoMatch component that displays a message when search filtering returns no results. Applied to all entity list pages: Agents, Workflows, Tools, Scorers, Processors, Prompts, Datasets, and MCP Servers. (#14621)
Removed 'Create an Agent' button from agent list page and table empty state. Removed 'Create Scorer' button from top-level scorers page. Removed stored/code source icons (AgentSourceIcon) from agent headers, combobox, and table. Renamed 'Versions' tab to 'Editor' in agent page tabs. Added GaugeIcon to the 'Create Scorer' button in the review tab. (#14555)
Added metrics dashboard with KPI cards, trace volume, latency, model usage, and scores visualizations. Includes filtering by date range, agents, models, and providers. Added HorizontalBars, MetricsCard, MetricsKpiCard, MetricsLineChart, MetricsFlexGrid, and MetricsDataTable design system components. (#14491)
Internal cleanup and linting fixes (#14497)
Add optional ?path= query param to workspace skill routes for disambiguating same-named skills. (#14430)
Skill routes continue to use :skillName in the URL path (no breaking change). When two skills share the same name (e.g. from different directories), pass the optional ?path= query parameter to select the exact skill:
GET /workspaces/:workspaceId/skills/:skillName?path=skills/brand-guidelines
SkillMetadata now includes a path field, and the list() method returns all same-named skills for disambiguation. The client SDK's getSkill() accepts an optional skillPath parameter for disambiguation.
Updated skill search result types and query parameters to use skillName/skillNames instead of skillPath/skillPaths for consistency with the name-based public API. (#14430)
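For illustration, here is a hypothetical helper (skillUrl is not part of the Mastra SDK) showing how the name-based route and the optional ?path= disambiguator fit together:

```typescript
// Hypothetical sketch: build a skill route URL, adding the optional
// ?path= query parameter only when disambiguation is needed.
function skillUrl(workspaceId: string, skillName: string, skillPath?: string): string {
  const base = `/workspaces/${encodeURIComponent(workspaceId)}/skills/${encodeURIComponent(skillName)}`;
  return skillPath ? `${base}?path=${encodeURIComponent(skillPath)}` : base;
}
```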
Added experimental entity list components with skeleton loading states, error handling, and dedicated empty state components for all list pages. Gated behind MASTRA_EXPERIMENTAL_UI environment variable. (#14547)
Added storage type detection to the Metrics Dashboard. The /system/packages endpoint now returns observabilityStorageType, identifying the observability storage backend. The dashboard shows an empty state when the storage does not support metrics (e.g. PostgreSQL, LibSQL), and displays a warning when using in-memory storage since metrics are not persisted across server restarts. Also added a docs link button to the Metrics page header. (#14620)
import { MastraClient } from "@mastra/client-js";
const client = new MastraClient();
const system = await client.getSystemPackages();
// system.observabilityStorageType contains the class name of the observability store:
// - 'ObservabilityInMemory' → metrics work but are not persisted across restarts
// - 'ObservabilityPG', 'ObservabilityLibSQL', etc. → metrics not supported
if (system.observabilityStorageType === "ObservabilityInMemory") {
console.warn("Metrics are not persisted — data will be lost on server restart.");
}
const SUPPORTED = new Set(["ObservabilityInMemory"]);
if (!SUPPORTED.has(system.observabilityStorageType ?? "")) {
console.error("Metrics require in-memory observability storage.");
}
Added ModelByInputTokens in @mastra/memory for token-threshold-based model selection in Observational Memory. (#14614)
When configured, OM automatically selects different observer or reflector models based on the actual input token count at the time the OM call runs.
Example usage:
import { Memory, ModelByInputTokens } from "@mastra/memory";
const memory = new Memory({
options: {
observationalMemory: {
model: new ModelByInputTokens({
upTo: {
10_000: "google/gemini-2.5-flash",
40_000: "openai/gpt-4o",
1_000_000: "openai/gpt-4.5"
}
})
}
}
});
The upTo keys are inclusive upper bounds. OM resolves the matching tier directly at the observer or reflector call site. If the input exceeds the largest configured threshold, OM throws an error.
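The inclusive-bound selection can be sketched as a plain function (a hypothetical illustration, not the actual ModelByInputTokens internals):

```typescript
// Hypothetical sketch of inclusive upTo tier resolution: pick the model
// for the smallest bound that is >= the input token count.
function resolveModelForTokens(upTo: Record<number, string>, inputTokens: number): string {
  const bounds = Object.keys(upTo).map(Number).sort((a, b) => a - b);
  for (const bound of bounds) {
    if (inputTokens <= bound) return upTo[bound]; // bounds are inclusive
  }
  // Past the largest configured threshold, OM throws rather than guessing.
  throw new Error(`No model tier configured for ${inputTokens} input tokens`);
}
```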
Improved Observational Memory tracing so traces show the observer and reflector spans and make it easier to see which resolved model was used at runtime.
Redesigned the agent instruction blocks editor with a Notion-like document feel. Blocks no longer show line numbers or block numbers, have tighter spacing, and display a subtle hover highlight. Reference blocks now show a sync-block header with a popover for block details, "Open original", "De-reference block", and "Used by" agents. Inline blocks can be converted to saved prompt blocks via a new "Save as prompt block" action in the hover toolbar. The prompt block edit sidebar now shows a "Used by" section listing which agents reference the block. Added a lineNumbers prop to CodeEditor to optionally hide line numbers. (#14563)
<CodeEditor language="markdown" lineNumbers={false} />
Fix Zod v3 and Zod v4 compatibility across public structured-output APIs. (#14464)
Mastra agent and client APIs accept schemas from either zod/v3 or zod/v4, matching the documented peer dependency range and preserving TypeScript compatibility for both Zod versions.
Fixed schema-compat ESM imports for Zod JSON Schema helpers. (#14617)
@mastra/schema-compat no longer uses createRequire in its Zod v4 adapter or runtime eval tests, which avoids createRequire-related ESM issues while preserving support for zod/v3 and zod/v4.
Added dataset-agent association and experiment status tracking for the Evaluate workflow. (#14470)
- targetType and targetIds fields to datasets, enabling association with agents, scorers, or workflows. Datasets can now be linked to multiple entities.
- status field to experiment results ('needs-review', 'reviewed', 'complete') for review queue workflow.

Added getAuthenticatedUser() to @mastra/server/auth so server middleware can resolve the configured auth user without changing route auth behavior. (#14458)
Example
import { getAuthenticatedUser } from "@mastra/server/auth";

// mastra, token, and c (the Hono context) come from your middleware scope
const user = await getAuthenticatedUser({
  mastra,
  token,
  request: c.req.raw
});
Added new observability API endpoints and client methods for logs, scores, feedback, metrics (aggregate, breakdown, time series, percentiles), and discovery (metric names, label keys/values, entity types/names, service names, environments, tags) (#14470)
The following packages were updated with dependency changes only:
@mastra/core now supports AI Gateway tools (e.g. gateway.tools.perplexitySearch()) as provider-executed tools: it infers providerExecuted, merges streamed provider results back into the originating tool calls, and skips local execution when the provider already returned a result.
Observational memory persistence is more stable via dated message boundary delimiters and chunking, and @mastra/memory adds getObservationsAsOf() to retrieve the exact observation set active at a given message timestamp (useful for replay/debugging and consistent prompting).
@mastra/mcp adds per-server operational tooling—reconnectServer(serverName), listToolsetsWithErrors(), and getServerStderr(serverName)—to improve reliability and debugging of MCP stdio/server integrations.
Update provider registry and model documentation with latest models and providers (51970b3)
Added dated message boundary delimiters when activating buffered observations for improved cache stability. (#14367)
Fixed provider-executed tool calls being saved out of order or without results in memory replay. (Fixes #13762) (#13860)
Fix generateEmptyFromSchema to accept both string and pre-parsed object JSON schema inputs, recursively initialize nested object properties, and respect default values. Updated WorkingMemoryTemplate type to a discriminated union supporting Record<string, unknown> content for JSON format templates. Removed duplicate private schema generator in the working-memory processor in favor of the shared utility. (#14310)
Fixed provider-executed tool calls (e.g. Anthropic web_search) being dropped or incorrectly persisted when deferred by the provider. Tool call parts are now persisted in stream order, and deferred tool results are correctly merged back into the originating message. (#14282)
Fixed replaceString utility to properly escape $ characters in replacement strings. Previously, patterns like $& in the replacement text would be interpreted as regex backreferences instead of literal text. (#14434)
Fixed tool invocation updates to preserve providerExecuted and providerMetadata from the original tool call when updating to result state. (#14431)
@mastra/core: patch (#14327)
Added spanId alongside traceId across user-facing execution results that return tracing identifiers (including agent stream/generate and workflow run results) so integrations can query observability vendors by run root span ID
Add AI Gateway tool support in the agentic loop. (#14016)
Gateway tools (e.g., gateway.tools.perplexitySearch()) are provider-executed but, unlike native provider tools (e.g., openai.tools.webSearch()), the LLM provider does not store their results server-side. The agentic loop now correctly infers providerExecuted for these tools, merges streamed provider results with their corresponding tool calls, and skips local execution when a provider result is already present.
Fixes #13190
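The merge step can be sketched roughly as follows (hypothetical types and function, not the actual agentic-loop code): provider-streamed results are matched back to their originating tool calls by toolCallId, and any call that already has a provider result is marked providerExecuted so local execution is skipped.

```typescript
// Hypothetical sketch of merging provider-executed results into tool calls.
type ToolCall = { toolCallId: string; toolName: string; providerExecuted?: boolean; result?: unknown };
type ProviderResult = { toolCallId: string; result: unknown };

function mergeProviderResults(calls: ToolCall[], results: ProviderResult[]): ToolCall[] {
  const byId = new Map<string, unknown>();
  for (const r of results) byId.set(r.toolCallId, r.result);
  return calls.map(call =>
    byId.has(call.toolCallId)
      ? { ...call, providerExecuted: true, result: byId.get(call.toolCallId) } // provider ran it; skip local execution
      : call,
  );
}
```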
Fixed schema-based working memory typing so workingMemory.schema accepts supported schemas such as Zod and JSON Schema. (#14363)
Fixed workspace search being wiped when skills refresh. Previously, calling skills.refresh() or triggering a skills re-discovery via maybeRefresh() would clear the entire BM25 search index, including auto-indexed workspace content. Now only skill entries are removed from the index during refresh, preserving workspace search results. (#14287)
Added client/server body schemas for feedback and scores that omit the timestamp field, allowing it to be set server-side (#14270)
Fixed processor state not persisting between processOutputStream and processOutputResult when processors are wrapped in workflows. State set during stream processing is now correctly accessible in processOutputResult. (#14279)
Fixed type inference for requestContext schemas when using Zod v3 and v4. Agent and tool configurations now correctly infer RequestContext types from Zod schemas and other StandardSchema-compatible schemas. (#14363)
Fixed Studio showing unauthenticated state when using MastraJwtAuth with custom headers. MastraJwtAuth now implements the IUserProvider interface (getCurrentUser/getUser), so the Studio capabilities endpoint can resolve the authenticated user from the JWT Bearer token. (#14411)
Also added an optional mapUser option to customize how JWT claims are mapped to user fields:
new MastraJwtAuth({
secret: process.env.JWT_SECRET,
mapUser: payload => ({
id: payload.userId,
name: payload.displayName,
email: payload.mail,
}),
});
Closes #14350
- cookieDomain option to MastraAuthStudioOptions for explicit configuration
- MASTRA_COOKIE_DOMAIN environment variable as fallback
- .mastra.ai domain (prevents false positives from malicious URLs)
- .mastra.ai auto-detection

Added MASTRA_HOST environment variable support for configuring the server bind address. Previously, the host could only be set via server.host in the Mastra config. Now it follows the same pattern as PORT: config value takes precedence, then env var, then defaults to localhost. (#14313)
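The resulting precedence can be sketched as a tiny helper (hypothetical, not the actual server code):

```typescript
// Hypothetical sketch of the bind-address precedence described above:
// config value, then the MASTRA_HOST env var, then 'localhost'.
function resolveHost(configHost: string | undefined, envHost: string | undefined): string {
  return configHost ?? envHost ?? "localhost";
}
```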
Added a new MASTRA_TEMPLATES Studio runtime flag to control whether the Templates section appears in the sidebar. (#14309)
- MASTRA_TEMPLATES=true now enables Templates navigation in Studio.
- When disabled (false or unset), Templates is hidden.

Fixed tsconfig path aliases during build when imports use .js-style module specifiers. (#13998)
Fixed apiPrefix server option not being applied to the underlying Hono server instance. Routes, welcome page, Swagger UI, and studio HTML handler now all respect the configured apiPrefix instead of hardcoding /api. (#14325)
No longer write .env variables to wrangler.jsonc, to prevent secrets from leaking into source control. (#14302)
- .env values are no longer merged into the vars field of the generated wrangler config.
- vars from the CloudflareDeployer constructor are still written as before.
- Use npx wrangler secret bulk .env to upload secrets instead.

Added support for constructing ElasticSearchVector with a pre-configured Elasticsearch client. You can now pass either a client instance or connection parameters (url and optional auth), giving you full control over client configuration when needed. (#12802)
Using connection parameters:
const vectorDB = new ElasticSearchVector({
id: 'my-store',
url: 'http://localhost:9200',
auth: { apiKey: 'my-key' },
});
Using a pre-configured client:
import { Client } from '@elastic/elasticsearch';
const client = new Client({ node: 'http://localhost:9200' });
const vectorDB = new ElasticSearchVector({ id: 'my-store', client });
Fixed: PinoLogger now supports JSON output for log aggregators (#14306)
Previously, PinoLogger always used pino-pretty which produced multiline colored output, breaking log aggregators like Datadog, Loki, and CloudWatch. A new prettyPrint option allows switching to single-line JSON output.
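To illustrate the difference, this is the shape of a single-line JSON record that aggregators can parse line by line (jsonLogLine is a hypothetical helper, not the PinoLogger API):

```typescript
// Hypothetical sketch: one JSON object per line, no ANSI colors, no
// multiline layout, so Datadog/Loki/CloudWatch can ingest each line.
function jsonLogLine(level: string, msg: string, fields: Record<string, unknown> = {}): string {
  return JSON.stringify({ level, msg, time: Date.now(), ...fields });
}
```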
Added new MCP client APIs for per-server control and diagnostics. (#14377)
- reconnectServer(serverName) to reconnect a single MCP server without restarting all servers.
- listToolsetsWithErrors() to return both toolsets and per-server errors.
- getServerStderr(serverName) to inspect piped stderr for stdio servers.

Example
// mcpClient is an existing MCPClient instance
const { toolsets, errors } = await mcpClient.listToolsetsWithErrors();
await mcpClient.reconnectServer('slack');
const stderr = mcpClient.getServerStderr('slack');
Improved (#14260)
- Updated @modelcontextprotocol/sdk from ^1.17.5 to ^1.27.1.

Deprecated
- version usage in @mastra/mcp.

Migration
- Before: client.prompts.get({ name: 'explain-code', version: 'v1', args })
- After: client.prompts.get({ name: 'explain-code-v1', args })
- MastraPrompt is available for migration and is also deprecated.

Fixed observational memory triggering observation while provider-executed tool calls are still pending, which could split messages and cause errors on follow-up turns. (#14282)
Fixed working memory tool description to accurately reflect merge behavior. The previous description incorrectly stated "Set a field to null to remove it" but null values are stripped by validation before reaching the merge logic. The updated description clarifies: omit fields to preserve existing data, and pass complete arrays or omit them since arrays are replaced entirely. (#14424)
Limit oversized observational-memory tool results before they reach the observer. (#14344)
This strips large encryptedContent blobs and truncates remaining tool result payloads to keep observer prompts and token estimates aligned with what the model actually sees.
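A rough sketch of that limiting step (hypothetical helper, not the actual @mastra/memory code):

```typescript
// Hypothetical sketch: drop large encryptedContent blobs, then truncate
// the serialized payload so the observer prompt and token estimates match
// what the model actually sees.
function limitToolResult(payload: Record<string, unknown>, maxChars: number): string {
  const { encryptedContent: _omitted, ...rest } = payload; // strip encrypted blobs
  const text = JSON.stringify(rest);
  return text.length > maxChars ? `${text.slice(0, maxChars)}…[truncated]` : text;
}
```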
Improved observational memory cache stability by splitting persisted observations into separate prompt chunks using dated message boundary delimiters. (#14367)
Added getObservationsAsOf() utility to retrieve the observations that were active at a specific point in time. This enables filtering observation history by message creation date.
import { getObservationsAsOf } from '@mastra/memory';
// Get observations that existed when a specific message was created
const observations = getObservationsAsOf(record.activeObservations, message.createdAt);
Added dedicated session page for agents at /agents/<agentId>/session. This minimal view shows only the chat interface without the sidebar or information pane, making it ideal for quick internal testing or sharing with non-technical team members. If request context presets are configured, a preset dropdown appears in the header. (#13754)
Added hideModelSwitcher prop to AgentChat and Thread components to allow hiding the model picker in the composer.
Fixed crash during template installation by ensuring error values from stream events are converted to strings before being passed to the UI. This prevents the 'e?.includes is not a function' TypeError when the server returns non-string error payloads. (#14267)
Fixed crash when template installation errors are non-string values (e.g. objects). The error is now safely converted to a string before calling .includes(), preventing the 'e?.includes is not a function' TypeError in the studio. (#14267)
The following packages were updated with dependency changes only:
Mastra now ships zod-based storage schemas and in-memory implementations for all observability signals (scores, logs, feedback, metrics, discovery), with full type inference and a base ObservabilityStorage that includes default method implementations.
@mastra/agentfs
The new AgentFSFilesystem workspace provider adds Turso/SQLite-backed, database-persistent file storage for agents across sessions via agentfs-sdk.
EventBuffer batching
@mastra/observability exporters/event bus were updated to align with renamed core observability types, and an EventBuffer was added to batch non-tracing signals with configurable flush intervals.
@mastra/server/schemas
A new @mastra/server/schemas export provides utility types (RouteMap, InferPathParams, InferBody, InferResponse, etc.) that automatically infer request/response types from SERVER_ROUTES, including routes added via createRoute().
Observational Memory adds observation.previousObserverTokens to truncate the “Previous Observations” context to a token budget (or omit/disable truncation), reducing observer prompt size in long conversations.
- MetricType (counter/gauge/histogram) is deprecated — metrics are now raw events with aggregation at query time
- scorerId instead of scorerName
- ObservabilityBus constructor now takes a config object (cardinalityFilter, autoExtractMetrics); setCardinalityFilter() and enableAutoExtractedMetrics() were removed

Added observability storage domain schemas and implementations (#14214)
Introduced comprehensive storage schemas and in-memory implementations for all observability signals (scores, logs, feedback, metrics, discovery). All schemas are zod-based with full type inference. The ObservabilityStorage base class includes default implementations for all new methods.
Breaking changes:
- MetricType (counter/gauge/histogram) is deprecated — metrics are now raw events with aggregation at query time
- scorerId instead of scorerName for scorer identification

Update provider registry and model documentation with latest models and providers (ea86967)
Fixed provider tools (e.g. openai.tools.webSearch()) being silently dropped when using a custom gateway that returns AI SDK v6 (V3) models. The router now remaps tool types from provider-defined to provider when delegating to V3 models, so provider tools work correctly through gateways. Fixes #13667. (#13895)
Fixed TypeScript type errors in onStepFinish and onFinish callbacks, and resolved compatibility issues with createOpenRouter() across different AI SDK versions. (#14229)
Fixed a bug where thread metadata (e.g. title, custom properties) passed via options.memory.thread was discarded when MASTRA_THREAD_ID_KEY was set in the request context. The thread ID from context still takes precedence, but all other user-provided thread properties are now preserved. (#13146)
Fixed workspace tools such as mastra_workspace_list_files and mastra_workspace_read_file failing with WorkspaceNotAvailableError in some execution paths. (#14228)
Workspace tools now work consistently across execution paths.
Added observer context optimization for Observational Memory. The observation.previousObserverTokens field reduces Observer input token costs for long-running conversations: (#13568)
previousObserverTokens (default: 2000): Truncates the 'Previous Observations' section to a token budget, keeping the most recent observations and automatically replacing already-reflected lines with the buffered reflection summary. Set to 0 to omit previous observations entirely, or false to disable truncation and keep the full observation history.

import { Memory } from '@mastra/memory';

const memory = new Memory({
options: {
observationalMemory: {
model: 'google/gemini-2.5-flash',
observation: {
previousObserverTokens: 10_000,
},
},
},
});
Added AgentFSFilesystem workspace provider — a Turso/SQLite-backed filesystem via the agentfs-sdk that gives agents persistent, database-backed file storage across sessions. (#13450)
Basic usage
import { Workspace } from '@mastra/core/workspace';
import { AgentFSFilesystem } from '@mastra/agentfs';
const workspace = new Workspace({
filesystem: new AgentFSFilesystem({
agentId: 'my-agent',
}),
});
Bump esbuild from ^0.25.10 to ^0.27.3 to resolve Go stdlib CVEs (CVE-2025-22871, CVE-2025-61729) flagged by npm audit in consumer projects. (#13124)
Fixed Agent-to-Agent requests to return a clear error message when the agent ID parameter is missing. (#14229)
Add dynamicPackages bundler config for runtime-loaded packages and auto-detect pino (#11779)
Adds a new dynamicPackages bundler config option for packages that are loaded
dynamically at runtime and cannot be detected by static analysis (e.g.,
pino.transport({ target: "pino-opentelemetry-transport" })).
Usage:
import { Mastra } from '@mastra/core';
export const mastra = new Mastra({
bundler: {
dynamicPackages: ['my-custom-transport', 'some-plugin'],
},
});
Additionally, pino transport targets are now automatically detected from the bundled code, so most pino users won't need any configuration.
This keeps externals for its intended purpose (packages to not bundle) and
provides a clear mechanism for dynamic packages that need to be in the output
package.json.
Fixes #10893
Updated exporters and event bus to use renamed observability types from @mastra/core. Added EventBuffer for batching non-tracing signals with configurable flush intervals. (#14214)
Breaking changes:
- ObservabilityBus now takes a config object in its constructor (cardinalityFilter, autoExtractMetrics); setCardinalityFilter() and enableAutoExtractedMetrics() removed

Fixed agent playground panels growing together when content overflows. Left and right columns now scroll independently. (#14244)
Fixed an agent chat editor crash in Playground UI caused by duplicate CodeMirror state instances. (#14241)
Improved studio loading performance by lazy-loading the Prettier code formatter. Prettier and its plugins are now loaded on-demand when formatting is triggered, rather than being bundled in the initial page load. (#13934)
Improved list-style pages across the Playground UI (agents, datasets, MCPs, processors, prompt blocks, scorers, tools, workflows) with a new list layout and updated empty states. (#14173)
@mastra/schema-compat: patch (#14195)
Fixed published @mastra/schema-compat types so AI SDK v5 schemas resolve correctly for consumers
Fixed false z.toJSONSchema is not available errors for compatible Zod versions. (#14264)
Added @mastra/server/schemas export with utility types that infer path params, query params, request body, and response types from any route in SERVER_ROUTES. When you add a new route via createRoute(), it automatically appears in the RouteMap type — no manual contract needed. (#14008)
import type { RouteMap, RouteContract, InferPathParams, InferBody, InferResponse } from '@mastra/server/schemas';
type GetAgentParams = InferPathParams<RouteMap['GET /agents/:agentId']>;
// => { agentId: string }
type GenerateBody = InferBody<RouteMap['POST /agents/:agentId/generate']>;
// => { messages: CoreMessage[], ... }
type AgentResponse = InferResponse<RouteMap['GET /agents/:agentId']>;
// => { name: string, tools: ..., ... }
Fixed OpenAPI spec for custom route paths. Custom routes registered via registerApiRoute are served at the root path (e.g. /health), not under /api. The OpenAPI spec now correctly represents this so that API tools and clients using the spec will resolve them to the correct URL. (#13930)
Fixed an unnecessary runtime dependency in @mastra/server, reducing install size for consumers. Moved @mastra/schema-compat from dependencies to devDependencies since it is only needed at build time. (#14223)
The following packages were updated with dependency changes only:
@mastra/cloudflare adds a new Durable Objects–backed storage implementation (in addition to KV), with SQLite persistence, batch operations, and table/column validation—enabling more robust stateful storage on Cloudflare.
LocalFilesystem no longer treats absolute paths like /file.txt as workspace-relative; absolute paths now resolve to real filesystem locations (with containment checks), relative paths resolve against basePath, and ~/ expands to the home directory.
MCP tool calls are now traced with a dedicated MCP_TOOL_CALL span type (with server name/version metadata), Studio adds MCP-specific timeline styling, processor-triggered aborts are fully visible in traces, and workflow suspend/resume now preserves trace continuity under the original span.
Fixes include the agent loop continuing correctly when onIterationComplete returns continue: true, and preventing exponential token growth by running token-based message pruning at every step (including tool call continuations).
Sandbox process IDs are now string-based (ProcessHandle.pid: string) to support session IDs across providers, and sandboxes/filesystems expose underlying SDK instances via new provider-specific getters (e.g., sandbox.daytona, sandbox.blaxel, filesystem.client for S3, filesystem.storage/bucket for GCS).
- LocalFilesystem no longer treats absolute paths (e.g. /src/index.ts) as basePath-relative; update callers to pass relative paths when targeting the workspace.
- ProcessHandle.pid changed from number to string; update any code that assumes numeric PIDs (including processes.get(...)).

MCP tool calls now use MCP_TOOL_CALL span type instead of TOOL_CALL in traces. CoreToolBuilder detects mcpMetadata on tools and creates spans with MCP server name, version, and tool description attributes. (#13274)
Absolute paths now resolve to real filesystem locations instead of being treated as workspace-relative. (#13804)
Previously, LocalFilesystem in contained mode treated absolute paths like /file.txt as shorthand for basePath/file.txt (a "virtual-root" convention). This could silently resolve paths to unexpected locations — for example, /home/user/.config/file.txt would resolve to basePath/home/user/.config/file.txt instead of the real path.
Now:
- Absolute paths (starting with /) are real filesystem paths, subject to containment checks
- Relative paths (e.g. file.txt, src/index.ts) resolve against basePath
- Tilde paths (e.g. ~/Documents) expand to the home directory

If your code passes paths like /file.txt to workspace filesystem methods expecting them to resolve relative to basePath, change them to relative paths:
// Before
await filesystem.readFile('/src/index.ts');
// After
await filesystem.readFile('src/index.ts');
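The new resolution rules can be sketched as follows (a hypothetical illustration; the real LocalFilesystem also enforces containment checks on absolute paths):

```typescript
import { homedir } from "node:os";
import { isAbsolute, resolve } from "node:path";

// Hypothetical sketch of the three resolution rules described above.
function resolveWorkspacePath(basePath: string, p: string): string {
  if (p === "~" || p.startsWith("~/")) {
    return resolve(homedir(), p.slice(2)); // ~ expands to the home directory
  }
  if (isAbsolute(p)) return resolve(p); // absolute paths are real locations
  return resolve(basePath, p); // relative paths resolve against basePath
}
```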
Also fixed:
- allowedPaths resolving against the working directory instead of basePath, causing unexpected permission errors when basePath differed from cwd
- allowedPaths directories that don't exist yet (e.g., during skills discovery)

Changed ProcessHandle.pid type from number to string to support sandbox providers that use non-numeric process identifiers (e.g., session IDs). (#13591)
Before:
const handle = await sandbox.processes.spawn('node server.js');
handle.pid; // number
await sandbox.processes.get(42);
After:
const handle = await sandbox.processes.spawn('node server.js');
handle.pid; // string (e.g., '1234' for local, 'session-abc' for Daytona)
await sandbox.processes.get('1234');
Added a mastra/<version> User-Agent header to all provider API requests (OpenAI, Anthropic, Google, Mistral, Groq, xAI, DeepSeek, and others) across models.dev, Netlify, and Azure gateways for better traffic attribution. (#13087)
Update provider registry and model documentation with latest models and providers (9cede11)
Fixed processor-triggered aborts not appearing in traces. Processor spans now include abort details (reason, retry flag, metadata) and agent-level spans capture the same information when an abort short-circuits the agent run. This makes guardrail and processor aborts fully visible in tracing dashboards. (#14038)
Fix agent loop not continuing when onIterationComplete returns continue: true (#14170)
Fixed exponential token growth during multi-step agent workflows by implementing processInputStep on TokenLimiterProcessor and removing the redundant processInput method. Token-based message pruning now runs at every step of the agentic loop (including tool call continuations), keeping the in-memory message list within budget before each LLM call. Also refactored Tiktoken encoder to use the shared global singleton from getTiktoken() instead of creating a new instance per processor. (#13929)
Sub-agents with defaultOptions.memory configurations were having their memory settings overridden when called as tools from a parent agent. The parent unconditionally passed its own memory option (with newly generated thread/resource IDs), which replaced the sub-agent's intended memory configuration due to shallow object merging. (#11561)
This fix checks if the sub-agent has its own defaultOptions.memory before applying parent-derived memory settings. Sub-agents without their own memory config continue to receive parent-derived IDs as a fallback.
No code changes required for consumers - sub-agents with explicit defaultOptions.memory will now work correctly when used via the agents: {} option.
Fixed listConfiguredInputProcessors() and listConfiguredOutputProcessors() returning a combined workflow instead of individual processors. Previously, these methods wrapped all configured processors into a single committed workflow, making it impossible to inspect or look up processors by ID. Now they return the raw flat array of configured processors as intended. (#14158)
fetchWithRetry now backs off in sequence 2s → 4s → 8s and then caps at 10s. (#14159)
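The capped doubling schedule can be sketched as follows (an illustrative model of the behavior described above, not the actual fetchWithRetry source):

```typescript
// Illustrative sketch of the capped exponential backoff schedule:
// delays double from 2s (2s, 4s, 8s) and then stay capped at 10s.
function backoffDelayMs(attempt: number): number {
  const base = 2_000; // 2s initial delay
  const cap = 10_000; // 10s maximum delay
  return Math.min(base * 2 ** attempt, cap);
}

// attempt 0 → 2s, attempt 1 → 4s, attempt 2 → 8s, attempt 3+ → 10s
```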
Preserve trace continuity across workflow suspend/resume for workflows run by the default engine, so resumed workflows appear as children of the original span in tracing tools. (#12276)
Added provider-specific blaxel getter to access the underlying Blaxel SandboxInstance directly. Deprecated the generic instance getter in favor of the new blaxel getter for better IDE discoverability and consistency with other sandbox providers. (#14166)
// Before
const blaxelSandbox = sandbox.instance;
// After
const blaxelSandbox = sandbox.blaxel;
Use provider-native string process IDs directly as ProcessHandle.pid, removing the previous parseInt() workaround. (#13591)
const handle = await sandbox.processes.spawn('node server.js');
handle.pid; // string — the Blaxel SDK's native process ID
feat: add Cloudflare Durable Objects storage adapter (#12366)
Adds a new Durable Objects-based storage implementation alongside the existing KV store. Includes SQL-backed persistence via DO's SQLite storage, batch operations, and proper table/column validation.
Added provider-specific daytona getter to access the underlying Daytona Sandbox instance directly. Deprecated the generic instance getter in favor of the new daytona getter for better IDE discoverability and consistency with other sandbox providers. (#14166)
// Before
const daytonaSandbox = sandbox.instance;
// After
const daytonaSandbox = sandbox.daytona;
Improved Daytona process handling to use provider session IDs directly as ProcessHandle.pid. (#13591)
const handle = await sandbox.processes.spawn('node server.js');
await sandbox.processes.get(handle.pid);
Fixed sandbox reconnection when Daytona sandbox is externally stopped or times out due to inactivity. Previously, the error thrown by the Daytona SDK (e.g. "failed to resolve container IP") did not match the known dead-sandbox patterns, so the automatic retry logic would not trigger and the error would propagate to the user. Added two new error patterns to correctly detect stopped sandboxes and trigger automatic recovery. (#14175)
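The recovery trigger boils down to error-message pattern matching; a hedged sketch of the idea (the patterns shown are illustrative examples, the real list lives in the Daytona provider):

```typescript
// Hedged sketch: detect a dead sandbox by matching known error-message
// patterns. The patterns below are illustrative examples, not the full set.
const DEAD_SANDBOX_PATTERNS: RegExp[] = [
  /failed to resolve container ip/i, // newly matched pattern per this fix
  /sandbox .* not found/i,           // example of a pre-existing pattern
];

function isDeadSandboxError(err: Error): boolean {
  return DEAD_SANDBOX_PATTERNS.some((p) => p.test(err.message));
}
```

When such an error is detected, the provider's automatic retry logic can recreate or reconnect the sandbox instead of surfacing the error to the user.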
Improved Studio load times by serving compressed static assets in both deploy and dev. Large bundles now download much faster and use significantly less bandwidth. (#13945)
@mastra/deployer: Fixed deployment dependency resolution so required schema compatibility packages are resolved automatically. (#14162)
Fixed gzip compression being applied globally to all API routes, causing JSON responses to be unreadable by clients that don't auto-decompress. Compression is now scoped to studio static assets only. (#14190)
ProcessHandle.pid is now a string. Numeric PIDs from the E2B SDK are stringified automatically. (#13591)
const handle = await sandbox.processes.spawn('node server.js');
handle.pid; // string (e.g., '1234')
Added public storage and bucket getters to access the underlying Google Cloud Storage instances directly. Use these when you need GCS features not exposed through the WorkspaceFilesystem interface. (#14166)
const gcsStorage = filesystem.storage;
const gcsBucket = filesystem.bucket;
MCPClient now attaches mcpMetadata (server name and version) to every tool it creates, enabling automatic MCP_TOOL_CALL span tracing without user code changes. (#13274)

Added stderr and cwd options to stdio server configuration so you can control child process error output and set the server working directory. (#13959)
import { MCPClient } from '@mastra/mcp';
const mcp = new MCPClient({
servers: {
myServer: {
command: 'node',
args: ['server.js'],
stderr: 'pipe',
cwd: '/path/to/server',
},
},
});
Observational Memory now performs local threshold checks with lower CPU and memory overhead. (#14178)
This update keeps the same multimodal thresholding behavior for image-aware inputs, so existing Observational Memory configurations continue to work without changes.
Updated form field components in Studio to use the new FieldBlock design system pattern. Replaced legacy SelectField, InputField, and SearchField with SelectFieldBlock, TextFieldBlock, and SearchFieldBlock across all domain components (observability, scores, datasets, templates). Refined button styles and layout improvements. (#14138)
Fixed Studio form crash when workflow input schemas contain z.array() fields with Zod v4. Array, union, and intersection fields now render and accept input correctly in the workflow run form. (#14131)
Added MCP-specific icon and color in trace timeline for mcp_tool_call spans. (#13274)
Change file browser root path from / to . so workspace navigation starts from the workspace directory instead of the host filesystem root. (#13804)
Added public client getter to access the underlying S3Client instance directly. Use this when you need S3 features not exposed through the WorkspaceFilesystem interface (e.g., presigned URLs, multipart uploads). (#14166)
const s3Client = filesystem.client;
Fixed schema conversion when the zod/v4 compat layer from Zod 3.25.x is used. Schemas for harness tools like ask_user were not being properly converted to JSON Schema when ~standard.jsonSchema was absent, causing type: "None" errors from the Anthropic API. (#14157)
Agents can now use model functions that return full fallback arrays (ModelWithRetries[]), enabling context-driven model routing (tier/region/etc.) with nested/async selection and proper maxRetries inheritance.
Mastra adds Standard Schema normalization (toStandardSchema, standardSchemaToJSONSchema) across Zod v3/v4, AI SDK Schema, and JSON Schema via @mastra/schema-compat, unifying schema handling and improving strict-mode provider compatibility.
New onValidationError hook on ServerConfig and createRoute() lets you control status codes and response bodies for Zod validation failures, supported consistently in Hono/Express/Fastify/Koa adapters.
requestContext is now captured on tracing spans (and persisted in ClickHouse/PG/LibSQL/MSSQL span tables) and is supported on dataset items and experiments, allowing request-scoped metadata (tenant/user/flags) to flow through evaluation and observability.
Semantic recall is significantly faster across multiple adapters (notably Postgres for very large threads), PgVector adds metadataIndexes for btree indexing filtered metadata fields, and @mastra/pg now supports pgvector bit and sparsevec vector types.
BREAKING CHANGE: Minimum Zod version is now ^3.25.0 (for v3) or ^4.0.0 (for v4 support).

feat: support dynamic functions returning model fallback arrays (#11975)
Agents can now use dynamic functions that return entire fallback arrays based on runtime context. For example, tier-based routing:
const agent = new Agent({
model: ({ requestContext }) => {
const tier = requestContext.get('tier');
if (tier === 'premium') {
return [
{ model: 'openai/gpt-4', maxRetries: 2 },
{ model: 'anthropic/claude-3-opus', maxRetries: 1 },
];
}
return [{ model: 'openai/gpt-3.5-turbo', maxRetries: 1 }];
},
});
Nested dynamic selection is also supported:
const agent = new Agent({
model: ({ requestContext }) => {
const region = requestContext.get('region');
return [
{
model: ({ requestContext }) => {
// Select model variant based on region
return region === 'eu' ? 'openai/gpt-4-eu' : 'openai/gpt-4';
},
maxRetries: 2,
},
{ model: 'anthropic/claude-3-opus', maxRetries: 1 },
];
},
maxRetries: 1, // Agent-level default for models without explicit maxRetries
});
Async model selection works as well:
const agent = new Agent({
model: async ({ requestContext }) => {
// Fetch user's tier from database
const userId = requestContext.get('userId');
const user = await db.users.findById(userId);
if (user.tier === 'enterprise') {
return [
{ model: 'openai/gpt-4', maxRetries: 3 },
{ model: 'anthropic/claude-3-opus', maxRetries: 2 },
];
}
return [{ model: 'openai/gpt-3.5-turbo', maxRetries: 1 }];
},
});
Dynamic functions can return either MastraModelConfig (single model) or ModelWithRetries[] (array)
Models without an explicit maxRetries inherit the agent-level maxRetries default
Returned arrays resolve to ModelFallbacks with all required fields filled in, applying the agent-level maxRetries when not explicitly specified
getModelList() now correctly handles dynamic functions that return arrays
getLLM() and getModel() keep their existing return behavior while adding dynamic fallback array support

No breaking changes. All existing model configurations continue to work:
model: 'openai/gpt-4'
model: [{ model: 'openai/gpt-4', maxRetries: 2 }]
model: ({ requestContext }) => 'openai/gpt-4'
model: ({ requestContext }) => [{ model: 'openai/gpt-4', maxRetries: 2 }]

Closes #11951
Added onValidationError hook to ServerConfig and createRoute(). When a request fails Zod schema validation (query parameters, request body, or path parameters), this hook lets you customize the error response — including the HTTP status code and response body — instead of the default 400 response. Set it on the server config to apply globally, or on individual routes to override per-route. All server adapters (Hono, Express, Fastify, Koa) support this hook. (#13477)
const mastra = new Mastra({
server: {
onValidationError: (error, context) => ({
status: 422,
body: {
ok: false,
errors: error.issues.map(i => ({
path: i.path.join('.'),
message: i.message,
})),
source: context,
},
}),
},
});
Added requestContext field to tracing spans. Each span now automatically captures a snapshot of the active RequestContext, making request-scoped values like user IDs, tenant IDs, and feature flags available when viewing traces. (#14020)
Added allowedWorkspaceTools to HarnessSubagent. Subagents now automatically inherit the parent agent's workspace. Use allowedWorkspaceTools to restrict which workspace tools a subagent can see: (#13940)
const subagent: HarnessSubagent = {
id: 'explore',
name: 'Explore',
allowedWorkspaceTools: ['view', 'search_content', 'find_files'],
};
Enabled tracing for tool executions through the MCP server. Traces now appear in the Observability UI for MCP server tool calls. (#12804)
Added result to processOutputResult args, providing resolved generation data (usage, text, steps, finishReason) directly. This replaces raw stream chunks with an easy-to-use OutputResult object containing the same data available in the onFinish callback. (#13810)
const usageProcessor: Processor = {
id: 'usage-processor',
processOutputResult({ result, messages }) {
console.log(`Text: ${result.text}`);
console.log(`Tokens: ${result.usage.inputTokens} in, ${result.usage.outputTokens} out`);
console.log(`Finish reason: ${result.finishReason}`);
console.log(`Steps: ${result.steps.length}`);
return messages;
},
};
Added requestContext support for dataset items and experiments. (#13938)
Dataset items now accept an optional requestContext field when adding or updating items. This lets you store per-item request context alongside inputs and ground truths.
Datasets now support a requestContextSchema field to describe the expected shape of request context on items.
Experiments now accept a requestContext option that gets passed through to agent.generate() during execution. Per-item request context merges with (and takes precedence over) the experiment-level context.
// Add item with request context
await dataset.addItem({
input: messages,
groundTruth: expectedOutput,
requestContext: { userId: '123', locale: 'en' },
});
// Run experiment with global request context
await runExperiment(mastra, {
datasetId: 'my-dataset',
targetType: 'agent',
targetId: 'my-agent',
requestContext: { environment: 'staging' },
});
Add Zod v4 and Standard Schema support (#12238)
Updated z.record() calls to use the 2-argument form (key + value schema) as required by Zod v4
Renamed ZodError.errors to ZodError.issues (Zod v4 API change)
Updated @ai-sdk/provider versions for Zod v4 compatibility
Added a packages/core/src/schema/ module that re-exports from @mastra/schema-compat
Added a PublicSchema type for schema parameters
Added toStandardSchema() for normalizing schemas across Zod v3, Zod v4, AI SDK Schema, and JSON Schema
Added standardSchemaToJSONSchema() for JSON Schema conversion
Added adapters: @mastra/schema-compat/adapters/ai-sdk, @mastra/schema-compat/adapters/zod-v3, @mastra/schema-compat/adapters/json-schema
Added unrepresentable: 'any' support

BREAKING CHANGE: Minimum Zod version is now ^3.25.0 for v3 compatibility or ^4.0.0 for v4
Update provider registry and model documentation with latest models and providers (332c014)
fix(workflows): add generic bail signature with overloads. The bail() function now uses method overloads - bail(result: TStepOutput) for backward compatibility and bail<T>(result: ...) for workflow type inference. This allows flexible early exits while maintaining type safety for workflow chaining. Runtime validation will be added in a follow-up. (#12211)
Fixed structured output parsing when JSON string fields include fenced JSON examples. (#13948)
Fixed writer being undefined in processOutputStream for all output processors. The root cause was that processPart in ProcessorRunner did not pass the writer to executeWorkflowAsProcessor in the outputStream phase. Since all user processors are wrapped into workflows via combineProcessorsIntoWorkflow, this meant no processor ever received a writer. Custom output processors (like guardrail processors) can now reliably use writer.custom() to emit stream events. (#14111)
Added JSON repair for malformed tool call arguments from LLM providers. When an LLM (e.g., Kimi/K2) generates broken JSON for tool call arguments, Mastra now attempts to fix common errors (missing quotes on property names, single quotes, trailing commas) before falling back to undefined. This reduces silent tool execution failures caused by minor JSON formatting issues. See https://github.com/mastra-ai/mastra/issues/11078 (#14033)
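The kinds of repairs described above can be illustrated with a simplified sketch (this is not Mastra's actual implementation, and real repair logic must be more careful about quotes and colons inside string values):

```typescript
// Illustrative sketch of the repairs: quote bare property names, convert
// single quotes to double quotes, and drop trailing commas before parsing.
function repairJson(raw: string): unknown | undefined {
  const repaired = raw
    .replace(/([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)\s*:/g, '$1"$2":') // bare keys
    .replace(/'/g, '"')                                           // single quotes
    .replace(/,\s*([}\]])/g, '$1');                               // trailing commas
  try {
    return JSON.parse(repaired);
  } catch {
    return undefined; // still malformed: fall back to undefined
  }
}
```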
Fixed Windows shell command execution to avoid visible cmd.exe popups and broken output piping. (#13886)
Fixed OpenAI reasoning models (e.g. gpt-5-mini) failing with "function_call was provided without its required reasoning item" when the agent loops back after a tool call. The issue was that callProviderMetadata.openai carrying fc_* item IDs was not being stripped alongside reasoning parts, causing the AI SDK to send item_reference instead of inline function_call content. (#14144)
Output processors can now inspect, modify, or block custom data-* chunks emitted by tools via writer.custom() during streaming. Processors must opt in by setting processDataParts = true to receive these chunks in processOutputStream. (#13823)
class MyDataProcessor extends Processor {
processDataParts = true;
processOutputStream(part, { abort }) {
if (part.type === 'data-sensitive') {
abort('Blocked sensitive data');
}
return part;
}
}
Fixed agent tool calls not being surfaced in evented workflow streams. Added StreamChunkWriter abstraction and stream format configuration ('legacy' | 'vnext') to forward agent stream chunks through the workflow output stream. (#12692)
Fixed OpenAI strict mode schema rejection when using agent networks with structured output. The compat layer was skipped when modelId was undefined, causing optional fields to be missing from the required array. (Fixes #12284) (#13695)
Fixed activeTools to also enforce at execution time, not just at the model prompt. Tool calls to tools not in the active set are now rejected with a ToolNotFoundError. (#13949)
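Conceptually, the execution-time check looks like this (a sketch only; the real check lives inside the agent loop and throws Mastra's ToolNotFoundError, whose shape is assumed here):

```typescript
// Sketch of execution-time enforcement: a tool call whose name is not in
// the active set is rejected instead of silently executing.
function assertToolActive(
  toolName: string,
  activeTools: string[] | undefined,
): void {
  // An undefined activeTools list means all tools remain callable.
  if (activeTools && !activeTools.includes(toolName)) {
    throw new Error(`ToolNotFoundError: '${toolName}' is not an active tool`);
  }
}
```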
Fix build failures on Windows when running build:patch-commonjs during pnpm run setup (#14029)
Experiments now fail immediately with a clear error when triggered on a dataset with zero items, instead of getting stuck in "pending" status forever. The experiment trigger API returns HTTP 400 for empty datasets. Unexpected errors during async experiment setup are now logged and mark the experiment as failed. (#14031)
fix: respect lastMessages: false in recall() to disable conversation history (#12951)
Setting lastMessages: false in Memory options now correctly prevents recall() from returning previous messages. Previously, the agent would retain the full conversation history despite this setting being disabled.
Callers can still pass perPage: false explicitly to recall() to retrieve all messages (e.g., for displaying thread history in a UI).
Fixed reasoning content being lost in multi-turn conversations with thinking models (kimi-k2.5, DeepSeek-R1) when using OpenAI-compatible providers (e.g., OpenRouter). (#14103)
Previously, reasoning content could be discarded during streaming, causing 400 errors when the model tried to continue the conversation. Multi-turn conversations now preserve reasoning content correctly across all turns.
fix(workflows): propagate logger to executionEngine (#12517)
When a custom logger is set on a Workflow via __registerPrimitives or __setLogger, it is now correctly propagated to the internal executionEngine. This ensures workflow step execution errors are logged through the custom logger instead of the default ConsoleLogger, enabling proper observability integration.
Added permission denied handling for dataset pages. Datasets now show a "Permission Denied" screen when the user lacks access, matching the behavior of agents, workflows, and other resources. (#13876)
Fixed stream freezing when using Anthropic's Programmatic Tool Calling (PTC). Streams that contain only tool-input streaming chunks (without explicit tool-call chunks) now correctly synthesize tool-call events and complete without hanging. See #12390. (#12400)
Fixed subagents receiving parent's tool call/result parts in their context messages. On subsequent queries in a conversation, these references to tools the subagent doesn't have caused models (especially via custom gateways) to return blank or incorrect results. Parent delegation tool artifacts are now stripped from context before forwarding to subagents. (#13927)
Memory now automatically creates btree indexes on thread_id and resource_id metadata fields when using PgVector. This prevents sequential scans on the memory_messages vector table, resolving performance issues under high load. (#14034)
Fixes #12109
Clarified the idGenerator documentation to reflect the current context-aware function signature and documented the available IdGeneratorContext fields used for type-specific ID generation. (#14081)
Reasoning content from OpenAI models is now stripped from conversation history before replaying it to the LLM, preventing API errors on follow-up messages while preserving reasoning data in the database. Fixes #12980. (#13418)
Added transient option for data chunks to skip database persistence. Chunks marked as transient are streamed to the client for live display but not saved to storage, reducing bloat from large streaming outputs. (#13869)
await context.writer?.custom({
type: 'data-my-stream',
data: { output: line },
transient: true,
});
Workspace tools now use this to mark stdout/stderr streaming chunks as transient.
Fixed message ID mismatch between generate/stream response and memory-stored messages. When an agent used memory, the message IDs returned in the response (e.g. response.uiMessages[].id) could differ from the IDs persisted to the database. This was caused by a format conversion that stripped message IDs during internal re-processing. Messages now retain their original IDs throughout the entire save pipeline. (#13796)
Fixed assistant messages to persist content.metadata.modelId during streaming. (#12969)
This ensures stored and processed assistant messages keep the model identifier.
Developers can now reliably read content.metadata.modelId from downstream storage adapters and processors.
Fixed savePerStep: true not actually persisting messages to storage during step execution. Previously, onStepFinish only accumulated messages in the in-memory MessageList but never flushed them to the storage backend. The only persistence path was executeOnFinish, which is skipped when the stream is aborted. Now messages are flushed to storage after each completed step, so they survive page refreshes and stream aborts. Fixes https://github.com/mastra-ai/mastra/issues/13984 (#14030)
Fixed agentic loop continuing indefinitely when model hits max output tokens (finishReason: 'length'). Previously, only 'stop' and 'error' were treated as termination conditions, causing runaway token generation up to maxSteps when using structuredOutput with generate(). The loop now correctly stops on 'length' finish reason. Fixes #13012. (#13861)
Fixed tool-call arguments being silently lost when LLMs append internal tokens to JSON (#13400)
LLMs (particularly via OpenRouter and OpenAI) sometimes append internal tokens like <|call|>, <|endoftext|>, or <|end|> to otherwise valid JSON in streamed tool-call arguments. Previously, these inputs would fail JSON.parse and the tool call would silently lose its arguments (set to undefined).
Now, sanitizeToolCallInput strips these token patterns before parsing, recovering valid data that was previously discarded. Valid JSON containing <|...|> inside string values is left untouched. Truly malformed JSON still gracefully returns undefined.
Fixes https://github.com/mastra-ai/mastra/issues/13185 and https://github.com/mastra-ai/mastra/issues/13261.
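A simplified sketch of the recovery order (illustrative only; sanitizeToolCallInput's real pattern handling is more thorough):

```typescript
// Simplified sketch of the recovery idea: try to parse as-is first, so
// valid JSON containing <|...|> inside string values is left untouched;
// only on failure strip trailing token markers and retry.
function parseToolCallInput(raw: string): unknown | undefined {
  try {
    return JSON.parse(raw); // valid JSON passes through unchanged
  } catch {
    // strip trailing <|...|> tokens (e.g. <|call|>, <|end|>) and retry
    const cleaned = raw.replace(/(<\|[^|]*\|>\s*)+$/g, '').trim();
    try {
      return JSON.parse(cleaned);
    } catch {
      return undefined; // truly malformed JSON still returns undefined
    }
  }
}
```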
Fixed agent loop stopping prematurely when LLM returns tool calls with finishReason 'stop'. Some models (e.g., OpenAI gpt-5.3-codex) return 'stop' even when tool calls are present, causing the agent to halt instead of processing tool results and continuing. The agent now correctly continues the loop whenever tool calls exist, regardless of finishReason. (#14043)
Fixed (#14133)
Fixed observational memory activation using outdated buffered observations in some long-running threads. Activation now uses the latest thread state so the correct observations are promoted. (#13955)
Fixed model fallback retry behavior. Non-retryable errors (401, 403) are no longer retried on the same model before falling back. Retryable errors (429, 500) are now only retried by a single layer (p-retry) instead of being duplicated across two layers, preventing (maxRetries + 1)² total calls. The per-model maxRetries setting now correctly controls how many times p-retry retries on that specific model before the fallback loop moves to the next model. (#14039)
Added processor-driven response message ID rotation so streamed assistant IDs use the rotated ID. (#13887)
Processors that run outside the agent loop no longer need synthetic response message IDs.
Fixed a regression where dynamic model functions returning a single v1 model were treated as model arrays. (#14018)
Fixed requestContext not being forwarded to tools dynamically added by input processors. (#13827)
Added 'sandbox_access_request' to the HarnessEvent union type, enabling type-safe handling of sandbox access request events without requiring type casts. (#13648)
Fix wrong threadId and resourceId being sent to subagent (#13868)
Fixed handleChatStream not merging providerOptions from params and defaultOptions. Previously, params.providerOptions would completely replace defaultOptions.providerOptions instead of merging them. Now provider-specific keys from both sources are merged, with params.providerOptions taking precedence for the same provider. (#13820)

Added a requestContext column to the spans table. Request context data from tracing is now persisted alongside other span data. (#14020)

Added resilient column handling to insert and update operations. Unknown columns in records are now silently dropped instead of causing SQL errors, ensuring forward compatibility when newer domain packages add fields that haven't been migrated yet. (#14021)
For example, calling db.insert({ tableName, record: { id: '1', title: 'Hello', futureField: 'value' } }) will silently ignore futureField if it doesn't exist in the database table, rather than throwing. The same applies to update — unknown fields in the data payload are dropped before building the SQL statement.
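The filtering step can be pictured as follows (a hypothetical sketch; filterToKnownColumns is an illustrative name, not the adapter's actual helper):

```typescript
// Hypothetical sketch: drop record keys that aren't known table columns
// before building the SQL statement, instead of letting the insert fail.
function filterToKnownColumns(
  record: Record<string, unknown>,
  knownColumns: Set<string>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(record).filter(([key]) => knownColumns.has(key)),
  );
}
```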
Fixed slow semantic recall in the libsql, Cloudflare D1, and ClickHouse storage adapters. Recall performance no longer degrades as threads grow larger. (Fixes #11702) (#14022)
Added requestContext field to dataset item API endpoints and requestContextSchema to dataset CRUD endpoints. Added requestContext option to the experiment trigger endpoint, which gets forwarded to agent execution during experiments. (#13938)
Usage with @mastra/client-js:
// Create a dataset with a request context schema
await client.createDataset({
name: 'my-dataset',
requestContextSchema: {
type: 'object',
properties: { region: { type: 'string' } },
},
});
// Add an item with request context
await client.addDatasetItem({
datasetId: 'my-dataset',
input: { prompt: 'Hello' },
requestContext: { region: 'us-east-1' },
});
// Trigger an experiment with request context forwarded to agent
await client.triggerDatasetExperiment({
datasetId: 'my-dataset',
agentId: 'my-agent',
requestContext: { region: 'eu-west-1' },
});
Fixed Datadog tag handling: tags in key:value format (e.g. instance_name:career-scout-api) were having :true appended to the value in the Datadog UI, resulting in career-scout-api:true instead of career-scout-api. Tags are now correctly split into proper key-value pairs before being sent to Datadog's LLM Observability API. (#13900)

Added the x-mastra-dev-playground header to the allowed CORS headers list. This resolves the browser error when the playground UI (running on a different port) makes requests to the Mastra dev server. (#14097)
Improved semantic recall performance for large message histories. Semantic recall no longer loads entire threads when only the recalled messages are needed, eliminating delays that previously scaled with total message count. (Fixes #11702) (#14022)
Added requestContext column to the spans table. Request context data from tracing is now persisted alongside other span data. (#14020)
Added requestContext and requestContextSchema column support to dataset storage. Dataset items now persist request context alongside input and ground truth data. (#13938)
Add image and file attachment support to Observational Memory. The observer can now see and reason about images and files in conversation history, and attachment token counts are included in observation thresholds. Provider-backed token counting is used when available, with results cached for faster subsequent runs. (#13953)
Added observational memory repro tooling for recording, analyzing, and sanitizing captures before sharing them. (#13877)
fix: respect lastMessages: false in recall() to disable conversation history (#12951)
Setting lastMessages: false in Memory options now correctly prevents recall() from returning previous messages. Previously, the agent would retain the full conversation history despite this setting being disabled.
Callers can still pass perPage: false explicitly to recall() to retrieve all messages (e.g., for displaying thread history in a UI).
fix(memory): handle dynamic functions returning ModelWithRetries[] in observational memory model resolution (#13902)
Fixed observational memory activation using outdated buffered observations in some long-running threads. Activation now uses the latest thread state so the correct observations are promoted. (#13955)
Fixed message loss when saving certain messages so text content is preserved. (#13918)
Added a compatibility guard so observational memory now fails fast when @mastra/core does not support request-response-id-rotation. (#13887)
Added requestContext field to tracing spans. Each span now automatically captures a snapshot of the active RequestContext, making request-scoped values like user IDs, tenant IDs, and feature flags available when viewing traces. (#14020)
Added metadataIndexes option to createIndex() for PgVector. This allows creating btree indexes on specific metadata fields in vector tables, significantly improving query performance when filtering by those fields. This is especially impactful for Memory's memory_messages table, which filters by thread_id and resource_id — previously causing sequential scans under load. (#14034)
Usage example:
await pgVector.createIndex({
  indexName: 'my_vectors',
  dimension: 1536,
  metadataIndexes: ['thread_id', 'resource_id'],
});
Fixes #12109
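In Postgres terms, a btree index on a metadata field is an expression index over the JSONB column. The helper below is a hedged sketch of what such DDL looks like; metadataIndexSql is a hypothetical name, and the exact statements PgVector emits may differ.

```typescript
// Hedged sketch: a btree index on a metadata field is, in Postgres terms,
// an expression index over the JSONB column. metadataIndexSql is a
// hypothetical helper; the exact DDL PgVector generates may differ.
function metadataIndexSql(table: string, field: string): string {
  return (
    `CREATE INDEX IF NOT EXISTS ${table}_${field}_idx ` +
    `ON ${table} ((metadata->>'${field}'))`
  );
}

console.log(metadataIndexSql('my_vectors', 'thread_id'));
// CREATE INDEX IF NOT EXISTS my_vectors_thread_id_idx ON my_vectors ((metadata->>'thread_id'))
```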
Add support for pgvector's bit and sparsevec vector storage types (#12815)
You can now store binary and sparse vectors in @mastra/pg:
// Binary vectors for fast similarity search
await db.createIndex({
  indexName: 'my_binary_index',
  dimension: 128,
  metric: 'hamming', // or 'jaccard'
  vectorType: 'bit',
});

// Sparse vectors for BM25/TF-IDF representations
await db.createIndex({
  indexName: 'my_sparse_index',
  dimension: 500,
  metric: 'cosine',
  vectorType: 'sparsevec',
});
What's new:
- vectorType: 'bit' for binary vectors with 'hamming' and 'jaccard' distance metrics
- vectorType: 'sparsevec' for sparse vectors (cosine, euclidean, dotproduct)
- bit defaults to 'hamming' when no metric is specified
- includeVector round-trips work correctly for all vector types
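For reference, pgvector represents sparse vectors with a `{index:value,...}/dimensions` text literal using 1-based indices and omitting zeros. The helper below is illustrative only; toSparsevecLiteral is not part of the @mastra/pg adapter.

```typescript
// Hedged sketch: pgvector's sparsevec text literal is `{index:value,...}/dim`
// with 1-based indices and zeros omitted. toSparsevecLiteral is illustrative,
// not part of the @mastra/pg adapter.
function toSparsevecLiteral(dense: number[]): string {
  const entries = dense
    .map((value, i) => ({ index: i + 1, value }))
    .filter(e => e.value !== 0)
    .map(e => `${e.index}:${e.value}`);
  return `{${entries.join(',')}}/${dense.length}`;
}

console.log(toSparsevecLiteral([0, 1.5, 0, 0, 2])); // {2:1.5,5:2}/5
```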
Fixed slow semantic recall in the Postgres storage adapter for large threads. Recall now completes in under 500ms even for threads with 7,000+ messages, down from ~30 seconds. (Fixes #11702) (#14022)
Added Playground and Traces tabs to the agent detail page. (#13938)
Agent Playground tab provides a side-by-side environment for iterating on agent configuration (instructions, tools, model settings) and testing changes in a live chat — without modifying the deployed agent. Includes version comparison, request context configuration, and the ability to trigger dataset experiments directly from the playground.
Agent Traces tab shows a compact table of all agent traces with columns for status, timestamp, input preview, output preview, and duration. Supports date range filtering, infinite scroll pagination, and clicking rows to inspect full trace details. Includes checkbox selection and bulk "Add to dataset" for quickly building evaluation datasets from production traces.
Tools edit page now shows configured (enabled) tools in a dedicated section at the top, making it easier to find and edit tools that are already in use.
Added dataset save actions to the test chat: a per-message save button on user messages and a "Save full conversation" action at the bottom of the thread.
Added input message preview column to the observability trace list. You can now see a truncated preview of user input messages directly in the trace table, making it easier to find specific traces without clicking into each one. (#14025)
The "Run Experiment" button is now disabled when a dataset has no items, with a tooltip explaining that items must be added first. (#14031)
fix: maxTokens from Studio Advanced Settings now correctly limits model output (#13912)
The modelSettingsArgs object was prematurely renaming maxTokens to maxOutputTokens. The React hook (useChat) destructures maxTokens from this object, so the rename caused it to receive undefined, and the value was silently dropped from the API request body.
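The failure mode generalizes: renaming a key before a consumer destructures the old name silently yields undefined. A minimal repro of that pattern, with illustrative names rather than the actual Studio/useChat code:

```typescript
// Illustrative repro of the bug class: a key renamed before a consumer
// destructures the old name silently becomes undefined. Names are
// hypothetical, not the actual Studio/useChat code.
const settings = { maxTokens: 256, temperature: 0.7 };

// Premature rename (what the buggy code path effectively did):
const renamed: Record<string, unknown> = {
  ...settings,
  maxOutputTokens: settings.maxTokens,
};
delete renamed.maxTokens;

// A downstream consumer still destructures the old key:
const { maxTokens } = renamed as { maxTokens?: number };
console.log(maxTokens); // undefined: the value silently drops out of the request
```

The fix is to perform the rename at the last moment, after every consumer of the old key has read it.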
Added permission denied handling for dataset pages. Datasets now show a "Permission Denied" screen when the user lacks access, matching the behavior of agents, workflows, and other resources. (#13876)
Fixed sandbox execution badge to show full streaming output during live sessions instead of snapping to the truncated tool result. The truncated view now only appears after page refresh when streaming data is no longer available. (#13869)
Fix the observational memory sidebar so the observation token label uses the live observation window token count shown by the progress UI. (#13953)
Add Zod v4 and Standard Schema support (#12238)
- Updated z.record() calls to the 2-argument form (key + value schema) required by Zod v4
- Renamed ZodError.errors to ZodError.issues (Zod v4 API change)
- Bumped @ai-sdk/provider versions for Zod v4 compatibility
- Added a packages/core/src/schema/ module that re-exports from @mastra/schema-compat
- Added a PublicSchema type for schema parameters
- Added toStandardSchema() for normalizing schemas across Zod v3, Zod v4, AI SDK Schema, and JSON Schema
- Added standardSchemaToJSONSchema() for JSON Schema conversion
- Added adapters: @mastra/schema-compat/adapters/ai-sdk, @mastra/schema-compat/adapters/zod-v3, @mastra/schema-compat/adapters/json-schema
- Added unrepresentable: 'any' support
BREAKING CHANGE: Minimum Zod version is now ^3.25.0 for v3 compatibility or ^4.0.0 for v4
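For orientation, Standard Schema is a small interface contract: any object exposing ~standard.validate can be consumed uniformly regardless of which schema library produced it. A hedged sketch with a hand-written schema (illustrative, not Mastra's toStandardSchema() output):

```typescript
// Hedged sketch of the Standard Schema contract that toStandardSchema()
// normalizes to: any object exposing `~standard.validate` can be consumed
// uniformly, regardless of which schema library produced it. The hand-written
// schema below is illustrative, not Mastra API.
interface StandardResult<T> {
  value?: T;
  issues?: ReadonlyArray<{ message: string }>;
}

interface StandardSchemaLike<T> {
  '~standard': {
    version: 1;
    vendor: string;
    validate: (value: unknown) => StandardResult<T>;
  };
}

const nonEmptyString: StandardSchemaLike<string> = {
  '~standard': {
    version: 1,
    vendor: 'example',
    validate: value =>
      typeof value === 'string' && value.length > 0
        ? { value }
        : { issues: [{ message: 'expected a non-empty string' }] },
  },
};

console.log(nonEmptyString['~standard'].validate('hi')); // { value: 'hi' }
console.log(nonEmptyString['~standard'].validate(42).issues?.length); // 1
```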
Fixed Gemini supervisor agent tool calls failing with INVALID_ARGUMENT when delegated tool schemas include nullable fields. Fixes #13988. (#14012)
Fixed OpenAI and OpenAI Reasoning compat layers to ensure all properties appear in the JSON Schema required array when using processToJSONSchema. This prevents OpenAI strict mode rejections for schemas with optional, default, or nullish fields. (Fixes #12284) (#13695)
Added requestContext field to dataset item API endpoints and requestContextSchema to dataset CRUD endpoints. Added requestContext option to the experiment trigger endpoint, which gets forwarded to agent execution during experiments. (#13938)
Usage with @mastra/client-js:
// Create a dataset with a request context schema
await client.createDataset({
  name: 'my-dataset',
  requestContextSchema: {
    type: 'object',
    properties: { region: { type: 'string' } },
  },
});

// Add an item with request context
await client.addDatasetItem({
  datasetId: 'my-dataset',
  input: { prompt: 'Hello' },
  requestContext: { region: 'us-east-1' },
});

// Trigger an experiment with request context forwarded to the agent
await client.triggerDatasetExperiment({
  datasetId: 'my-dataset',
  agentId: 'my-agent',
  requestContext: { region: 'eu-west-1' },
});
Experiments now fail immediately with a clear error when triggered on a dataset with zero items, instead of getting stuck in "pending" status forever. The experiment trigger API returns HTTP 400 for empty datasets. Unexpected errors during async experiment setup are now logged and mark the experiment as failed. (#14031)
Fixed getPublicOrigin to parse only the first value from the X-Forwarded-Host header. When requests pass through multiple proxies, each proxy appends its host to the header, creating a comma-separated list. The previous code used the raw value, producing a malformed URL that broke OAuth redirect URIs. Now only the first (client-facing) host is used, per RFC 7239. (#13935)
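The fix reduces to taking only the first element of the comma-separated header value. A minimal sketch (firstForwardedHost is a hypothetical name for the parsing step inside getPublicOrigin):

```typescript
// Sketch of the fix: use only the first (client-facing) host from a
// comma-separated X-Forwarded-Host chain, per RFC 7239 semantics.
// firstForwardedHost is a hypothetical name, not the actual implementation.
function firstForwardedHost(header: string): string {
  return header.split(',')[0].trim();
}

console.log(firstForwardedHost('app.example.com, proxy1.internal, proxy2.internal'));
// app.example.com
```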
Add new models to GeminiVoiceModel type and mark deprecated models with @deprecated JSDoc. (#12625)
Added:
- gemini-live-2.5-flash-native-audio (GA)
- gemini-live-2.5-flash-preview-native-audio-09-2025
- gemini-2.5-flash-native-audio-preview-12-2025
- gemini-2.5-flash-native-audio-preview-09-2025
Deprecated:
- gemini-2.0-flash-exp (shut down 2025-12-09)
- gemini-2.0-flash-exp-image-generation (shut down 2025-11-14)
- gemini-2.0-flash-live-001 (shut down 2025-12-09)
- gemini-live-2.5-flash-preview-native-audio (use gemini-live-2.5-flash-preview-native-audio-09-2025)
- gemini-2.5-flash-exp-native-audio-thinking-dialog (shut down 2025-10-20)
- gemini-live-2.5-flash-preview (shut down 2025-12-09)
inputExamples to improve model tool-call accuracy: Tool definitions can now include inputExamples, which are passed through to models that support them (e.g., Anthropic's input_examples) to demonstrate valid inputs and reduce malformed tool calls.
RequestContext (auth/cookie forwarding): @mastra/mcp adds requestContext support to custom fetch functions for MCP HTTP server definitions, enabling request-scoped forwarding of cookies/bearer tokens during tool execution while remaining backward compatible with (url, init) fetch signatures.
Provider stream errors are now consistently surfaced from generate()/resumeGenerate(), AI SDK errors are routed through the Mastra logger with structured context, client-side tools no longer lose history in stateless deployments, and memory.deleteThread()/deleteMessages() now automatically cleans up orphaned vector embeddings across supported vector stores.
Add inputExamples support on tool definitions to show AI models what valid tool inputs look like. Models that support this (e.g., Anthropic's input_examples) will receive the examples alongside the tool schema, improving tool call accuracy. (#12932)
Added an inputExamples field to ToolAction, CoreTool, and the Tool class:
const weatherTool = createTool({
  id: "get-weather",
  description: "Get weather for a location",
  inputSchema: z.object({
    city: z.string(),
    units: z.enum(["celsius", "fahrenheit"]),
  }),
  inputExamples: [
    { input: { city: "New York", units: "fahrenheit" } },
    { input: { city: "Tokyo", units: "celsius" } },
  ],
  execute: async ({ city, units }) => {
    return await fetchWeather(city, units);
  },
});
dependencies updates: (#13209)
p-map@^7.0.4 ↗︎ (from ^7.0.3, in dependencies)
dependencies updates: (#13210)
p-retry@^7.1.1 ↗︎ (from ^7.1.0, in dependencies)
Update provider registry and model documentation with latest models and providers (33e2fd5)
Fixed execute_command tool timeout parameter to accept seconds instead of milliseconds, preventing agents from accidentally setting extremely short timeouts (#13799)
Skill tools are now stable across conversation turns and prompt-cache friendly. (#13744)
- skill-activate → skill — returns full skill instructions directly in the tool result
- skill-read-reference, skill-read-script, skill-read-asset → skill_read
- skill-search → skill_search
- <available_skills> in the system message is now sorted deterministically
Fixed Cloudflare Workers build failures when using @mastra/core. Local process execution now loads its runtime dependency lazily, preventing incompatible Node-only modules from being bundled during worker builds. (#13813)
Fix mimeType → mediaType typo in sendMessage file part construction. This caused file attachments to be routed through the V4 adapter instead of V5, preventing them from being correctly processed by AI SDK v5 providers. (#13833)
Fixed onIterationComplete feedback being discarded when it returns { continue: false } — feedback is now added to the conversation and the model gets one final turn to produce a text response before the loop stops. (#13759)
Fixed generate() and resumeGenerate() to always throw provider stream errors. Previously, certain provider errors were silently swallowed, returning false "successful" empty responses. Now errors are always surfaced to the caller, making retry logic reliable when providers fail transiently. (#13802)
Remove the default maxSteps limit so stopWhen can control sub-agent execution (#13764)
Fix suspendedToolRunId required error when it shouldn't be required (#13722)
Fixed the subagent tool to default maxSteps to 50 when no stop condition is configured, preventing unbounded execution loops. When stopWhen is set, maxSteps is left to the caller. (#13777)
Fixed prompt failures by removing assistant messages that only contain sources before model calls. (#13790)
Fixed RequestContext constructor crashing when constructed from a deserialized plain object. (#13856)
Fixed LLM errors (generateText, generateObject, streamText, streamObject) being swallowed by the AI SDK's default handler instead of being routed through the Mastra logger. Errors now appear with structured context (runId, modelId, provider, etc.) in your logger, and streaming errors are captured via onError callbacks. (#13857)
Fixed workspace tool output truncation so it no longer gets prematurely cut off when short lines precede a very long line (e.g. minified JSON). Output now uses the full token budget instead of stopping at line boundaries, resulting in more complete tool results. (#13828)
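The idea behind the fix can be sketched with a character budget standing in for tokens: slice mid-line up to the budget instead of stopping at the last complete line. truncateToBudget is illustrative, not the actual workspace code:

```typescript
// Illustrative fix: spend the whole budget (characters stand in for tokens)
// even when short lines precede one very long line, instead of cutting at
// the last complete line before the budget. truncateToBudget is hypothetical.
function truncateToBudget(output: string, budget: number): string {
  if (output.length <= budget) return output;
  // The old behavior cut at the last newline before the budget, which could
  // discard almost everything when a long minified line followed short ones.
  return output.slice(0, budget) + '[truncated]';
}

const minified = 'ok\n' + JSON.stringify({ data: 'x'.repeat(50) });
console.log(truncateToBudget(minified, 24));
```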
Fixed subagent tool to default maxSteps to 50 when no stopWhen condition is configured, preventing unbounded agent loops. When stopWhen is set, maxSteps remains unset so the stop condition controls termination. (#13777)
semver@^7.7.4 ↗︎ (from ^7.7.2, in dependencies)
Fixed inline import() statements referencing workspace packages (via @internal/*) in published .d.ts files (#13811)
dependencies updates: (#13134)
jsonwebtoken@^9.0.3 ↗︎ (from ^9.0.2, in dependencies)
dependencies updates: (#13135)
jwks-rsa@^3.2.2 ↗︎ (from ^3.2.0, in dependencies)
lru-cache@^11.2.6 ↗︎ (from ^11.1.0, in dependencies)
Fix agent losing conversation context ("amnesia") when using client-side tools with stateless server deployments. Recursive calls after tool execution now include the full conversation history when no threadId is provided. (#11476)
couchbase@^4.6.1 ↗︎ (from ^4.6.0, in dependencies)
Added custom user-agent header to all Elasticsearch requests. Every request now identifies itself as mastra-elasticsearch/<version> via the user-agent header, enabling usage tracking in Elasticsearch server logs and analytics tools. (#13740)
dependencies updates: (#10195)
fastembed@^2.1.0 ↗︎ (from ^1.14.4, in dependencies)
Add warmup() export to pre-download fastembed models without creating ONNX sessions. This prevents concurrent download race conditions when multiple consumers call FlagEmbedding.init() in parallel, which could corrupt the model archive and cause Z_BUF_ERROR. (#13752)
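The race warmup() avoids is the classic concurrent-initialization problem, typically solved with a single shared in-flight promise. A minimal single-flight sketch; downloadModelOnce is hypothetical, not the fastembed implementation:

```typescript
// Single-flight sketch of the race warmup() avoids: concurrent callers share
// one in-flight download promise so the expensive step runs exactly once.
// downloadModelOnce is hypothetical, not the fastembed implementation.
let downloads = 0;
let inflight: Promise<void> | undefined;

function downloadModelOnce(): Promise<void> {
  if (!inflight) {
    inflight = new Promise<void>(resolve => {
      downloads++; // the corruption-prone step: runs once, shared by all callers
      resolve();
    });
  }
  return inflight;
}

Promise.all([downloadModelOnce(), downloadModelOnce(), downloadModelOnce()]).then(() => {
  console.log(downloads); // 1
});
```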
Added requestContext support to the custom fetch option in MCP client HTTP server definitions. The fetch function now receives the current request context as an optional third argument, enabling users to forward authentication cookies, bearer tokens, and other request-scoped data from the incoming request to remote MCP servers during tool execution. (#13773)
Example usage:
const mcp = new MCPClient({
  servers: {
    myServer: {
      url: new URL("https://api.example.com/mcp"),
      fetch: async (url, init, requestContext) => {
        const headers = new Headers(init?.headers);
        const cookie = requestContext?.get("cookie");
        if (cookie) {
          headers.set("cookie", cookie);
        }
        return fetch(url, { ...init, headers });
      },
    },
  },
});
This change is fully backward-compatible — existing fetch functions with (url, init) signatures continue to work unchanged. Closes #13769.
Improved confidence in observational memory threshold behavior through expanded automated test coverage. No public API changes. (#13785)
Improved Observational Memory reliability: fixed future-date annotations producing invalid strings, fixed a duplicate inline-date matching bug, and hardened the process-level operation registry against concurrent operation tracking errors. (#13774)
Fixed buffered activation cleanup to respect the configured retention floor so message history does not collapse unexpectedly after activation. (#13745)
Fixed orphaned vector embeddings accumulating when memory threads or messages are deleted. Calling memory.deleteThread() or memory.deleteMessages() now automatically cleans up associated vector embeddings across all supported vector store backends. Cleanup is non-blocking and does not slow down the delete call. Also fixed updateMessages not cleaning up old vectors correctly when using a non-default index separator (e.g. Pinecone). (#12227)
Repeated token counts in OM are faster and more reliable, estimates are now persisted on metadata, and totals remain consistent after saving and loading conversations. (#13745)
Improved observational memory marker creation consistency for more reliable debugging and UI status behavior. No public API changes. (#13779)
Fixed observational memory token counting to use stored model output for tool results transformed with toModelOutput. (#13862)
Fix working memory data corruption when using resource scope across threads (#12415)
- updateWorkingMemory(): prevent race conditions during concurrent updates
- __experimental_updateWorkingMemoryVNext(): detect template duplicates with whitespace variations
- updateWorkingMemoryTool: prevent the LLM from accidentally wiping existing data by sending an empty template
Fixed MongoDB observational memory buffering so legacy records with bufferedObservationChunks: null can append chunks safely and continue storing chunk buffers as arrays after activation. (#13803)
@mastra/observability now requires @mastra/core >= 1.9.0. (#13838)
This prevents installs with older core versions that can cause runtime errors.
dependencies updates: (#13236)
papaparse@^5.5.3 ↗︎ (from ^5.4.1, in dependencies)
dependencies updates: (#13271)
@codemirror/merge@^6.12.0 ↗︎ (from ^6.10.2, in dependencies)
@codemirror/view@^6.39.15 ↗︎ (from ^6.39.14, in dependencies)
dependencies updates: (#13283)
semver@^7.7.4 ↗︎ (from ^7.7.2, in dependencies)
dependencies updates: (#13771)
@uiw/codemirror-theme-dracula@^4.25.5 ↗︎ (from ^4.25.4, in dependencies)
dependencies updates: (#13847)
@codemirror/autocomplete@^6.20.1 ↗︎ (from ^6.20.0, in dependencies)
@codemirror/lang-javascript@^6.2.5 ↗︎ (from ^6.2.4, in dependencies)
@codemirror/view@^6.39.16 ↗︎ (from ^6.39.15, in dependencies)
Fixed experiment results page showing only 10 items, empty summary tab with no scorers, and scores not updating during experiment runs. (#13831)
Updated skill activation indicators to match new skill tool names. (#13744)
Fix saving traces and scores as dataset items in the Studio. (#13800)
Fixed Playground UI agent settings so temperature is no longer reset to 1 on refresh. Temperature now stays unset unless saved settings or code defaults provide a value. (#13778)
Fixed documentation link in empty datasets page pointing to the correct datasets docs instead of evals (#13872)
Fixed wrong threads being shown for agents in Studio (#13789)
Fixed dev playground auth bypass not working in capabilities endpoint. The client now passes MastraClient headers (including x-mastra-dev-playground) to the auth capabilities endpoint, and the server returns disabled state when this header is present. This prevents the login gate from appearing in dev playground mode. (#13801)
Fixed Studio so custom gateway models appear in the provider list. (#13772) Improved /agents/providers API to include both built-in providers and providers from configured custom gateways.
Workspaces get significant capabilities: workspace tool output is now token-limited, ANSI-stripped for model context, and .gitignore-aware to reduce token usage. Sandbox commands support abortSignal for cancellation, plus background process streaming callbacks (onStdout/onStderr/onExit), and local symlink mounts in LocalSandbox. Tools can be exposed under custom names via WorkspaceToolConfig.name. LSP binary resolution is now configurable (binaryOverrides, searchPaths, packageRunner), making workspace diagnostics work reliably across monorepos, global installs, and custom setups.
Mastra now ships a pluggable auth system (@mastra/core/auth) plus server-side auth routes and convention-based route permission enforcement (@mastra/server + all server adapters). New auth provider packages (@mastra/auth-cloud, @mastra/auth-studio, @mastra/auth-workos) add OAuth/SSO, session management, and RBAC—Studio UI also gains permission-gated auth screens/components.
Workflow results now include stepExecutionPath (also available mid-execution and preserved across resume/restart), and execution logs are smaller by deduping payloads. Storage backends add atomic updateWorkflowResults/updateWorkflowState with a supportsConcurrentUpdates() check—enabling safer concurrent workflow updates (supported in e.g. Postgres/LibSQL/MongoDB/DynamoDB/Upstash; explicitly not supported in some backends like ClickHouse/Cloudflare/Lance).
harness.sendMessage() now uses files instead of images (supports any file type, preserves filenames, and auto-decodes text-based files).
Added onStepFinish and onError callbacks to NetworkOptions, allowing per-LLM-step progress monitoring and custom error handling during network execution. Closes #13362. (#13370)
Before: No way to observe per-step progress or handle errors during network execution.
const stream = await agent.network('Research AI trends', {
memory: { thread: 'my-thread', resource: 'my-resource' },
});
After: onStepFinish and onError are now available in NetworkOptions.
const stream = await agent.network('Research AI trends', {
onStepFinish: event => {
console.log('Step completed:', event.finishReason, event.usage);
},
onError: ({ error }) => {
console.error('Network error:', error);
},
memory: { thread: 'my-thread', resource: 'my-resource' },
});
Add workflow execution path tracking and optimize execution logs (#11755)
Workflow results now include a stepExecutionPath array showing the IDs of each step that executed during a workflow run. You can use this to understand exactly which path your workflow took.
// Before: no execution path in results
const result = await workflow.execute({ triggerData });
// result.stepExecutionPath → undefined
// After: stepExecutionPath is available in workflow results
const result = await workflow.execute({ triggerData });
console.log(result.stepExecutionPath);
// → ['step1', 'step2', 'step4'] — the actual steps that ran
stepExecutionPath is available in:
- WorkflowResult.stepExecutionPath — see which steps ran after execution completes
- ExecutionContext.stepExecutionPath — access the path mid-execution inside your steps
Workflow execution logs are now more compact and easier to read. Step outputs are no longer duplicated as the next step's input, reducing the size of execution results while maintaining full visibility.
This is particularly beneficial for AI agents and LLM-based workflows, where reducing context size improves performance and cost efficiency.
Related: #8951
Added authentication interfaces and Enterprise Edition RBAC support. (#13163)
New @mastra/core/auth export with pluggable interfaces for building auth providers:
- IUserProvider — user lookup and management
- ISessionProvider — session creation, validation, and cookie handling
- ISSOProvider — SSO login and callback flows
- ICredentialsProvider — username/password authentication
Default implementations are included out of the box.
Enterprise Edition (@mastra/core/auth/ee) adds RBAC, ACL, and license validation:
import { buildCapabilities } from '@mastra/core/auth/ee';
const capabilities = buildCapabilities({
rbac: myRBACProvider,
acl: myACLProvider,
});
Built-in role definitions (owner, admin, editor, viewer) and a static RBAC provider are included for quick setup. Enterprise features require a valid license key via the MASTRA_EE_LICENSE environment variable.
Workspace sandbox tool results (execute_command, kill_process, get_process_output) sent to the model now strip ANSI color codes via toModelOutput, while streamed output to the user keeps colors. This reduces token usage and improves model readability. (#13440)
Workspace execute_command tool now extracts trailing | tail -N pipes from commands so output streams live to the user, while the final result sent to the model is still truncated to the last N lines.
Workspace tools that return potentially large output now enforce a token-based output limit (~3k tokens by default) using tiktoken for accurate counting. The limit is configurable per-tool via maxOutputTokens in WorkspaceToolConfig. Each tool uses a truncation strategy suited to its output:
- read_file, grep, list_files — truncate from the end (keep imports, first matches, top-level tree)
- execute_command, get_process_output, kill_process — head+tail sandwich (keep early output + final status)
const workspace = new Workspace({
tools: {
mastra_workspace_execute_command: {
maxOutputTokens: 5000, // override default 3k
},
},
});
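The head+tail sandwich strategy can be sketched roughly as follows. This is a simplified, line-based illustration with a hypothetical function name; the actual tools count tokens with tiktoken rather than lines.

```typescript
// Hypothetical sketch of the head+tail "sandwich" truncation described above:
// keep the earliest and latest lines of long output and elide the middle,
// so the model still sees startup output and the final status.
function headTailTruncate(output: string, maxLines: number): string {
  const lines = output.split('\n');
  if (lines.length <= maxLines) return output;
  const head = Math.ceil(maxLines / 2);
  const tail = maxLines - head;
  const omitted = lines.length - maxLines;
  return [
    ...lines.slice(0, head),
    `... [${omitted} lines omitted] ...`,
    ...lines.slice(lines.length - tail),
  ].join('\n');
}
```

The end-truncation strategy for read_file and grep is the same idea keeping only the head.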
Workspace tools (list_files, grep) now automatically respect .gitignore, filtering out directories like node_modules and dist from results. Explicitly targeting an ignored path still works. Also lowered the default tree depth from 3 to 2 to reduce token usage. (#13724)
Added maxSteps and stopWhen support to HarnessSubagent. (#13653)
You can now define maxSteps and stopWhen on a harness subagent so spawned subagents can use custom loop limits instead of relying only on the default maxSteps: 50 fallback.
const harness = new Harness({
id: 'dev-harness',
modes: [{ id: 'build', default: true, agent: buildAgent }],
subagents: [
{
id: 'explore',
name: 'Explore',
description: 'Inspect the codebase',
instructions: 'Investigate and summarize findings.',
defaultModelId: 'openai/gpt-4o',
maxSteps: 7,
stopWhen: ({ steps }) => steps.length >= 3,
},
],
});
Added OpenAI WebSocket transport for streaming responses with auto-close and manual transport access (#13531)
Added name property to WorkspaceToolConfig for remapping workspace tool names. Tools can now be exposed under custom names to the LLM while keeping the original constant as the config key. (#13687)
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: './project' }),
tools: {
mastra_workspace_read_file: { name: 'view' },
mastra_workspace_grep: { name: 'search_content' },
mastra_workspace_edit_file: { name: 'string_replace_lsp' },
},
});
Also removed hardcoded tool-name cross-references from edit-file and ast-edit tool descriptions, since tools can be renamed or disabled.
Adds requestContext passthrough to Harness runtime APIs. (#13650)
You can now pass requestContext to Harness runtime methods so tools and subagents receive request-scoped values.
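As a rough illustration of the request-scoped threading (the types and helper below are hypothetical, not the real Harness API):

```typescript
// Hypothetical sketch: a runtime method accepts a requestContext and passes the
// same object through to the tools it invokes, so tools and subagents read
// request-scoped values without globals.
type RequestContext = Map<string, unknown>;

async function runToolWithContext<T>(
  tool: (ctx: RequestContext) => T,
  requestContext: RequestContext,
): Promise<T> {
  // The context is threaded through unchanged rather than copied or re-created.
  return tool(requestContext);
}
```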
Added binaryOverrides, searchPaths, and packageRunner options to LSPConfig to support flexible language server binary resolution. (#13677)
Previously, workspace LSP diagnostics only worked when language server binaries were installed in the project's node_modules/.bin/. There was no way to use globally installed binaries or point to a custom install.
New LSPConfig fields:
- binaryOverrides: Override the binary command for a specific server, bypassing the default lookup. Useful when the binary is installed in a non-standard location.
- searchPaths: Additional directories to search when resolving Node.js modules (e.g. typescript/lib/tsserver.js). Each entry should be a directory whose node_modules contains the required packages.
- packageRunner: Package runner to use as a last-resort fallback when no binary is found (e.g. 'npx --yes', 'pnpm dlx', 'bunx'). Off by default — package runners can hang in monorepos with workspace links.
Binary resolution order per server: explicit binaryOverrides → project node_modules/.bin/ → process.cwd() node_modules/.bin/ → searchPaths node_modules/.bin/ → global PATH → packageRunner fallback.
const workspace = new Workspace({
lsp: {
// Point to a globally installed binary
binaryOverrides: {
typescript: '/usr/local/bin/typescript-language-server --stdio',
},
// Resolve typescript/lib/tsserver.js from a tool's own node_modules
searchPaths: ['/path/to/my-tool'],
// Use a package runner as last resort (off by default)
packageRunner: 'npx --yes',
},
});
Also exported buildServerDefs(config?) for building config-aware server definitions, and LSPConfig / LSPServerDef types from @mastra/core/workspace.
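The resolution order above can be sketched as an ordered list of candidate lookups tried in sequence. This is a simplified illustration with hypothetical names; the real resolver checks the filesystem and PATH.

```typescript
// Hypothetical sketch: try each resolution strategy in the documented order
// (overrides → node_modules/.bin dirs → searchPaths → PATH → packageRunner)
// and return the first hit.
type Lookup = (server: string) => string | undefined;

function resolveBinary(server: string, lookups: Lookup[]): string | undefined {
  for (const lookup of lookups) {
    const bin = lookup(server);
    if (bin !== undefined) return bin;
  }
  return undefined; // no binary found; diagnostics stay disabled for this server
}
```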
Added a unified observability type system with interfaces for structured logging, metrics (counters, gauges, histograms), scores, and feedback alongside the existing tracing infrastructure. (#13058)
Why? Previously, only tracing flowed through execution contexts. Logging was ad-hoc and metrics did not exist. This change establishes the type system and context plumbing so that when concrete implementations land, logging and metrics will flow through execute callbacks automatically — no migration needed.
What changed:
- ObservabilityContext interface combining tracing, logging, and metrics contexts
- LoggerContext, MetricsContext, ScoreInput, FeedbackInput, and ObservabilityEventBus
- createObservabilityContext() factory and resolveObservabilityContext() resolver with no-op defaults for graceful degradation
- loggerVNext and metrics getters on the Mastra class
Added setServer() public method to the Mastra class, enabling post-construction configuration of server settings. This allows platform tooling to inject server defaults (e.g. auth) into user-created Mastra instances at deploy time. (#13729)
const mastra = new Mastra({ agents: { myAgent } });
// Platform tooling can inject server config after construction
mastra.setServer({ ...mastra.getServer(), auth: new MastraAuthWorkos() });
Added local symlink mounts in LocalSandbox so sandboxed commands can access locally-mounted filesystem paths. (#13474)
Improved mounted paths so commands resolve consistently in local sandboxes.
Improved workspace instructions so developers can quickly find mounted data paths.
Why: Local sandboxes can now run commands against locally-mounted data without manual path workarounds.
Usage example:
const workspace = new Workspace({
mounts: {
'/data': new LocalFilesystem({ basePath: '/path/to/data' }),
},
sandbox: new LocalSandbox({ workingDirectory: './workspace' }),
});
await workspace.init();
// Sandboxed commands can access the mount path via symlink
await workspace.sandbox.executeCommand('ls data');
Abort signal and background process callbacks (#13597)
- abortSignal in command options
- execute_command now supports onStdout, onStderr, and onExit callbacks for streaming output and exit notifications
- backgroundProcesses config in workspace tool options for wiring up background process callbacks
Added supportsConcurrentUpdates() method to the WorkflowsStorage base class and abstract updateWorkflowResults/updateWorkflowState methods for atomic workflow state updates. The evented workflow engine now checks supportsConcurrentUpdates() and throws a clear error if the storage backend does not support concurrent updates. (#12575)
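The capability check can be sketched like this. Names mirror the changelog, but this is an illustrative stub, not the @mastra/core implementation.

```typescript
// Hypothetical sketch: the engine asks the storage backend whether it supports
// atomic concurrent updates and fails fast with a clear error if it does not.
abstract class WorkflowsStorageSketch {
  supportsConcurrentUpdates(): boolean {
    return false; // conservative default; capable backends override this
  }
}

class NonConcurrentStore extends WorkflowsStorageSketch {}

function assertConcurrentCapable(storage: WorkflowsStorageSketch): void {
  if (!storage.supportsConcurrentUpdates()) {
    throw new Error('This storage backend does not support concurrent workflow updates');
  }
}
```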
Update provider registry and model documentation with latest models and providers (edee4b3)
Fixed sandbox command execution crashing the parent process on some Node.js versions by explicitly setting stdio to pipe for detached child processes. (#13697)
Fixed an issue where generating a response in an empty thread (system-only messages) would throw an error. Providers that support system-only prompts like Anthropic and OpenAI now work as expected. A warning is logged for providers that require at least one user message (e.g. Gemini). Fixes #13045. (#13164)
Sanitize invalid tool names in agent history so Bedrock retries continue instead of failing request validation. (#13633)
Fixed path matching for auto-indexing and skills discovery. (#13511)
Single file paths, directory globs, and SKILL.md file globs now resolve consistently.
Trailing slashes are now handled correctly.
Harness.cloneThread() now resolves dynamic memory factories before cloning, fixing "cloneThread is not a function" errors when memory is provided as a factory function. HarnessConfig.memory type widened to DynamicArgument<MastraMemory>. (#13569)
Fixed workspace tools being callable by their old default names (e.g. mastra_workspace_edit_file) when renamed via tools config. The tool's internal id is now updated to match the remapped name, preventing fallback resolution from bypassing the rename. (#13694)
Reduced default max output tokens from 3000 to 2000 for all workspace tools. List files tool uses a 1000 token limit. Suppressed "No errors or warnings" LSP diagnostic message when there are no issues. (#13730)
Add first-class custom provider support for MastraCode model selection and routing. (#13682)
- /custom-providers command to create, edit, and delete custom OpenAI-compatible providers and manage model IDs under each provider
- settings.json with schema parsing/validation updates
- customModelCatalogProvider so custom models appear in existing selectors (/models, /subagents)
- ModelRouterLanguageModel using provider-specific URL and optional API key settings
sendMessage now accepts files instead of images, supporting any file type with optional filename. (#13574)
Breaking change: Rename images to files when calling harness.sendMessage():
// Before
await harness.sendMessage({
content: 'Analyze this',
images: [{ data: base64Data, mimeType: 'image/png' }],
});
// After
await harness.sendMessage({
content: 'Analyze this',
files: [{ data: base64Data, mediaType: 'image/png', filename: 'screenshot.png' }],
});
- files accepts { data, mediaType, filename? } — filenames are now preserved through storage and message history
- Text-based files (text/*, application/json) are automatically decoded to readable text content instead of being sent as binary, which models could not process
- HarnessMessageContent now includes a file type, so file parts round-trip correctly through message history
Fixed Agent Network routing failures for users running Claude models through AWS Bedrock by removing trailing whitespace from the routing assistant message. (#13624)
Fixed thread title generation when user messages include file parts (for example, images). (#13671) Titles now generate reliably instead of becoming empty.
Fixed parallel workflow tool calls so each call runs independently. (#13478)
When an agent starts multiple tool calls to the same workflow at the same time, each call now runs with its own workflow run context. This prevents duplicated results across parallel calls and ensures each call returns output for its own input. Also ensures workflow tool suspension and manual resumption correctly preserves the run context.
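The behavior can be pictured as each call minting its own run context. The names below are hypothetical; the real run context carries much more state.

```typescript
// Hypothetical sketch: parallel workflow tool calls each get a fresh,
// independent run context, so one call's results cannot bleed into another's.
interface RunContext {
  runId: string;
  input: string;
}

let nextRun = 0;
async function runWorkflowTool(input: string): Promise<RunContext> {
  const ctx: RunContext = { runId: `run-${++nextRun}`, input }; // fresh per call
  return ctx;
}
```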
Fixed sub-agent instructions being overridden when the parent agent uses an OpenAI model. Previously, OpenAI models would fill in the optional instructions parameter when calling a sub-agent tool, completely replacing the sub-agent's own instructions. Now, any LLM-provided instructions are appended to the sub-agent's configured instructions instead of replacing them. (#13578)
Tool lifecycle hooks (onInputStart, onInputDelta, onInputAvailable, onOutput) now fire correctly during agent execution for tools created via createTool(). Previously these hooks were silently ignored. Affected: createTool, Tool, CoreToolBuilder.build, CoreTool. (#13708)
Fixed an issue where sub-agent messages inside a workflow tool would corrupt the parent agent's memory context. When an agent calls a workflow as a tool and the workflow runs sub-agents with their own memory threads, the parent's thread identity on the shared request context is now correctly saved before the workflow executes and restored afterward, preventing messages from being written to the wrong thread. (#13637)
Fix workspace tool output truncation to handle tokenizer special tokens (#13725)
Added a warning when a LocalFilesystem mount uses contained: false, alerting users to path resolution issues in mount-based workspaces. Use contained: true (default) or allowedPaths to allow specific host paths. (#13474)
Fixed harness handling for observational memory failures so streams stop immediately when OM reports a failed run or buffering cycle. (#13563)
The harness now emits the existing OM failure event (om_observation_failed, om_reflection_failed, or om_buffering_failed), emits a top-level error with OM context, and aborts the active stream. This prevents normal assistant output from continuing after an OM model failure.
Fixed subagents being unable to access files outside the project root. Subagents now inherit both user-approved sandbox paths and skill paths (e.g. ~/.claude/skills) from the parent agent. (#13700)
Fixed agent-as-tools schema generation so Gemini accepts tool definitions for suspend/resume flows. (#13715)
This prevents schema validation failures when resumeData is present.
Fixed tool approval resume failing when Agent is used without an explicit Mastra instance. The Harness now creates an internal Mastra instance with storage and registers it on mode agents, ensuring workflow snapshots persist and load correctly. Also fixed requestContext serialization using toJSON() to prevent circular reference errors during snapshot persistence. (#13519)
Fixed spawn error handling in LocalSandbox by switching to execa. Previously, spawning a process with an invalid working directory or missing command could crash with an unhandled Node.js exception. Now returns descriptive error messages instead. Also fixed timeout handling to properly kill the entire process group for compound commands. (#13734)
HTTP request logging can now be configured in detail via apiReqLogs in the server config. The new HttpLoggingConfig type is exported from @mastra/core/server. (#11907)
import type { HttpLoggingConfig } from '@mastra/core/server';
const loggingConfig: HttpLoggingConfig = {
enabled: true,
level: 'info',
excludePaths: ['/health', '/metrics'],
includeHeaders: true,
includeQueryParams: true,
redactHeaders: ['authorization', 'cookie'],
};
Remove internal processes field from sandbox provider options (#13597)
The processes field is no longer exposed in constructor options for E2B, Daytona, and Blaxel sandbox providers. This field is managed internally and was not intended to be user-configurable.
Fixed abort signal propagation in agent networks. When using abortSignal with agent.network(), the signal now correctly prevents tool execution when abort fires during routing, and no longer saves partial results to memory when sub-agents, tools, or workflows are aborted. (#13491)
Fixed Memory.recall() to include pagination metadata (total, page, perPage, hasMore) in its response, ensuring consistent pagination regardless of whether agentId is provided. Fixes #13277 (#13278)
Fixed harness getTokenUsage() returning zeros when using AI SDK v5/v6. The token usage extraction now correctly reads both inputTokens/outputTokens (v5/v6) and promptTokens/completionTokens (v4) field names from the usage object. (#13622)
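The dual-format read can be sketched as follows. The field names come from the AI SDK versions mentioned above; the helper name and shape are hypothetical.

```typescript
// Hypothetical sketch: read token usage from either the AI SDK v5/v6 field
// names (inputTokens/outputTokens) or the v4 names (promptTokens/completionTokens).
interface UsageLike {
  inputTokens?: number;
  outputTokens?: number;
  promptTokens?: number;
  completionTokens?: number;
}

function extractTokenUsage(usage: UsageLike): { input: number; output: number } {
  return {
    input: usage.inputTokens ?? usage.promptTokens ?? 0,
    output: usage.outputTokens ?? usage.completionTokens ?? 0,
  };
}
```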
Model pack selection is now more consistent and reliable in mastracode. (#13512)
/models is now the single command for choosing and managing model packs.
Added support for reading resource IDs from Harness. (#13690)
You can now get the default resource ID and list known resource IDs from stored threads.
const defaultId = harness.getDefaultResourceId();
const knownIds = await harness.getKnownResourceIds();
chore(harness): Update harness sub-agent instructions type to be dynamic (#13706)
Added MastraMessagePart to the public type exports of @mastra/core/agent, allowing it to be imported directly in downstream packages. (#13297)
Added deleteThread({ threadId }) method to the Harness class for deleting threads and their messages from storage. Releases the thread lock and clears the active thread when deleting the current thread. Emits a thread_deleted event. (#13625)
Fixed tilde (~) paths not expanding to the home directory in LocalFilesystem and LocalSandbox. Paths like ~/my-project were silently treated as relative paths, creating a literal ~/ directory instead of resolving to $HOME. This affects basePath, allowedPaths, setAllowedPaths(), all file operations in LocalFilesystem, and workingDirectory in LocalSandbox. (#13739)
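A minimal sketch of the expansion now applied, using Node's stdlib. The real code also covers allowedPaths and containment checks.

```typescript
import os from 'node:os';
import path from 'node:path';

// Expand a leading "~" to the user's home directory before resolving the path,
// so "~/my-project" no longer becomes a literal "./~/" directory.
function expandTilde(p: string): string {
  if (p === '~') return os.homedir();
  if (p.startsWith('~/')) return path.join(os.homedir(), p.slice(2));
  return p; // non-tilde paths are untouched
}
```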
Switched Mastra Code to workspace tools and enabled LSP by default (#13437)
Fixed tilde paths (~/foo) in contained LocalFilesystem silently writing to the wrong location. Previously, ~/foo would expand and then nest under basePath (e.g. basePath/home/user/foo). Tilde paths are now treated as real absolute paths, and throw PermissionError when the expanded path is outside basePath and allowedPaths. (#13741)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 9311c17, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 359d687, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9e77e8f, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 56f2018, 85664e9, bc79650, 9257d01, 9311c17, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Fixed message history and semantic recall persistence after AI SDK streams finish. (#13297)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Expanded @mastra/auth-better-auth to implement the new auth interfaces (IUserProvider, ISessionProvider, ICredentialsProvider) from @mastra/core/auth. Adds support for username/password credential flows alongside the existing token-based authentication. (#13163)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Added @mastra/auth-cloud — a new auth provider for Mastra Cloud with PKCE OAuth flow, session management, and role-based access control. (#13163)
import { MastraCloudAuthProvider, MastraRBACCloud } from '@mastra/auth-cloud';
const mastra = new Mastra({
server: {
auth: new MastraCloudAuthProvider({
appId: process.env.MASTRA_APP_ID!,
apiKey: process.env.MASTRA_API_KEY!,
}),
rbac: new MastraRBACCloud({
appId: process.env.MASTRA_APP_ID!,
apiKey: process.env.MASTRA_API_KEY!,
}),
},
});
Handles the full OAuth lifecycle including login URL generation, PKCE challenge/verification, callback handling, and session cookie management.
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Added @mastra/auth-studio — an auth provider for deployed Mastra Studio instances that proxies authentication through the Mastra shared API. (#13163)
Deployed instances need no secrets — all WorkOS authentication is handled by the shared API. The package provides SSO login/callback flows, session management via sealed cookies, RBAC with organization-scoped permissions, and automatic forced account picker on deploy logins.
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Added full auth provider to @mastra/auth-workos with SSO, RBAC, SCIM directory sync, and admin portal support. (#13163)
import { MastraAuthWorkos, MastraRBACWorkos } from '@mastra/auth-workos';
const mastra = new Mastra({
server: {
auth: new MastraAuthWorkos({
apiKey: process.env.WORKOS_API_KEY,
clientId: process.env.WORKOS_CLIENT_ID,
}),
rbac: new MastraRBACWorkos({
apiKey: process.env.WORKOS_API_KEY,
clientId: process.env.WORKOS_CLIENT_ID,
roleMapping: {
admin: ['*'],
member: ['agents:read', 'workflows:*'],
},
}),
},
});
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Abort signal support in sandbox commands (#13597)
- abortSignal in command options
Added background process management support for Blaxel sandboxes. Agents can now spawn, monitor, and kill long-running processes using the standard ProcessHandle interface. (79177b1)
Example usage:
const sandbox = new BlaxelSandbox({ timeout: '5m' });
const workspace = new Workspace({ sandbox });
// Process manager is available via sandbox.processes
const handle = await sandbox.processes.spawn('python server.py');
// Monitor output
handle.onStdout(data => console.log(data));
// Check status
const info = await sandbox.processes.list();
// Kill when done
await handle.kill();
Note: Process stdin is not supported in Blaxel sandboxes.
Additional improvements:
Fixed command timeouts in Blaxel sandboxes so long-running commands now respect configured limits. (#13520)
Changed the default Blaxel image to blaxel/ts-app:latest (Debian-based), which supports both S3 and GCS mounts out of the box.
Added distro detection for mount scripts so S3 mounts work on Alpine-based images (e.g. blaxel/node:latest) via apk, and GCS mounts give a clear error on Alpine since gcsfuse is unavailable.
Removed working directory from sandbox instructions to avoid breaking prompt caching.
Remove internal processes field from sandbox provider options (#13597)
The processes field is no longer exposed in constructor options for E2B, Daytona, and Blaxel sandbox providers. This field is managed internally and was not intended to be user-configurable.
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 6a72884, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
updateWorkflowResults and updateWorkflowState now throw a not-implemented error. This storage backend does not support concurrent workflow updates. (#12575)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Added getFullUrl helper method for constructing auth redirect URLs and exported the AuthCapabilities type. HTTP retries now skip 4xx client errors to avoid retrying authentication failures. (#13163)
Fixed CMS features (Create an agent button, clone, edit, create scorer) not appearing in built output. The build command now writes package metadata so the studio can detect installed Mastra packages at runtime. (#13163)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
updateWorkflowResults and updateWorkflowState now throw a not-implemented error. This storage backend does not support concurrent workflow updates. (#12575)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
updateWorkflowResults and updateWorkflowState now throw a not-implemented error. This storage backend does not support concurrent workflow updates. (#12575)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
updateWorkflowResults and updateWorkflowState now throw a not-implemented error. This storage backend does not support concurrent workflow updates. (#12575)
fix: use existing indexes for queryTable operations instead of full table scans (#13630)
The queryTable handler in the Convex storage mutation now automatically
selects the best matching index based on equality filters. Previously, all
queryTable operations performed a full table scan (up to 10,000 documents)
and filtered in JavaScript, which hit Convex's 16MB/32K document read limit
when enough records accumulated across threads.
Now, when equality filters are provided (e.g., thread_id for message queries
or resourceId for thread queries), the handler matches them against the
existing schema indexes (by_thread, by_thread_created, by_resource, etc.)
and uses .withIndex() for efficient indexed queries.
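The matching step can be sketched as picking the first schema index whose fields are fully covered by the equality filters. Index names follow the ones mentioned above; the selection heuristic here is simplified and hypothetical.

```typescript
// Hypothetical sketch: given equality filters, find an index whose fields are
// all present among the filter keys; otherwise fall back to a table scan.
interface IndexDef {
  name: string;
  fields: string[];
}

function selectIndex(
  filters: Record<string, unknown>,
  indexes: IndexDef[],
): IndexDef | undefined {
  const keys = new Set(Object.keys(filters));
  return indexes.find(ix => ix.fields.every(f => keys.has(f)));
}
```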
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 6a72884, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, e947652, 3c6ef79, 9257d01, ec248f6]:
Add DaytonaSandbox workspace provider — Daytona cloud sandbox integration for Mastra workspaces, implementing the WorkspaceSandbox interface with support for command execution, environment variables, resource configuration, snapshots, and Daytona volumes. (#13112)
Basic usage:

```ts
import { Workspace } from '@mastra/core/workspace';
import { DaytonaSandbox } from '@mastra/daytona';

const sandbox = new DaytonaSandbox({
  id: 'my-sandbox',
  env: { NODE_ENV: 'production' },
});

const workspace = new Workspace({ sandbox });
await workspace.init();

const result = await workspace.sandbox.executeCommand('echo', ['Hello!']);
console.log(result.stdout); // "Hello!"

await workspace.destroy();
```
Added S3 and GCS cloud filesystem mounting support via FUSE (s3fs-fuse, gcsfuse). Daytona sandboxes can now mount cloud storage as local directories, matching the mount capabilities of E2B and Blaxel providers. (#13544)
New methods:

- `mount(filesystem, mountPath)` — mount an S3 or GCS filesystem at a path in the sandbox
- `unmount(mountPath)` — unmount a previously mounted filesystem
Usage:

```ts
import { Workspace } from '@mastra/core/workspace';
import { S3Filesystem } from '@mastra/s3';
import { DaytonaSandbox } from '@mastra/daytona';

const workspace = new Workspace({
  mounts: {
    '/data': new S3Filesystem({
      bucket: 'my-bucket',
      region: 'us-east-1',
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    }),
  },
  sandbox: new DaytonaSandbox(),
});
```
Remove internal processes field from sandbox provider options (#13597)
The processes field is no longer exposed in constructor options for E2B, Daytona, and Blaxel sandbox providers. This field is managed internally and was not intended to be user-configurable.
Improved S3/GCS FUSE mounting reliability and sandbox reconnection. (#13543)
Mounting improvements
- `mount --move` fallback when standard FUSE unmount fails on stuck mounts
- `stop()` now unmounts all filesystems before stopping the sandbox

Reconnection improvements

- `findExistingSandbox` now looks up sandboxes by name first (works for stopped sandboxes), then falls back to label search

Sandbox instructions no longer include a working directory path, keeping instructions stable across sessions. (#13520)
Updated dependencies [504fc8b, f9c150b, 88de7e8, edee4b3, 3790c75, e7a235b, d51d298, 6dbeeb9, d5f0d8d, 09c3b18, b896379, 85c84eb, a89272a, ee9c8df, 77b4a25, 276246e, 08ecfdb, d5f628c, 524c0f3, c18a0e9, 4bd21ea, 115a7a4, 22a48ae, 3c6ef79, 9311c17, 7edf78f, 1c4221c, d25b9ea, fe1ce5c, b03c0e0, 0a8366b, 85664e9, bc79650, 9257d01, 3a3a59e, 3108d4e, 0c33b2c, 191e5bd, f77cd94, e8135c7, daca48f, 257d14f, 352f25d, 93477d0, 31c78b3, 0bc0720, 36516ac, [`e947
A new supervisor pattern enables orchestrating multiple agents via stream() and generate(), with delegation hooks, iteration monitoring, completion scoring, memory isolation, tool approval propagation, context filtering, and a bail mechanism.
Vector querying now supports metadata-only retrieval by making queryVector optional (with at least one of queryVector or filter required). @mastra/pg’s PgVector.query() explicitly supports filter-only queries, while other vector stores now throw a clear MastraError when metadata-only queries aren’t supported.
runEvals adds targetOptions to forward execution/run options into agent.generate() or workflow.run.start(), plus per-item startOptions for workflow-specific overrides (e.g., initialState) on each eval datum.
Workspace edit tools (write_file, edit_file, ast_edit) can now surface Language Server Protocol diagnostics immediately after edits (TypeScript, Python/Pyright, Go/gopls, Rust/rust-analyzer, ESLint), helping catch type/lint errors before the next tool call.
@mastra/blaxel adds a Blaxel cloud sandbox provider, expanding deployment/runtime options for executing workspace tooling in a managed environment.
Make queryVector optional in the QueryVectorParams interface to support metadata-only queries. At least one of queryVector or filter must be provided. Not all vector store backends support metadata-only queries — check your store's documentation for details. (#13286)
Also fixes documentation where the query() parameter was incorrectly named vector instead of queryVector.
Added targetOptions parameter to runEvals that is forwarded directly to agent.generate() (modern path) or workflow.run.start(). Also added per-item startOptions field to RunEvalsDataItem for per-item workflow options like initialState. (#13366)
New feature: targetOptions
Pass agent execution options (e.g. maxSteps, modelSettings, instructions) through to agent.generate(), or workflow run options (e.g. perStep, outputOptions) through to workflow.run.start():
```ts
// Agent - pass modelSettings or maxSteps
await runEvals({
  data,
  scorers,
  target: myAgent,
  targetOptions: { maxSteps: 5, modelSettings: { temperature: 0 } },
});

// Workflow - pass run options
await runEvals({
  data,
  scorers,
  target: myWorkflow,
  targetOptions: { perStep: true },
});
```
New feature: per-item startOptions
Supply per-item workflow options (e.g. initialState) directly on each data item:
```ts
await runEvals({
  data: [
    { input: { query: 'hello' }, startOptions: { initialState: { counter: 1 } } },
    { input: { query: 'world' }, startOptions: { initialState: { counter: 2 } } },
  ],
  scorers,
  target: myWorkflow,
});
```
Per-item startOptions take precedence over global targetOptions for the same key. runEvals-managed options (scorers, returnScorerData, requestContext) cannot be overridden via targetOptions.
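The precedence rule above can be sketched as a small merge helper (hypothetical, not the actual runEvals internals): per-item startOptions overwrite global targetOptions key-by-key, and runEvals-managed keys are stripped afterwards.

```typescript
// Keys runEvals manages itself; callers cannot override them.
const MANAGED_KEYS = ['scorers', 'returnScorerData', 'requestContext'] as const;

// Merge global targetOptions with per-item startOptions (per-item wins),
// then drop managed keys. Sketch only; real option types are richer.
function resolveRunOptions(
  targetOptions: Record<string, unknown> = {},
  startOptions: Record<string, unknown> = {},
): Record<string, unknown> {
  const merged = { ...targetOptions, ...startOptions };
  for (const key of MANAGED_KEYS) delete merged[key];
  return merged;
}
```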
Add supervisor pattern for multi-agent coordination using stream() and generate(). Includes delegation hooks, iteration monitoring, completion scoring, memory isolation, tool approval propagation, context filtering, and bail mechanism. (#13323)
Add LSP diagnostics to workspace edit tools (#13441)
Language Server Protocol (LSP) diagnostics now appear after edits made with write_file, edit_file, and ast_edit. Seeing type and lint errors immediately helps catch issues before the next tool call. Edits still work without diagnostics when language servers are not installed.
Supports TypeScript, Python (Pyright), Go (gopls), Rust (rust-analyzer), and ESLint.
Example
Before:

```ts
const workspace = new Workspace({ sandbox, filesystem });
```

After:

```ts
const workspace = new Workspace({ sandbox, filesystem, lsp: true });
```
Propagate tripwires thrown from a nested workflow. (#13502)
Added isProviderDefinedTool helper to detect provider-defined AI SDK tools (e.g. google.tools.googleSearch(), openai.tools.webSearch()) for proper schema handling during serialization. (#13507)
Fixed ModelRouterEmbeddingModel.doEmbed() crashing with TypeError: result.warnings is not iterable when used with AI SDK v6's embedMany. The result now always includes a warnings array, ensuring forward compatibility across AI SDK versions. (#13369)
Fixed build error in ModelRouterEmbeddingModel.doEmbed() caused by warnings not existing on the return type. (#13461)
Fixed Harness.createThread() defaulting the thread title to "New Thread" which prevented generateTitle from working (see #13391). Threads created without an explicit title now have an empty string title, allowing the agent's title generation to produce a title from the first user message. (#13393)
Prevent unknown model IDs from being sorted to the front in reorderModels(). Models not present in the modelIds parameter are now moved to the end of the array. Fixes #13410. (#13445)
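The fixed ordering rule can be illustrated with a standalone sketch (assumed behavior, not the actual reorderModels source): known IDs follow the order given in modelIds, and unknown IDs are appended at the end in their original relative order.

```typescript
// Reorder models so that IDs listed in modelIds come first, in that order;
// models absent from modelIds go to the end (sketch of the fixed behavior).
function reorderModels(models: string[], modelIds: string[]): string[] {
  const rank = new Map(modelIds.map((id, i) => [id, i]));
  const known = models
    .filter(m => rank.has(m))
    .sort((a, b) => rank.get(a)! - rank.get(b)!);
  const unknown = models.filter(m => !rank.has(m));
  return [...known, ...unknown];
}
```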
Include traceId on scores generated during experiment runs to restore traceability of experiment results (#13464)
Fixed skill-read-reference (and getReference, getScript, getAsset in WorkspaceSkillsImpl) to resolve file paths relative to the skill root instead of hardcoded subdirectories (references/, scripts/, assets/). (#13363)
Previously, calling skill-read-reference with referencePath: "docs/schema.md" would silently fail because it resolved to <skill>/references/docs/schema.md instead of <skill>/docs/schema.md. Now all paths like references/colors.md, docs/schema.md, and ./config.json resolve correctly relative to the skill root. Path traversal attacks (e.g. ../../etc/passwd) are still blocked.
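The fixed resolution can be sketched as a skill-root-relative resolver with a traversal guard (hypothetical helper, not the actual WorkspaceSkillsImpl code):

```typescript
import path from 'node:path';

// Resolve a reference path relative to the skill root, rejecting any path
// that escapes the root (e.g. ../../etc/passwd).
function resolveSkillPath(skillRoot: string, referencePath: string): string {
  const root = path.resolve(skillRoot);
  const resolved = path.resolve(root, referencePath);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`Path escapes skill root: ${referencePath}`);
  }
  return resolved;
}
```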
Fixed workspace listing to show whether each workspace is global or agent-owned. (#13468) Agent-owned workspaces now include the owning agent's ID and name so clients can distinguish them from global workspaces.
Fixed Observational Memory not working with AI SDK v4 models (legacy path). The legacy stream/generate path now calls processInputStep, enabling processors like Observational Memory to inject conversation history and observations. (#13358)
Added resolveWorkspace() so callers can access a dynamic workspace before the first request. (#13457)
Fixed abortSignal not stopping LLM generation or preventing memory persistence. When aborting a stream (e.g., client disconnect), the LLM response no longer continues processing in the background and partial/full responses are no longer saved to memory. Fixes #13117. (#13206)
Fixed observation activation to always preserve a minimum amount of context. Previously, swapping buffered observation chunks could unexpectedly drop the context window to near-zero tokens. (#13476)
Fixed a crash where the Node.js process would terminate with an unhandled TypeError when an LLM stream encountered an error. The ReadableStreamDefaultController would throw "Controller is already closed" when chunks were enqueued after a downstream consumer cancelled or terminated the stream. All controller.enqueue(), controller.close(), and controller.error() calls now check if the controller is still open before attempting operations. (https://github.com/mastra-ai/mastra/issues/13107) (#13206)
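The guard pattern described above can be sketched as a small wrapper (illustrative only; the actual fix lives inside the stream internals): track whether the controller is still open and turn late enqueue/close/error calls into no-ops.

```typescript
// Wrap a ReadableStreamDefaultController so that operations after close
// (or after a downstream cancel) are silently ignored instead of throwing.
function createGuardedController(controller: ReadableStreamDefaultController<unknown>) {
  let open = true;
  return {
    enqueue(chunk: unknown) {
      if (open) controller.enqueue(chunk);
    },
    close() {
      if (open) {
        open = false;
        controller.close();
      }
    },
    error(reason: unknown) {
      if (open) {
        open = false;
        controller.error(reason);
      }
    },
  };
}
```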
Updated dependencies [8d14a59]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, cb9f921, 1b6f651, 72df4a8]:
Fixed withMastra() re-persisting prior message history on later turns. When using generateText() multiple times on the same thread, previously stored messages were duplicated in storage. (fixes #13438) (#13459)
Suppress completion feedback display when suppressFeedback is set (#13323)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Adds Blaxel cloud sandbox provider (#13015)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 8d14a59, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, ddf8e5c, e622f1d, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, ae55343, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, ddf8e5c, e622f1d, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, ae55343, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, ddf8e5c, e622f1d, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, ae55343, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, ddf8e5c, e622f1d, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, ae55343, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, ddf8e5c, e622f1d, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, ae55343, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Fixed observation activation to always preserve a minimum amount of context. Previously, swapping buffered observation chunks could unexpectedly drop the context window to near-zero tokens. (#13476)
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, cb9f921, 1b6f651, c290cec, 4852a84, 72df4a8]:
Fixed MCP tool results returning empty {} when the server does not include structuredContent in responses (e.g. FastMCP, older MCP protocol versions). The client now extracts the actual result from the content array instead of returning the raw protocol envelope, which previously caused output schema validation to strip all properties. (#13469)
Fix MCP client connect() creating duplicate connections when called concurrently. This could leak stdio child processes or HTTP sessions. Fixes #13411. (#13444)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, b8621e2, c290cec, f03e794, aa4a5ae, de3f584, 74ae019, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Fixed observational memory buffering to preserve more context and activate at the right time. (#13476)
blockAfter behavior: values below 100 are treated as multipliers (e.g. 1.2 = 1.2× threshold), values ≥ 100 as absolute token counts.
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 8d14a59, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
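The blockAfter rule above (below 100 = threshold multiplier, 100 or more = absolute token count) can be expressed as a one-line resolver (sketch only; the real option handling lives in observational memory buffering):

```typescript
// Interpret blockAfter: < 100 means a multiplier of the activation
// threshold, >= 100 means an absolute token count.
function resolveBlockAfter(blockAfter: number, thresholdTokens: number): number {
  return blockAfter < 100 ? Math.round(blockAfter * thresholdTokens) : blockAfter;
}
```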
Fixed observation activation to always preserve a minimum amount of context. Previously, swapping buffered observation chunks could unexpectedly drop the context window to near-zero tokens. (#13476)
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, cb9f921, 1b6f651, c290cec, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
PgVector.query() now supports querying by metadata filters alone without providing a query vector — useful when you need to retrieve records by metadata without performing similarity search. (#13286)
Before (queryVector was required):

```ts
const results = await pgVector.query({
  indexName: 'my-index',
  queryVector: [0.1, 0.2, ...],
  filter: { category: 'docs' },
});
```

After (metadata-only query):

```ts
const results = await pgVector.query({
  indexName: 'my-index',
  filter: { category: 'docs' },
});
// Returns matching records with score: 0 (no similarity ranking)
```
At least one of queryVector or filter must be provided. When queryVector is omitted, results are returned with score: 0 since no similarity computation is performed.
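The "at least one of queryVector or filter" contract can be sketched as a standalone validator (hypothetical helper with assumed names, not the actual PgVector source):

```typescript
// Minimal shape of the query parameters relevant to the validation rule.
type QueryParams = { queryVector?: number[]; filter?: Record<string, unknown> };

// Reject queries that supply neither a vector nor a metadata filter.
function validateQueryParams(params: QueryParams): void {
  if (!params.queryVector && !params.filter) {
    throw new Error('At least one of queryVector or filter must be provided');
  }
}
```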
Set REPLICA IDENTITY USING INDEX on the mastra_workflow_snapshot table so PostgreSQL logical replication can track row updates. The table only has a UNIQUE constraint with no PRIMARY KEY, which caused "cannot update table because it does not have a replica identity and publishes updates" errors when logical replication was enabled. Fixes #13097. (#13178)
Fixed observation activation to always preserve a minimum amount of context. Previously, swapping buffered observation chunks could unexpectedly drop the context window to near-zero tokens. (#13476)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Target type and target ID fields in the experiment dialog are now searchable dropdowns. Scorers can be selected via a multi-select dropdown. All three dropdowns share a consistent searchable style and visual behavior. (#13463)
Show completion result UI for supervisor pattern delegations. Previously, completion check results were only displayed when metadata.mode === 'network'. Now they display for any response that includes completionResult metadata, supporting the new supervisor pattern. (#13323)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, b8f636a, e622f1d, 8d14a59, 114e7c1, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 114e7c1, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Improved token-based chunking performance in token and semantic-markdown strategies. Markdown knowledge bases now chunk significantly faster with lower tokenization overhead. (#13495)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Added completionResult to MastraUIMessageMetadata (#13323)
Updated dependencies:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Fixed ZodNull throwing "does not support zod type: ZodNull" for Anthropic and OpenAI reasoning models. MCP tools with nullable properties in their JSON Schema produce z.null(), which was unhandled by these provider compat layers. (#13496)
Fixed the skill reference endpoint (GET /workspaces/:workspaceId/skills/:skillName/references/:referencePath) returning 404 for valid reference files. (#13506)
Fixed GET /api/workspaces returning source: 'mastra' for all workspaces. Agent workspaces now correctly return source: 'agent' with agentId and agentName populated. (#13468)
Fixed /tools API endpoint crashing with provider-defined tools (e.g. google.tools.googleSearch(), openai.tools.webSearch()). These tools have a lazy inputSchema that is not a Zod schema, which caused zodToJsonSchema to throw "Cannot read properties of undefined (reading 'typeName')". (#13507)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Add a clear runtime error when queryVector is omitted for vector stores that require a vector for queries. Previously, omitting queryVector would produce confusing SDK-level errors; now each store throws a structured MastraError with ErrorCategory.USER explaining that metadata-only queries are not supported by that backend. (#13286)
Updated dependencies [df170fd, ae55343, c290cec, f03e794, aa4a5ae, de3f584, d3fb010, 702ee1c, f495051, e622f1d, 861f111, 00f43e8, 1b6f651, 96a1702, cb9f921, 114e7c1, 1b6f651, 72df4a8]:
Workspaces now support spawning and managing long-running background processes (via SandboxProcessManager / ProcessHandle), with new tools like execute_command (background: true), get_process_output, and kill_process plus improved streaming terminal-style UI.
Workspace.setToolsConfig() lets you dynamically enable/disable tools at runtime on an existing workspace instance (including re-enabling all tools by passing undefined), enabling safer modes like plan/read-only without recreating the workspace.
Core adds Harness.getObservationalMemoryRecord() for public access to the full OM record for the current thread, while @mastra/memory fixes major OM stability issues (shared tokenizer to prevent OOM/memory leaks, plus PostgreSQL deadlock fixes and clearer errors when threadId is missing).
Added getObservationalMemoryRecord() method to the Harness class. Fixes #13392. (#13395)
This provides public access to the full ObservationalMemoryRecord for the current thread, including activeObservations, generationCount, and observationTokenCount. Previously, accessing raw observation text required bypassing the Harness abstraction by reaching into private storage internals.
```ts
const record = await harness.getObservationalMemoryRecord();
if (record) {
  console.log(record.activeObservations);
}
```
Added Workspace.setToolsConfig() method for dynamically updating per-tool configuration at runtime without recreating the workspace instance. Passing undefined re-enables all tools. (#13439)
```ts
const workspace = new Workspace({ filesystem, sandbox });

// Disable write tools (e.g., in plan/read-only mode)
workspace.setToolsConfig({
  mastra_workspace_write_file: { enabled: false },
  mastra_workspace_edit_file: { enabled: false },
});

// Re-enable all tools
workspace.setToolsConfig(undefined);
```
Added HarnessDisplayState so any UI can read a single state snapshot instead of handling 35+ individual events. (#13427)
Why: Previously, every UI (TUI, web, desktop) had to subscribe to dozens of granular Harness events and independently reconstruct what to display. This led to duplicated state tracking and inconsistencies across UI implementations. Now the Harness maintains a single canonical display state that any UI can read.
Before: UIs subscribed to raw events and built up display state locally:
```ts
harness.subscribe((event) => {
  if (event.type === 'agent_start') localState.isRunning = true;
  if (event.type === 'agent_end') localState.isRunning = false;
  if (event.type === 'tool_start') localState.tools.set(event.toolCallId, ...);
  // ... 30+ more event types to handle
});
```
After: UIs read a single snapshot from the Harness:
```ts
import type { HarnessDisplayState } from '@mastra/core/harness';

harness.subscribe(event => {
  const ds: HarnessDisplayState = harness.getDisplayState();
  // ds.isRunning, ds.tokenUsage, ds.omProgress, ds.activeTools, etc.
  renderUI(ds);
});
```
Workspace instruction improvements (#13304)
- Workspace.getInstructions(): agents now receive accurate workspace context that distinguishes sandbox-accessible paths from workspace-only paths.
- WorkspaceInstructionsProcessor: workspace context is injected directly into the agent system message instead of embedded in tool descriptions.
- Deprecated Workspace.getPathContext() in favour of getInstructions().

Added an instructions option to LocalFilesystem and LocalSandbox. Pass a string to fully replace default instructions, or a function to extend them with access to the current requestContext for per-request customization (e.g. by tenant or locale).
```ts
const filesystem = new LocalFilesystem({
  basePath: './workspace',
  instructions: ({ defaultInstructions, requestContext }) => {
    const locale = requestContext?.get('locale') ?? 'en';
    return `${defaultInstructions}\nLocale: ${locale}`;
  },
});
```
Added background process management to workspace sandboxes. (#13293)
You can now spawn, monitor, and manage long-running background processes (dev servers, watchers, REPLs) inside sandbox environments.
```ts
// Spawn a background process
const handle = await sandbox.processes.spawn('node server.js');

// Stream output and wait for exit
const result = await handle.wait({
  onStdout: data => console.log(data),
});

// List and manage running processes
const procs = await sandbox.processes.list();
await sandbox.processes.kill(handle.pid);
```
- SandboxProcessManager abstract base class with spawn(), list(), get(pid), kill(pid)
- ProcessHandle base class with stdout/stderr accumulation, streaming callbacks, and wait()
- LocalProcessManager implementation wrapping Node.js child_process
- handle.reader / handle.writer
- executeCommand implementation built on the process manager (spawn + wait)

Added workspace tools for background process management and improved sandbox execution UI. (#13309)
- execute_command now supports background: true to spawn long-running processes and return a PID
- get_process_output tool to check output/status of background processes (supports wait to block until exit)
- kill_process tool to terminate background processes

Fixed agents-as-tools failing with OpenAI when using the model router. The auto-injected resumeData field (from z.any()) produced a JSON Schema without a type key, which OpenAI rejects. Tool schemas are now post-processed to ensure all properties have valid type information. (#13326)
Fixed stopWhen callback receiving empty toolResults on steps. step.toolResults now correctly reflects the tool results present in step.content. (#13319)
Added hasJudge metadata to scorer records so the studio can distinguish code-based scorers (e.g., textual-difference, content-similarity) from LLM-based scorers. This metadata is now included in all four score-saving paths: runEvals, scorer hooks, trace scoring, and dataset experiments. (#13386)
Fixed a bug where custom output processors could not emit stream events during final output processing. The writer object was always undefined when passed to output processors in the finish phase, preventing use cases like streaming moderation updates or custom UI events back to the client. (#13454)
Added per-file write locking to workspace tools (edit_file, write_file, ast_edit, delete). Concurrent tool calls targeting the same file are now serialized, preventing race conditions where parallel edits could silently overwrite each other. (#13302)
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Fixed getInstructions() to report sandbox-level facts only (working directory, provider type) instead of counting all mount entries regardless of state. Added instructions option to E2BSandbox to override or extend default instructions. (#13304)
Added E2BProcessManager for background process management in E2B cloud sandboxes. (#13293)
Wraps E2B SDK's commands.run() with background: true and commands.connect() for reconnection. Processes spawned in E2B sandboxes are automatically cleaned up on stop() and destroy().
Bumps @mastra/core peer dependency to >=1.7.0-0 (requires SandboxProcessManager from core).
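As a sketch of what background processes in an E2B sandbox could look like, under the assumption that E2BProcessManager exposes the same processes API as the core SandboxProcessManager (the import path and option names here are illustrative, not confirmed by this changelog):

```typescript
// Illustrative sketch — package/import path and options are assumptions
import { E2BSandbox } from '@mastra/e2b';

const sandbox = new E2BSandbox({ timeout: 60_000 });

// Spawn a long-running dev server in the cloud sandbox
const handle = await sandbox.processes.spawn('npm run dev');

// Inspect running processes, then terminate one by PID
const procs = await sandbox.processes.list();
await sandbox.processes.kill(handle.pid);

// stop()/destroy() also clean up any remaining background processes
await sandbox.destroy();
```

Because E2B wraps commands.run() with background: true and commands.connect(), reconnection to an already-running process after a client restart should follow the same handle-based shape.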
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Fixed non-deterministic query ordering by adding secondary sort on id for dataset and dataset item queries. (#13399)
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Updated dependencies [3af89bb, 551dc24, e8afc44, 24284ff, f5097cc, 71e237f, c2e02f1, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Fixed memory leak in Observational Memory (#13425)
Fixed several memory management issues that could cause OOM crashes in long-running processes with Observational Memory enabled:
cleanupStaticMaps.
Fixed PostgreSQL deadlock when parallel agents with different threadIds share the same resourceId while using Observational Memory. Thread scope now requires a valid threadId and throws a clear error if one is missing. Also fixed the database lock ordering in synchronous observation to prevent lock inversions. (#13436)
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Updated dependencies [551dc24, e8afc44, 24284ff, f5097cc, 71e237f, c2e02f1, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Fixed non-deterministic query ordering by adding secondary sort on id for dataset and dataset item queries. (#13399)
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Improved comparison selection behavior: selecting a third item now replaces the most recent selection instead of being blocked. Applies to dataset version, item version, and dataset item comparison flows. (#13406)
Improved the score dialog to show "N/A" with an explanation instead of "null" for empty scorer fields. Code-based scorers show "N/A — code-based scorer does not use prompts" and LLM scorers with unconfigured steps show "N/A — step not configured". Detection uses the hasJudge metadata flag with a heuristic fallback for older data. (#13386)
Added workspace tools for background process management and improved sandbox execution UI. (#13309)
- execute_command now supports background: true to spawn long-running processes and return a PID
- get_process_output tool to check output/status of background processes (supports wait to block until exit)
- kill_process tool to terminate background processes
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
zeroentropy@0.1.0-alpha.7 ↗︎ (from 0.1.0-alpha.6, in dependencies)
Updated dependencies [24284ff, f5097cc, 71e237f, 13a291e, 397af5a, d4701f7, 2b40831, 6184727, 0c338b8, 6f6385b, 14aba61, dd9dd1c]:
Updated dependencies [2b40831]:
A new AST edit tool (mastra_workspace_ast_edit) enables intelligent code transformations (rename identifiers, add/remove/merge imports, pattern-based replacements with metavariables) and is automatically available when @ast-grep/napi is installed, supporting robust refactors beyond string-based edits.
Tool renderers now stream argument previews in real time (including diffs for edits and streamed file content for writes) via partial JSON parsing, and task_write/task_check are now built-in Harness tools automatically injected into agent calls for structured task tracking.
Observational Memory now preserves suggestedContinuation and currentTask across activations (with storage adapter support), improving conversational continuity when the message window shrinks; activation/priority handling is improved to better hit retention targets and avoid runaway observer output.
Added AST edit tool (workspace_ast_edit) for intelligent code transformations using AST analysis. Supports renaming identifiers, adding/removing/merging imports, and pattern-based find-and-replace with metavariable substitution. Automatically available when @ast-grep/napi is installed in the project. (#13233)
Example:
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: '/my/project' }),
});
const tools = createWorkspaceTools(workspace);
// Rename all occurrences of an identifier
await tools['mastra_workspace_ast_edit'].execute({
path: '/src/utils.ts',
transform: 'rename',
targetName: 'oldName',
newName: 'newName',
});
// Add an import (merges into existing imports from the same module)
await tools['mastra_workspace_ast_edit'].execute({
path: '/src/app.ts',
transform: 'add-import',
importSpec: { module: 'react', names: ['useState', 'useEffect'] },
});
// Pattern-based replacement with metavariables
await tools['mastra_workspace_ast_edit'].execute({
path: '/src/app.ts',
pattern: 'console.log($ARG)',
replacement: 'logger.debug($ARG)',
});
Added streaming tool argument previews across all tool renderers. Tool names, file paths, and commands now appear immediately as the model generates them, rather than waiting for the complete tool call. (#13328)
Diff previews for edits render as soon as old_str and new_str are available, even before the tool result arrives. All tools use partial JSON parsing to progressively display argument information. This is enabled automatically for all Harness-based agents — no configuration required.
Added optional threadLock callbacks to HarnessConfig for preventing concurrent thread access across processes. The Harness calls acquire/release during selectOrCreateThread, createThread, and switchThread when configured. Locking is opt-in — when threadLock is not provided, behavior is unchanged. (#13334)
const harness = new Harness({
id: 'my-harness',
storage: myStore,
modes: [{ id: 'default', agent: myAgent }],
threadLock: {
acquire: threadId => acquireThreadLock(threadId),
release: threadId => releaseThreadLock(threadId),
},
});
Refactored all Harness class methods to accept object parameters instead of positional arguments, and standardized method naming. (#13353)
Why: Positional arguments make call sites harder to read, especially for methods with optional middle parameters or multiple string arguments. Object parameters are self-documenting and easier to extend without breaking changes.
- Getter methods renamed with a list prefix (listModes, listAvailableModels, listMessages, listMessagesForThread)
- persistThreadSetting → setThreadSetting
- resolveToolApprovalDecision → respondToToolApproval (consistent with respondToQuestion / respondToPlanApproval)
- setPermissionCategory → setPermissionForCategory
- setPermissionTool → setPermissionForTool
Before:
await harness.switchMode('build');
await harness.sendMessage('Hello', { images });
const modes = harness.getModes();
const models = await harness.getAvailableModels();
harness.resolveToolApprovalDecision('approve');
After:
await harness.switchMode({ modeId: 'build' });
await harness.sendMessage({ content: 'Hello', images });
const modes = harness.listModes();
const models = await harness.listAvailableModels();
harness.respondToToolApproval({ decision: 'approve' });
The HarnessRequestContext interface methods (registerQuestion, registerPlanApproval, getSubagentModelId) are also updated to use object parameters.
Added task_write and task_check as built-in Harness tools. These tools are automatically injected into every agent call, allowing agents to track structured task lists without manual tool registration. (#13344)
// Agents can call task_write to create/update a task list
await tools['task_write'].execute({
tasks: [
{ content: 'Fix authentication bug', status: 'in_progress', activeForm: 'Fixing authentication bug' },
{ content: 'Add unit tests', status: 'pending', activeForm: 'Adding unit tests' },
],
});
// Agents can call task_check to verify all tasks are complete before finishing
await tools['task_check'].execute({});
// Returns: { completed: 1, inProgress: 0, pending: 1, allDone: false, incomplete: [...] }
Fixed duplicate Vercel AI Gateway configuration that could cause incorrect API key resolution. Removed a redundant override that conflicted with the upstream models.dev registry. (#13291)
Fixed Vercel AI Gateway failing when using the model router string format (e.g. vercel/openai/gpt-oss-120b). The provider registry was overriding createGateway's base URL with an incorrect value, causing API requests to hit the wrong endpoint. Removed the URL override so the AI SDK uses its own correct default. Closes #13280. (#13287)
Fixed recursive schema warnings for processor graph entries by unrolling to a fixed depth of 3 levels, matching the existing rule group pattern (#13292)
Fixed Observational Memory status not updating during conversations. The harness was missing streaming handlers for OM data chunks (status, observation start/end, buffering, activation), so the TUI never received real-time OM progress updates. Also added switchObserverModel and switchReflectorModel methods so changing OM models properly emits events to subscribers. (#13330)
Fixed thread resuming in git worktrees. Previously, starting mastracode in a new worktree would resume a thread from another worktree of the same repo. Threads are now auto-tagged with the project path and filtered on resume so each worktree gets its own thread scope. (#13343)
Fixed a crash where the Node.js process would terminate with an unhandled TypeError when an LLM stream encountered an error. The ReadableStreamDefaultController would throw "Controller is already closed" when chunks were enqueued after a downstream consumer cancelled or terminated the stream. All controller.enqueue(), controller.close(), and controller.error() calls now check if the controller is still open before attempting operations. (https://github.com/mastra-ai/mastra/issues/13107) (#13142)
Added suggestedContinuation and currentTask fields to the in-memory storage adapter's Observational Memory activation result, aligning it with the persistent storage implementations. (#13354)
Fixed provider-executed tools (e.g. Anthropic web_search) causing stream bail when called in parallel with regular tools. The tool-call-step now provides a fallback result for provider-executed tools whose output was not propagated, preventing the mapping step from misidentifying them as pending HITL interactions. Fixes #13125. (#13126)
Updated dependencies [7184d87]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2246a6f, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417, b4b75ca]:
Added MCP server table and CRUD operations to storage adapters, enabling MCP server configurations to be persisted alongside agents and workflows. (#13357)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 7184d87, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Added MCP server table and CRUD operations to storage adapters, enabling MCP server configurations to be persisted alongside agents and workflows. (#13357)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2246a6f, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417, b4b75ca]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 3f8f1b3, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 3f8f1b3, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2246a6f, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417, b4b75ca]:
Storage adapters now return suggestedContinuation and currentTask fields on Observational Memory activation, enabling agents to maintain conversational context across activation boundaries. (#13357)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [5c70aeb, 0d9efb4, 5caa13d, 270dd16, 940163f, b260123, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 78d1c80, 2b2e157, 59d30b5, 453693b, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Improved conversational continuity when the message window shrinks during Observational Memory activation. The agent now preserves its suggested next response and current task across activation, so it maintains context instead of losing track of the conversation. (#13354)
Also improved the Observer to capture user messages more faithfully, reduce repetitive observations, and treat the most recent user message as the highest-priority signal.
Improved Observational Memory priority handling. User messages and task completions are now always treated as high priority, ensuring the observer captures the most relevant context during conversations. (#13329)
Improved Observational Memory activation to preserve more usable context after activation. Previously, activation could leave the agent with too much or too little context depending on how chunks aligned with the retention target. (#13305)
- At blockAfter, activation now aggressively reduces context to unblock the conversation
- bufferActivation now accepts absolute token values (>= 1000) in addition to ratios (0–1), giving more precise control over when activation triggers
Observations no longer inflate token counts from degenerate LLM output. Runaway or repetitive observer/reflector output is automatically detected and retried, preventing excessive context usage after activation. (#13354)
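The two bufferActivation threshold forms can be sketched as a config fragment; the surrounding option placement is illustrative (check your Observational Memory setup), only the bufferActivation values themselves come from this changelog:

```typescript
// Illustrative config fragment — only bufferActivation semantics are documented here
const ratioForm = {
  // Ratio form (0–1): activate when the buffer reaches 80% of the target
  bufferActivation: 0.8,
};

const absoluteForm = {
  // Absolute form (values >= 1000 are treated as token counts):
  // activate once roughly 12,000 tokens of observations have buffered
  bufferActivation: 12_000,
};
```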
Updated dependencies [0d9efb4, 7184d87, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Storage adapters now return suggestedContinuation and currentTask fields on Observational Memory activation, enabling agents to maintain conversational context across activation boundaries. (#13357)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Fixed telemetry spans being silently dropped when the default exporter was used. The exporter now holds spans in memory until initialization completes, ensuring all spans are propagated to your tracing backend. (#12936)
Fixed keysToStrip.has is not a function crash in deepClean() when bundlers transform new Set([...]) into a plain object or array. This affected agents with memory deployed to Mastra Cloud. (#13322)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [5c70aeb, 0d9efb4, 5caa13d, 270dd16, 940163f, b260123, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 78d1c80, 2b2e157, 59d30b5, 453693b, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2246a6f, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417, b4b75ca]:
Storage adapters now return suggestedContinuation and currentTask fields on Observational Memory activation, enabling agents to maintain conversational context across activation boundaries. (#13357)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Added Tree.Input component for inline file and folder creation within the Tree. Supports auto-focus, Enter to confirm, Escape to cancel, and blur handling with correct depth indentation. (#13264)
import { Tree } from '@mastra/playground-ui';
<Tree.Folder name="src" defaultOpen>
<Tree.Input
type="file"
placeholder="new-file.ts"
onSubmit={name => createFile(name)}
onCancel={() => setCreating(false)}
/>
<Tree.File name="index.ts" />
</Tree.Folder>;
dependencies updates: (#13284)
sonner@^2.0.7 ↗︎ (from ^2.0.5, in dependencies)
dependencies updates: (#13300)
superjson@^2.2.6 ↗︎ (from ^2.2.2, in dependencies)
Added a side-by-side diff view to the Dataset comparison pages (Compare Items and Compare Item Versions), making it easier to spot differences between dataset entries at a glance. (#13267)
Fixed the Experiment Result panel crashing with a type error when results contained nested objects instead of plain strings. (#13275)
Added a searchable combobox header to the Dataset page, allowing you to quickly filter and switch between datasets without scrolling through a long list. (#13273)
Added a composable Tree component for displaying file-system-like views with collapsible folders, file type icons, selection support, and action slots (#13259)
Updated dependencies [e4034e5, 0d9efb4, 7184d87, 5caa13d, 940163f, 47892c8, 3f8f1b3, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2246a6f, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417, b4b75ca]:
tailwind-merge@^3.4.1 ↗︎ (from ^3.3.1, in dependencies)
Updated dependencies [0d9efb4, 3f8f1b3]:
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2246a6f, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417, b4b75ca]:
Fixed recursive schema warnings for processor graph entries by unrolling to a fixed depth of 3 levels, matching the existing rule group pattern (#13292)
Workspace tools like ast_edit are now correctly detected at runtime based on available dependencies (e.g. @ast-grep/napi), preventing missing tools from being advertised to agents. (#13233)
Updated dependencies [0d9efb4, 5caa13d, 940163f, 47892c8, 45bb78b, 70eef84, d84e52d, 24b80af, 608e156, 2b2e157, 59d30b5, 453693b, 78d1c80, c204b63, 742a417]:
A new generic Harness in @mastra/core provides a foundation for building agent-powered applications with modes, state management, built-in tools (ask_user, submit_plan), subagent support, Observational Memory integration, model discovery, permission-aware tool approval, and event-driven/thread/heartbeat management.
Workspaces gain least-privilege filesystem access via LocalFilesystem.allowedPaths (plus runtime updates with setAllowedPaths()), expanded glob-based configuration for file listing/indexing/skill discovery, and a new regex search tool mastra_workspace_grep to complement semantic search.
Tools can now define toModelOutput to transform tool results into model-friendly content while preserving raw outputs in storage, and workspace tools now return raw text (moving structured metadata to data-workspace-metadata stream chunks) to reduce token usage. Streaming reliability also improves (correct chunk types for tool errors, onChunk receives raw Mastra chunks, agent loop continues after tool errors).
Added allowedPaths option to LocalFilesystem for granting agents access to specific directories outside basePath without disabling containment. (#13054)
const workspace = new Workspace({
filesystem: new LocalFilesystem({
basePath: './workspace',
allowedPaths: ['/home/user/.config', '/home/user/documents'],
}),
});
Allowed paths can be updated at runtime using setAllowedPaths():
workspace.filesystem.setAllowedPaths(prev => [...prev, '/home/user/new-dir']);
This is the recommended approach for least-privilege access — agents can only reach the specific directories you allow, while containment stays enabled for everything else.
Added generic Harness class for orchestrating agents with modes, state management, built-in tools (ask_user, submit_plan), subagent support, Observational Memory integration, model discovery, and permission-aware tool approval. The Harness provides a reusable foundation for building agent-powered applications with features like thread management, heartbeat monitoring, and event-driven architecture. (#13245)
Added glob pattern support for workspace configuration. The list_files tool now accepts a pattern parameter for filtering files (e.g., **/*.ts, src/**/*.test.ts). autoIndexPaths accepts glob patterns like ./docs/**/*.md to selectively index files for BM25 search. Skills paths support globs like ./**/skills to discover skill directories at any depth, including dot-directories like .agents/skills. (#13023)
list_files tool with pattern:
// Agent can now use glob patterns to filter files
const result = await workspace.tools.workspace_list_files({
path: '/',
pattern: '**/*.test.ts',
});
autoIndexPaths with globs:
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: './project' }),
bm25: true,
// Only index markdown files under ./docs
autoIndexPaths: ['./docs/**/*.md'],
});
Skills paths with globs:
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: './project' }),
// Discover any directory named 'skills' within 4 levels of depth
skills: ['./**/skills'],
});
Note: Skills glob discovery walks up to 4 directory levels deep from the glob's static prefix. Use more specific patterns like ./src/**/skills to narrow the search scope for large workspaces.
Added direct skill path discovery — you can now pass a path directly to a skill directory or SKILL.md file in the workspace skills configuration (e.g., skills: ['/path/to/my-skill'] or skills: ['/path/to/my-skill/SKILL.md']). Previously only parent directories were supported. Also improved error handling when a configured skills path is inaccessible (e.g., permission denied), logging a warning instead of breaking discovery for all skills. (#13031)
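Direct skill paths can be mixed with the existing glob and parent-directory forms; a sketch (the /path/to/my-skill entries mirror the examples above, the rest of the Workspace shape follows the glob examples earlier in these notes):

```typescript
import { Workspace, LocalFilesystem } from '@mastra/core/workspace';

const workspace = new Workspace({
  filesystem: new LocalFilesystem({ basePath: './project' }),
  skills: [
    '/path/to/my-skill',          // a skill directory itself
    '/path/to/my-skill/SKILL.md', // or its SKILL.md file directly
    './**/skills',                // glob discovery still works alongside
  ],
});
```

An inaccessible entry (e.g. permission denied) now logs a warning instead of breaking discovery for the remaining skills.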
Add optional instruction field to ObservationalMemory config types (#13240)
Adds instruction?: string to ObservationalMemoryObservationConfig and ObservationalMemoryReflectionConfig interfaces, allowing external consumers to pass custom instructions to observational memory.
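A sketch of passing a custom instruction through these config types; only the instruction field is documented here, the surrounding observation/reflection placement is an assumption about how your Observational Memory config is nested:

```typescript
// Illustrative config fragment — field nesting may differ in your setup
const observationalMemoryConfig = {
  observation: {
    // ObservationalMemoryObservationConfig.instruction (optional)
    instruction: 'Focus observations on user preferences and open tasks.',
  },
  reflection: {
    // ObservationalMemoryReflectionConfig.instruction (optional)
    instruction: 'Summarize stable facts; drop transient chit-chat.',
  },
};
```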
Added typed workspace providers — workspace.filesystem and workspace.sandbox now return the concrete types you passed to the constructor, improving autocomplete and eliminating casts. (#13021)
When mounts are configured, workspace.filesystem returns a typed CompositeFilesystem<TMounts> with per-key narrowing via mounts.get().
Before:
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: '/tmp' }),
sandbox: new E2BSandbox({ timeout: 60000 }),
});
workspace.filesystem; // WorkspaceFilesystem | undefined
workspace.sandbox; // WorkspaceSandbox | undefined
After:
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: '/tmp' }),
sandbox: new E2BSandbox({ timeout: 60000 }),
});
workspace.filesystem; // LocalFilesystem
workspace.sandbox; // E2BSandbox
// Mount-aware workspaces get typed per-key access:
const ws = new Workspace({
mounts: { '/local': new LocalFilesystem({ basePath: '/tmp' }) },
});
ws.filesystem.mounts.get('/local'); // LocalFilesystem
Added support for Vercel AI Gateway in the model router. You can now use model: 'vercel/google/gemini-3-flash' with agents and it will route through the official AI SDK gateway provider. (#13149)
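For example, with the model router string format (the Agent fields other than model are illustrative; only the 'vercel/...' model string is confirmed by this entry):

```typescript
import { Agent } from '@mastra/core/agent';

// Illustrative agent config — routed through the official AI SDK gateway provider
const agent = new Agent({
  name: 'gateway-agent',
  instructions: 'You are a helpful assistant.',
  model: 'vercel/google/gemini-3-flash',
});
```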
Added toModelOutput support to the agent loop. Tool definitions can now include a toModelOutput function that transforms the raw tool result before it's sent to the model, while preserving the raw result in storage. This matches the AI SDK toModelOutput convention — the function receives the raw output directly and returns { type: 'text', value: string } or { type: 'content', value: ContentPart[] }. (#13171)
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
const weatherTool = createTool({
id: 'weather',
inputSchema: z.object({ city: z.string() }),
execute: async ({ city }) => ({
city,
temperature: 72,
conditions: 'sunny',
humidity: 45,
raw_sensor_data: [0.12, 0.45, 0.78],
}),
// The model sees a concise summary instead of the full JSON
toModelOutput: output => ({
type: 'text',
value: `${output.city}: ${output.temperature}°F, ${output.conditions}`,
}),
});
Added mastra_workspace_grep workspace tool for regex-based content search across files. This complements the existing semantic search tool by providing direct pattern matching with support for case-insensitive search, file filtering by extension, context lines, and result limiting. (#13010)
The tool is automatically available when a workspace has a filesystem configured:
import { Workspace, WORKSPACE_TOOLS } from '@mastra/core/workspace';
import { LocalFilesystem } from '@mastra/core/workspace';
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: './my-project' }),
});
// The grep tool is auto-injected and available as:
// WORKSPACE_TOOLS.SEARCH.GREP → 'mastra_workspace_grep'
Removed outputSchema from workspace tools to return raw text instead of JSON, optimizing for token usage and LLM performance. Structured metadata that was previously returned in tool output is now emitted as data-workspace-metadata chunks via writer.custom(), keeping it available for UI consumption without passing it to the LLM. Tools are also extracted into individual files and can be imported directly (e.g. import { readFileTool } from '@mastra/core/workspace'). (#13166)
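Individual tools can now be imported directly instead of going through createWorkspaceTools; a minimal sketch (readFileTool is named in this entry; the registration shape around it is an assumption):

```typescript
import { readFileTool } from '@mastra/core/workspace';

// Register only the tools you need (agent config abbreviated / illustrative)
const tools = { read_file: readFileTool };

// The tool now returns raw text to the LLM; structured metadata is emitted
// separately as `data-workspace-metadata` chunks via writer.custom(), so
// UIs can still consume it without spending model tokens on it.
```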
dependencies updates: (#13127)
hono@^4.11.9 ↗︎ (from ^4.11.3, in dependencies)
dependencies updates: (#13167)
lru-cache@^11.2.6 ↗︎ (from ^11.2.2, in dependencies)
Export AnyWorkspace type from @mastra/core/workspace for accepting any Workspace regardless of generic parameters. Updates Agent and Mastra to use AnyWorkspace so workspaces with typed mounts/sandbox (e.g. E2BSandbox, GCSFilesystem) are accepted without type errors. (#13155)
Update provider registry and model documentation with latest models and providers (e37ef84)
Fixed skill processor tools (skill-activate, skill-search, skill-read-reference, skill-read-script, skill-read-asset) being incorrectly suspended for approval when requireToolApproval: true is set. These internal tools now bypass the approval check and execute directly. (#13160)
Fixed a bug where requestContext metadata was not propagated to child spans. When using requestContextKeys, only root spans were enriched with request context values — child spans (e.g. agent_run inside a workflow) were missing them. All spans in a trace are now correctly enriched. Fixes #12818. (#12819)
Fixed semantic recall search in Mastra Studio returning no results when using non-default embedding dimensions (e.g., fastembed with 384-dim). The SemanticRecall processor now probes the embedder for its actual output dimension, ensuring the vector index name matches between write and read paths. Previously, the processor defaulted to a 1536-dim index name regardless of the actual embedder, causing a mismatch with the dimension-aware index name used by Studio's search. Fixes #13039 (#13059)
Fixed CompositeFilesystem instructions: agents and tools no longer receive an incorrect claim that files written via workspace tools are accessible at sandbox paths. The instructions now accurately describe only the available mounted filesystems. (#13221)
Fixed onChunk callback to receive raw Mastra chunks instead of AI SDK v5 converted chunks for tool results. Also added missing onChunk calls for tool-error chunks and tool-result chunks in mixed-error scenarios. (#13243)
Fixed tool execution errors stopping the agentic loop. The agent now continues after tool errors, allowing the model to see the error and retry with corrected arguments. (#13242)
Added runtime requestContext forwarding to tool executions. (#13094)
Tools invoked within agentic workflow steps now receive the caller's requestContext — including authenticated API clients, feature flags, and user metadata set by middleware. Runtime requestContext is preferred over build-time context when both are available.
Why: Previously, requestContext values were silently dropped in two places: (1) the workflow loop stream created a new empty RequestContext instead of forwarding the caller's, and (2) createToolCallStep didn't pass requestContext in tool options. This aligns both the agent generate/stream and agentic workflow paths with the agent network path, where requestContext was already forwarded correctly.
Before: Tools received an empty requestContext, losing all values set by the workflow step.
// requestContext with auth data set in workflow step
requestContext.set('apiClient', authedClient);
// tool receives empty RequestContext — apiClient is undefined
After: Pass requestContext via MastraToolInvocationOptions and tools receive it.
// requestContext with auth data flows through to the tool
requestContext.set('apiClient', authedClient);
// tool receives the same RequestContext — apiClient is available
Fixes #13088
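The precedence rule described above can be sketched as a small resolver. This is a simplified model, not Mastra's internal code — `resolveRequestContext` and the `Map`-based context are stand-ins for illustration:

```typescript
// Illustrative sketch: runtime requestContext wins over build-time context
// when both are present, and neither is replaced with a fresh empty context.
type RequestContext = Map<string, unknown>;

function resolveRequestContext(
  buildTime: RequestContext | undefined,
  runtime: RequestContext | undefined,
): RequestContext {
  return runtime ?? buildTime ?? new Map();
}

const buildTimeCtx: RequestContext = new Map([["apiClient", "build-client"]]);
const runtimeCtx: RequestContext = new Map([["apiClient", "authed-client"]]);

// The tool sees the caller's runtime value, not an empty context.
const forwarded = resolveRequestContext(buildTimeCtx, runtimeCtx);
```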
Fixed tsc out-of-memory crash caused by step-schema.d.ts expanding to 50k lines. Added explicit type annotations to all exported Zod schema constants, reducing declaration output from 49,729 to ~500 lines without changing runtime behavior. (#13229)
Fixed TypeScript type generation hanging or running out of memory when packages depend on @mastra/core tool types. Changed ZodLikeSchema from a nominal union type to structural typing, which prevents TypeScript from performing deep comparisons of zod v3/v4 type trees during generic inference. (#13239)
Fixed tool execution errors being emitted as tool-result instead of tool-error in fullStream. Previously, when a tool's execute function threw an error, the error was caught and returned as a value, causing the stream to emit a tool-result chunk containing the error object. Now errors are properly propagated, so the stream emits tool-error chunks, allowing consumers (including the @mastra/ai-sdk conversion pipeline) to correctly distinguish between successful tool results and failed tool executions. Fixes #13123. (#13147)
Fixed thread title not being generated for pre-created threads. When threads were created before starting a conversation (e.g., for URL routing or storing metadata), the title stayed as a placeholder because the title generation condition checked whether the thread existed rather than whether it had a title. Threads created without an explicit title now get an empty title instead of a placeholder, and title generation fires whenever a thread has no title. Resolves #13145. (#13151)
Fixed inputData in dowhile and dountil loop condition functions to be properly typed as the step's output schema instead of any. This means you no longer need to manually cast inputData in your loop conditions — TypeScript will now correctly infer the type from your step's outputSchema. (#12977)
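The typing improvement can be illustrated with a minimal generic loop helper in plain TypeScript (this is not the Mastra workflow API, just a model of the inference): the condition's `inputData` is inferred from the step's output type instead of being `any`.

```typescript
// Minimal sketch: a dowhile-style helper whose condition is typed
// by the step's output, so no manual cast is needed.
type Step<TOutput> = { run: () => TOutput };

function dowhile<TOutput>(
  step: Step<TOutput>,
  condition: (args: { inputData: TOutput }) => boolean,
): TOutput {
  let out = step.run();
  while (condition({ inputData: out })) {
    out = step.run();
  }
  return out;
}

// inputData is inferred as { count: number } — TypeScript checks .count.
let i = 0;
const result = dowhile(
  { run: () => ({ count: ++i }) },
  ({ inputData }) => inputData.count < 3,
);
```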
Migrated MastraCode from the prototype harness to the generic CoreHarness from @mastra/core. The createMastraCode function is now fully configurable with optional parameters for modes, subagents, storage, tools, and more. Removed the deprecated prototype harness implementation. (#13245)
Fixed the writer object being undefined in processOutputStream, allowing output processors to emit custom events to the stream during chunk processing. This enables use cases like streaming moderation results back to the client. (#13056)
Fixed sub-agent tool approval resume flow. When a sub-agent tool required approval and was approved, the agent would restart from scratch instead of resuming, causing an infinite loop. The resume data is now correctly passed through for agent tools so they properly resume after approval. (#13241)
Dataset schemas now appear in the Edit Dataset dialog. Previously the inputSchema and groundTruthSchema fields were not passed to the dialog, so editing a dataset always showed empty schemas. (#13175)
Schema edits in the JSON editor no longer cause the cursor to jump to the top of the field. Typing {"type": "object"} in the schema editor now behaves like a normal text input instead of resetting on every keystroke.
Validation errors are now surfaced when updating a dataset schema that conflicts with existing items. For example, adding a required: ["name"] constraint when existing items lack a name field now shows "2 existing item(s) fail validation" in the dialog instead of silently dropping the error.
Disabling a dataset schema from the Studio UI now correctly clears it. Previously the server converted null (disable) to undefined (no change), so the old schema persisted and validation continued.
Workflow schemas fetched via client.getWorkflow().getSchema() are now correctly parsed. The server serializes schemas with superjson, but the client was using plain JSON.parse, yielding a {json: {...}} wrapper instead of the actual JSON Schema object.
Fixed network mode messages missing metadata for filtering. All internal network messages (sub-agent results, tool execution results, workflow results) now include metadata.mode: 'network' in their content metadata, making it possible to filter them from user-facing messages without parsing JSON content. Previously, consumers had to parse the JSON body of each message to check for isNetwork: true — now they can simply check message.content.metadata.mode === 'network'. Fixes #13106. (#13144)
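Filtering on the new flag can be as simple as the sketch below (the message shape here is simplified for illustration):

```typescript
// Sketch: drop internal network messages by checking the metadata flag,
// with no JSON parsing of message bodies.
type Message = {
  role: string;
  content: { text?: string; metadata?: { mode?: string } };
};

function userFacing(messages: Message[]): Message[] {
  return messages.filter(m => m.content.metadata?.mode !== "network");
}

const messages: Message[] = [
  { role: "assistant", content: { text: "hi" } },
  { role: "assistant", content: { text: "{...}", metadata: { mode: "network" } } },
];
// userFacing(messages) keeps only the first message
```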
Fixed sub-agent memory context pollution that caused 'Exhausted all fallback models' errors when using Observational Memory with sub-agents. The parent agent's memory context is now preserved across sub-agent tool execution. (#13051)
Fixed CompositeAuth losing public and protected route configurations from underlying auth providers. Routes marked as public or protected now work correctly when deployed to Mastra Cloud. (#13086)
Trimmed the agent experiment result output to only persist relevant fields instead of the entire FullOutput blob. The stored output now contains: text, object, toolCalls, toolResults, sources, files, usage, reasoningText, traceId, and error. (#13158)
Dropped fields like steps, response, messages, rememberedMessages, request, providerMetadata, warnings, scoringData, suspendPayload, and other provider/debugging internals that were duplicated elsewhere or not useful for experiment evaluation.
Fixed .branch() condition receiving undefined inputData when resuming a suspended nested workflow after .map(). (#13055)
Previously, when a workflow used .map() followed by .branch() and a nested workflow inside the branch called suspend(), resuming would fail with TypeError: Cannot read properties of undefined because the branch conditions were unnecessarily re-evaluated with stale data.
Resume now skips condition re-evaluation for .branch() entries and goes directly to the correct suspended branch, matching the existing behavior for .parallel() entries.
Fixes #12982
Updated dependencies [1415bcd]:
Updated @clack/prompts to v1 (#13095)
dependencies updates: (#13127)
hono@^4.11.9 ↗︎ (from ^4.11.3, in dependencies)
dependencies updates: (#13152)
@babel/core@^7.29.0 ↗︎ (from ^7.28.6, in dependencies)
@babel/preset-typescript@^7.28.5 ↗︎ (from ^7.27.1, in dependencies)
Fixes mastra build on Windows that incorrectly added spurious npm dependencies from monorepo directory names. (#13035)
Workspace paths are normalized to use forward slashes so import-path comparisons match Rollup on Windows.
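The normalization is a one-line transform; the sketch below shows the idea (the function name is illustrative, not the build's internal helper):

```typescript
// Sketch: convert Windows backslash paths to the forward-slash form
// Rollup emits, so import-path comparisons match on Windows.
function toPosixPath(p: string): string {
  return p.replace(/\\/g, "/");
}

// "packages\core\src\index.ts" → "packages/core/src/index.ts"
```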
Updated dependencies [252580a, f8e819f, 252580a, 5c75261, e27d832, e37ef84, 6fdd3d4, 10cf521, efdb682, 0dee7a0, 04c2c8e, 84fb4bf, 02dc07a, bb7262b, cf1c6e7, 5ffadfe, 1e1339c, d03df73, 79b8f45, 9bbf08e, 6909c74, 0a25952, ffa5468, 3264a04, 6fdd3d4, 10cf521, 088d9ba, 74fbebd, aea6217, b6a855e, ae408ea, 17e942e, 2015cf9, d03df73, 7ef454e, 2be1d99, 2708fa1, ba74aef, ba74aef, ec53e89, 9b5a8cb, 607e66b, a215d06, 6909c74, 192438f]:
compromise@^14.14.5 ↗︎ (from ^14.14.4, in dependencies)
pino-pretty@^13.1.3 ↗︎ (from ^13.0.0, in dependencies)
Add instruction property to observational memory configs (#13240)
Adds optional instruction field to ObservationConfig and ReflectionConfig that allows users to extend the built-in observer/reflector system prompts with custom guidance.
Example:
const memory = new ObservationalMemory({
model: openai('gpt-4'),
observation: {
instruction: 'Focus on user preferences about food and dietary restrictions.',
},
reflection: {
instruction: 'Prioritize consolidating health-related observations together.',
},
});
dependencies updates: (#13167)
lru-cache@^11.2.6 ↗︎ (from ^11.2.2, in dependencies)
Reflection retry logic now attempts compression up to level 3, so reflections more consistently shrink to meet token limits (#13170)
Automatic trimming is now less aggressive (threshold based on sliceTokenEstimate * 0.75).
The tokensBuffered marker now reports the actual slice size rather than total observation tokens, giving accurate size monitoring.
These changes reduce failed reflections and make reported metrics match what is actually being processed.
Fixed a bug where the OM context window would jump to extremely high token counts (e.g. 16k → 114k) after observation activation. Two issues were causing this: (#13070)
data-om-activation marker parts (which contain the full observation text, up to 150k+ characters) were being counted as message tokens when loaded from the database. These parts are never sent to the LLM and are now skipped during token counting.
persistMarkerToMessage and persistMarkerToStorage now deduplicate by type + cycleId before adding a marker.
Fixed thread title not being generated for pre-created threads. When threads were created before starting a conversation (e.g., for URL routing or storing metadata), the title stayed as a placeholder because the title generation condition checked whether the thread existed rather than whether it had a title. Threads created without an explicit title now get an empty title instead of a placeholder, and title generation fires whenever a thread has no title. Resolves #13145. (#13151)
Fixed sub-agent memory context pollution that caused 'Exhausted all fallback models' errors when using Observational Memory with sub-agents. The parent agent's memory context is now preserved across sub-agent tool execution. (#13051)
Updated dependencies [252580a, f8e819f, 5c75261, e27d832, e37ef84, 6fdd3d4, 10cf521, efdb682, 0dee7a0, 04c2c8e, 02dc07a, bb7262b, 1415bcd, cf1c6e7, 5ffadfe, 1e1339c, d03df73, 79b8f45, 9bbf08e, 0a25952, ffa5468, 3264a04, 6fdd3d4, 088d9ba, 74fbebd, aea6217, b6a855e, ae408ea, 17e942e, 2015cf9, 7ef454e, 2be1d99, 2708fa1, ba74aef, ba74aef, ec53e89, 9b5a8cb, 607e66b, a215d06, 6909c74, 192438f]:
dependencies updates: (#12921)
@assistant-ui/react@^0.12.10 ↗︎ (from ^0.12.3, in dependencies)
@assistant-ui/react-markdown@^0.12.3 ↗︎ (from ^0.12.1, in dependencies)
@assistant-ui/react-syntax-highlighter@^0.12.3 ↗︎ (from ^0.12.1, in dependencies)
dependencies updates: (#13100)
@codemirror/view@^6.39.14 ↗︎ (from ^6.39.13, in dependencies)
Fixed React error #310 crash when using observational memory in Mastra Studio. The chat view would crash when observation markers appeared alongside regular tool calls during streaming. (#13216)
Fixed dragged column appearing offset from cursor when mapping CSV columns in dataset import dialog (#13176)
Fixed schema form to apply changes only on Save click instead of every keystroke. Removed AgentPromptExperimentProvider in favor of inline prompt rendering. Switched hooks to use merged request context for proper request-scoped data fetching. (#13034)
Added a "Try to connect" button in the MCP client sidebar to preview available tools before creating. Fixed SSE response parsing for MCP servers returning event streams. (#13093)
Fix dataset import creating a new version per item. CSV and JSON import dialogs now use the batch insert endpoint so all imported items land on a single version. (#13214)
Fix experiment result detail panel showing output data under "Input" label, add missing Output section, and add Input column to experiment results list table. (#13165)
Updated style of Button and Select experimental variants (#13186)
Added requestContext support to listScorers for request-scoped scorer resolution (#13034)
Added mount picker for skill installation when multiple filesystem mounts are available. Skills table and detail page now show skill paths and mount labels. (#13031)
Removed unused Dataset Item Edit Dialog (#13199)
Hide Dataset Item History panel when item is edited (#13195)
Dataset schemas now appear in the Edit Dataset dialog. Previously the inputSchema and groundTruthSchema fields were not passed to the dialog, so editing a dataset always showed empty schemas. (#13175)
Schema edits in the JSON editor no longer cause the cursor to jump to the top of the field. Typing {"type": "object"} in the schema editor now behaves like a normal text input instead of resetting on every keystroke.
Validation errors are now surfaced when updating a dataset schema that conflicts with existing items. For example, adding a required: ["name"] constraint when existing items lack a name field now shows "2 existing item(s) fail validation" in the dialog instead of silently dropping the error.
Disabling a dataset schema from the Studio UI now correctly clears it. Previously the server converted null (disable) to undefined (no change), so the old schema persisted and validation continued.
Workflow schemas fetched via client.getWorkflow().getSchema() are now correctly parsed. The server serializes schemas with superjson, but the client was using plain JSON.parse, yielding a {json: {...}} wrapper instead of the actual JSON Schema object.
Fixed "Tried to unmount a fiber that is already unmounted" error in the playground agent chat. Switching threads rapidly or creating new threads no longer causes this console error. (#13182)
Updated visuals of Dataset Items List empty state (#13203)
Updated workspace tool badges for execute command and list files tools. Execute command badge now shows running state for silent commands and correctly scopes output to individual tool calls during parallel execution. (#13166)
Updated dependencies [252580a, f8e819f, 5c75261, e27d832, e37ef84, 6fdd3d4, 10cf521, efdb682, 0dee7a0, 04c2c8e, 55a0ab1, 02dc07a, bb7262b, 1415bcd, cf1c6e7, cf1c6e7, 5ffadfe, 1e1339c, d03df73, 79b8f45, 9bbf08e, 0a25952, ffa5468, 55a0ab1, 3264a04, 6fdd3d4, 088d9ba, 74fbebd, ec36c0c, aea6217, b6a855e, ae408ea, 17e942e, 2015cf9, 7ef454e, 2be1d99, 2708fa1, ba74aef, ba74aef, ec53e89, 9b5a8cb, 607e66b, a215d06, 6909c74, 192438f]:
The zodToJsonSchema function now reliably detects and routes Zod v3 vs v4 schemas regardless of which version the ambient zod import resolves to. Previously, the detection relied on checking 'toJSONSchema' in z against the ambient z import, which could resolve to either v3 or v4 depending on the environment (monorepo vs global install). This caused v3 schemas to be passed to v4's toJSONSchema() (crashing with "Cannot read properties of undefined (reading 'def')") or v4 schemas to be passed to the v3 converter (producing schemas missing the type field).
The fix explicitly imports z as zV4 from zod/v4 and routes based on the schema's own _zod property, making the behavior environment-independent.
Also migrates all mastracode tool files from zod/v3 to zod imports now that the schema-compat fix supports both versions correctly.
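The detection strategy can be modeled in a few lines. The stand-in objects below are illustrative only — real schemas come from zod/v3 and zod/v4, and `_zod` is the structural marker present on v4 schemas:

```typescript
// Simplified sketch of structural version detection: route on the schema's
// own `_zod` marker rather than on whichever version the ambient `zod`
// import happens to resolve to.
function isZodV4Schema(schema: object): boolean {
  return "_zod" in schema;
}

function routeToConverter(schema: object): "v4" | "v3" {
  // A v4 schema goes to v4's toJSONSchema(); anything else goes to the v3 converter.
  return isZodV4Schema(schema) ? "v4" : "v3";
}

// Stand-in objects for illustration (hypothetical, not real zod schemas).
const fakeV4 = { _zod: {}, parse: () => {} };
const fakeV3 = { _def: {}, parse: () => {} };
```

Because the check inspects the schema instance itself, the result is the same in a monorepo, a global install, or any mixed-version environment.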
dependencies updates: (#13127)
hono@^4.11.9 ↗︎ (from ^4.11.3, in dependencies)
Use dynamic import in server for stored skills related feature. (#13217)
Register mastra_workspace_grep tool in studio workspace tools listing so it appears in the agent metadata UI when a workspace has a filesystem configured. (#13010)
Fixed skill removal for glob-discovered skills. The remove handler now looks up a skill's actual discovered path instead of assuming the hardcoded .agents/skills directory, so skills discovered via glob patterns can be correctly deleted. (#13023)
Dataset schemas now appear in the Edit Dataset dialog. Previously the inputSchema and groundTruthSchema fields were not passed to the dialog, so editing a dataset always showed empty schemas. (#13175)
Added mount-aware skill installation for CompositeFilesystem workspaces. The install, remove, and update handlers now resolve the correct mount when multiple filesystem mounts are configured. Workspace info endpoint now includes mount metadata when available. (#13031)
Updated dependencies [252580a, f8e819f, 5c75261, e27d832, e37ef84, 6fdd3d4, 10cf521, efdb682, 0dee7a0, 04c2c8e, 02dc07a, bb7262b, cf1c6e7, 5ffadfe, 1e1339c, d03df73, 79b8f45, 9bbf08e, 0a25952, ffa5468, 3264a04, 6fdd3d4, 088d9ba, 74fbebd, aea6217, b6a855e, ae408ea, 17e942e, 2015cf9, 7ef454e, 2be1d99, 2708fa1, ba74aef, ba74aef, ec53e89, 9b5a8cb, 607e66b, a215d06, 6909c74, 192438f]:
Mastra now has first-class evaluation primitives: versioned Datasets (with JSON Schema validation and SCD-2 item versioning) and Experiments that run agents against datasets with configurable scorers and result tracking. This ships end-to-end across @mastra/core APIs, new /datasets REST endpoints in @mastra/server, and a full Studio UI for managing datasets, triggering experiments, and comparing results.
Workspace lifecycle interfaces were split into FilesystemLifecycle and SandboxLifecycle, and MastraFilesystem now supports onInit/onDestroy callbacks. Filesystem path resolution and metadata were improved (generic FilesystemInfo<TMetadata>, provider-specific metadata, safer instructions for uncontained filesystems) and filesystem info is now exposed via the workspaces API response.
Workflows now emit a workflow-step-progress stream event for foreach steps (completed/total, current index, per-iteration status/output), supported by both execution engines. Studio renders real-time progress bars, and @mastra/react watch hooks now accumulate foreachProgress into step state.
@mastra/memory: observe() now takes a single object parameter (e.g., observe({ threadId, resourceId })) instead of positional arguments.
Added Datasets and Experiments to core. Datasets let you store and version collections of test inputs with JSON Schema validation. Experiments let you run AI outputs against dataset items with configurable scorers to track quality over time. (#12747)
New exports from @mastra/core/datasets:
DatasetsManager — orchestrates dataset CRUD, item versioning (SCD-2), and experiment execution
Dataset — single-dataset handle for adding items and running experiments
New storage domains:
DatasetsStorage — abstract base class for dataset persistence (datasets, items, versions)
ExperimentsStorage — abstract base class for experiment lifecycle and result tracking
Example:
import { Mastra } from "@mastra/core";
const mastra = new Mastra({
/* ... */
});
const dataset = await mastra.datasets.create({ name: "my-eval-set" });
await dataset.addItems([{ input: { query: "What is 2+2?" }, groundTruth: { answer: "4" } }]);
const result = await dataset.runExperiment({
targetType: "agent",
targetId: "my-agent",
scorerIds: ["accuracy"]
});
Fix LocalFilesystem.resolvePath handling of absolute paths and improve filesystem info. (#12971)
FilesystemInfo is now generic (FilesystemInfo<TMetadata>) so providers can type their metadata.
Added provider-specific metadata (basePath, contained) to the metadata in LocalFilesystem.getInfo().
Add workflow-step-progress stream event for foreach workflow steps. Each iteration emits a progress event with completedCount, totalCount, currentIndex, iterationStatus (success | failed | suspended), and optional iterationOutput. Both the default and evented execution engines emit these events. (#12838)
The Mastra Studio UI now renders a progress bar with an N/total counter on foreach nodes, updating in real time as iterations complete:
// Consuming progress events from the workflow stream
const run = workflow.createRun();
const result = await run.start({ inputData });
const stream = result.stream;
for await (const chunk of stream) {
if (chunk.type === "workflow-step-progress") {
console.log(`${chunk.payload.completedCount}/${chunk.payload.totalCount} - ${chunk.payload.iterationStatus}`);
}
}
MCP Client Storage
New storage domain for persisting MCP client configurations with CRUD operations. Each MCP client can contain multiple servers with independent tool selection:
// Store an MCP client with multiple servers
await storage.mcpClients.create({
id: "my-mcp",
name: "My MCP Client",
servers: {
"github-server": { url: "https://mcp.github.com/sse" },
"slack-server": { url: "https://mcp.slack.com/sse" }
}
});
LibSQL, PostgreSQL, and MongoDB storage adapters all implement the new MCP client domain.
ToolProvider Interface
New ToolProvider interface at @mastra/core/tool-provider enables third-party tool catalog integration (e.g., Composio, Arcade AI):
import type { ToolProvider } from '@mastra/core/tool-provider';
// Providers implement: listToolkits(), listTools(), getToolSchema(), resolveTools()
resolveTools() receives requestContext from the current request, enabling per-user API keys and credentials in multi-tenant setups:
const tools = await provider.resolveTools(slugs, configs, {
requestContext: { apiKey: "user-specific-key", userId: "tenant-123" }
});
Tool Selection Semantics
Both mcpClients and integrationTools on stored agents follow consistent three-state selection:
{ tools: undefined } — provider registered, no tools selected
{ tools: {} } — all tools from provider included
{ tools: { 'TOOL_SLUG': { description: '...' } } } — specific tools with optional overrides
Added (#12764)
Added a suppressFeedback option to hide internal completion‑check messages from the stream. This keeps the conversation history clean while leaving existing behavior unchanged by default.
Example Before:
const agent = await mastra.createAgent({
completion: { validate: true }
});
After:
const agent = await mastra.createAgent({
completion: { validate: true, suppressFeedback: true }
});
Split workspace lifecycle interfaces (#12978)
The shared Lifecycle interface has been split into provider-specific types that match actual usage:
FilesystemLifecycle — two-phase: init() → destroy()
SandboxLifecycle — three-phase: start() → stop() → destroy()
The base Lifecycle type is still exported for backward compatibility.
Added onInit / onDestroy callbacks to MastraFilesystem
The MastraFilesystem base class now accepts optional lifecycle callbacks via MastraFilesystemOptions, matching the existing onStart / onStop / onDestroy callbacks on MastraSandbox.
const fs = new LocalFilesystem({
basePath: "./data",
onInit: ({ filesystem }) => {
console.log("Filesystem ready:", filesystem.status);
},
onDestroy: ({ filesystem }) => {
console.log("Cleaning up...");
}
});
onInit fires after the filesystem reaches ready status (non-fatal on failure). onDestroy fires before the filesystem is torn down.
Update provider registry and model documentation with latest models and providers (7ef618f)
Fixed Anthropic API rejection errors caused by empty text content blocks in assistant messages. During streaming with web search citations, empty text parts could be persisted to the database and then rejected by Anthropic's API with 'text content blocks must be non-empty' errors. The fix filters out these empty text blocks before persistence, ensuring stored conversation history remains valid for Anthropic models. Fixes #12553. (#12711)
Improve error messages when processor workflows or model fallback retries fail. (#12970)
Fixed tool-not-found errors crashing the agentic loop. When a model hallucinates a tool name (e.g., Gemini 3 Flash adding prefixes like creating:view instead of view), the error is now returned to the model as a tool result instead of throwing. This allows the model to self-correct and retry with the correct tool name on the next turn. The error message includes available tool names to help the model recover. Fixes #12895. (#12961)
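The recovery shape can be sketched as follows. The result object here is illustrative, not Mastra's internal chunk format — the point is that the error is surfaced as a tool result naming the available tools:

```typescript
// Sketch: instead of throwing on a hallucinated tool name, return an
// error-shaped tool result that lists the available tools so the model
// can self-correct on the next turn.
function toolNotFoundResult(
  requested: string,
  available: string[],
): { toolName: string; isError: true; message: string } {
  return {
    toolName: requested,
    isError: true,
    message: `Tool "${requested}" not found. Available tools: ${available.join(", ")}`,
  };
}

// e.g. Gemini 3 Flash requesting "creating:view" instead of "view"
const recovery = toolNotFoundResult("creating:view", ["view", "edit", "bash"]);
```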
Fixed structured output failing with Anthropic models when memory is enabled. The error "assistant message in the final position" occurred because the prompt sent to Anthropic ended with an assistant-role message, which is not supported when using output format. Resolves https://github.com/mastra-ai/mastra/issues/12800 (#12835)
commander@^14.0.3 ↗︎ (from ^14.0.2, in dependencies)
Updated dependencies [7ef618f, b373564, 927c2af, 927c2af, 5fbb1a8, b896b41, 6415277, 0831bbb, 6297864, 63f7eda, a5b67a3, 877b02c, 877b02c, d87e96b, 7567222, af71458, eb36bd8, 3cbf121, 6415277]:
Updated dependencies [7ef618f, b373564, 927c2af, b896b41, 6415277, 0831bbb, 63f7eda, a5b67a3, 877b02c, 7567222, af71458, eb36bd8, 3cbf121]:
@mastra/opencode: Add opencode plugin for Observational Memory integration (#12925)
Added standalone observe() API that accepts external messages directly, so integrations can trigger observation without duplicating messages into Mastra's storage.
New exports:
ObserveHooks — lifecycle callbacks (onObservationStart, onObservationEnd, onReflectionStart, onReflectionEnd) for hooking into observation/reflection cycles
OBSERVATION_CONTEXT_PROMPT — preamble that introduces the observations block
OBSERVATION_CONTEXT_INSTRUCTIONS — rules for interpreting observations (placed after the <observations> block)
OBSERVATION_CONTINUATION_HINT — behavioral guidance that prevents models from awkwardly acknowledging the memory system
getOrCreateRecord() — now public, allows eager record initialization before the first observation cycle
import { ObservationalMemory } from "@mastra/memory/processors";
const om = new ObservationalMemory({ storage, model: "google/gemini-2.5-flash" });
// Eagerly initialize a record
await om.getOrCreateRecord(threadId);
// Pass messages directly with lifecycle hooks
await om.observe({
threadId,
messages: myMessages,
hooks: {
onObservationStart: () => console.log("Observing..."),
onObservationEnd: () => console.log("Done!"),
onReflectionStart: () => console.log("Reflecting..."),
onReflectionEnd: () => console.log("Reflected!")
}
});
Breaking: observe() now takes an object param instead of positional args. Update calls from observe(threadId, resourceId) to observe({ threadId, resourceId }).
Fixed observational memory writing non-integer token counts to PostgreSQL, which caused invalid input syntax for type integer errors. Token counts are now correctly rounded to integers before all database writes. (#12976)
Fixed cloneThread not copying working memory to the cloned thread. Thread-scoped working memory is now properly carried over when cloning, and resource-scoped working memory is copied when the clone uses a different resourceId. (#12833)
Updated dependencies [7ef618f, b373564, 927c2af, b896b41, 6415277, 0831bbb, 63f7eda, a5b67a3, 877b02c, 7567222, af71458, eb36bd8, 3cbf121]:
Added Datasets and Experiments UI. Includes dataset management (create, edit, delete, duplicate), item CRUD with CSV/JSON import and export, SCD-2 version browsing and comparison, experiment triggering with scorer selection, experiment results with trace visualization, and cross-experiment comparison with score deltas. Uses coreFeatures runtime flag for feature gating instead of build-time env var. (#12747)
Add workflow-step-progress stream event for foreach workflow steps. Each iteration emits a progress event with completedCount, totalCount, currentIndex, iterationStatus (success | failed | suspended), and optional iterationOutput. Both the default and evented execution engines emit these events. (#12838)
The Mastra Studio UI now renders a progress bar with an N/total counter on foreach nodes, updating in real time as iterations complete:
// Consuming progress events from the workflow stream
const run = workflow.createRun();
const result = await run.start({ inputData });
const stream = result.stream;
for await (const chunk of stream) {
if (chunk.type === "workflow-step-progress") {
console.log(`${chunk.payload.completedCount}/${chunk.payload.totalCount} - ${chunk.payload.iterationStatus}`);
}
}
Revamped agent CMS experience with dedicated route pages for each section (Identity, Instruction Blocks, Tools, Agents, Scorers, Workflows, Memory, Variables) and sidebar navigation (#13016)
Added ability to create sub-agents on-the-fly via a SideDialog in the Sub-Agents section of the agent editor (#12952)
dependencies updates: (#12949)
@codemirror/view@^6.39.13 ↗︎ (from ^6.39.12, in dependencies)
Removed experiment mode from the agent prompt sidebar. The system prompt is now displayed as readonly. (#12994)
Aligned frontend rule engine types with backend, added support for greater_than_or_equal, less_than_or_equal, exists, and not_exists operators, and switched instruction blocks to use RuleGroup (#12864)
Skip awaitBufferStatus calls when observational memory is disabled. Previously the Studio sidebar would unconditionally hit /memory/observational-memory/buffer-status after every agent message, which returns a 400 when OM is not configured and halts agent execution. (#13025)
Fix prompt experiment localStorage persisting stale prompts: only save to localStorage when the user edits the prompt away from the code-defined value, and clear it when they match. Previously, the code-defined prompt was eagerly saved on first load, causing code changes to agent instructions to be ignored. (#12929)
Fixed chat briefly showing an empty conversation after sending the first message on a new thread. (#13018)
Fixed missing validation for the instructions field in the scorer creation form and replaced manual submission state tracking with the mutation hook's built-in pending state (#12993)
Default Studio file browser to basePath when filesystem containment is disabled, preventing the browser from showing the host root directory. (#12971)
Updated dependencies [7ef618f, b373564, 927c2af, 927c2af, 3da8a73, 927c2af, b896b41, 6415277, 4ba40dc, 0831bbb, 63f7eda, a5b67a3, 877b02c, 877b02c, 7567222, 40f224e, af71458, eb36bd8, 3cbf121]:
Added /datasets REST endpoints for full CRUD on datasets, items, versions, experiments, and experiment results. Includes batch operations and experiment comparison. (#12747)
Fixed the /api/tools endpoint returning an empty list even when tools are registered on the Mastra instance. Closes #12983 (#13008)
Fixed custom API routes registered via registerApiRoute() being silently ignored by Koa, Express, Fastify, and Hono server adapters. Routes previously appeared in the OpenAPI spec but returned 404 at runtime. Custom routes now work correctly across all server adapters. (#12960)
Example:
import Koa from "koa";
import { Mastra } from "@mastra/core";
import { registerApiRoute } from "@mastra/core/server";
import { MastraServer } from "@mastra/koa";
const mastra = new Mastra({
server: {
apiRoutes: [
registerApiRoute("/hello", {
method: "GET",
handler: async (c) => c.json({ message: "Hello!" })
})
]
}
});
const app = new Koa();
const server = new MastraServer({ app, mastra });
await server.init();
// GET /hello now returns 200 instead of 404
Added API routes for stored MCP clients and tool provider discovery. (#12974)
Stored MCP Client Routes
New REST endpoints for managing stored MCP client configurations:
GET /api/stored-mcp-clients — List all stored MCP clients
GET /api/stored-mcp-clients/:id — Get a specific MCP client
POST /api/stored-mcp-clients — Create a new MCP client
PATCH /api/stored-mcp-clients/:id — Update an existing MCP client
DELETE /api/stored-mcp-clients/:id — Delete an MCP client
// Create a stored MCP client
const response = await fetch("/api/stored-mcp-clients", {
method: "POST",
body: JSON.stringify({
id: "my-mcp-client",
name: "My MCP Client",
servers: {
"github-server": { url: "https://mcp.github.com/sse" }
}
})
});
Tool Provider Routes
New REST endpoints for browsing registered tool providers and their tools:
- `GET /api/tool-providers` — List all registered tool providers with metadata
- `GET /api/tool-providers/:providerId/toolkits` — List toolkits for a provider
- `GET /api/tool-providers/:providerId/tools` — List tools (with optional toolkit/search filtering)
- `GET /api/tool-providers/:providerId/tools/:toolSlug/schema` — Get input schema for a tool

```typescript
// List all registered tool providers
const providers = await fetch("/api/tool-providers");

// Browse tools in a specific toolkit
const tools = await fetch("/api/tool-providers/composio/tools?toolkit=github");

// Get schema for a specific tool
const schema = await fetch("/api/tool-providers/composio/tools/GITHUB_LIST_ISSUES/schema");
```
Updated stored agent schemas to include mcpClients and integrationTools conditional fields, and updated agent version tracking accordingly.
Fixed requestContextSchema missing from the agent list API response. Agents with a requestContextSchema now correctly include it when listed via GET /agents. (#12954)
Expose filesystem info from getInfo() in the GET /api/workspaces/:id API response, including provider type, status, readOnly, and provider-specific metadata. (#12971)
Updated dependencies [7ef618f, b373564, 927c2af, b896b41, 6415277, 0831bbb, 63f7eda, a5b67a3, 877b02c, 7567222, af71458, eb36bd8, 3cbf121]:
Observational memory now buffers background observations/reflections by default to avoid blocking as conversations grow, and introduces structured streaming status/events (data-om-status plus buffering start/end/failed markers) for better UI/telemetry.
Workspaces can now mount multiple filesystem providers (S3/GCS/local/etc.) into a single unified directory tree via CompositeFilesystem, so agents and tools can access files across backends through one path structure.
Added mount support to workspaces, so you can combine multiple storage providers (S3, GCS, local disk, etc.) under a single directory tree. This lets agents access files from different sources through one unified filesystem. (#12851)
Why: Previously a workspace could only use one filesystem. With mounts, you can organize files from different providers under different paths — for example, S3 data at /data and GCS models at /models — without agents needing to know which provider backs each path.
What's new:
- `CompositeFilesystem` for combining multiple filesystems under one tree
- New error types (`SandboxTimeoutError`, `MountError`)
- `MastraFilesystem` and `MastraSandbox` base classes with safer concurrent lifecycle handling

```typescript
import { Workspace, CompositeFilesystem } from "@mastra/core/workspace";

// Mount multiple filesystems under one tree
const composite = new CompositeFilesystem({
  mounts: {
    "/data": s3Filesystem,
    "/models": gcsFilesystem
  }
});

const workspace = new Workspace({
  filesystem: composite,
  sandbox: e2bSandbox
});
```
Stored agent fields (tools, model, workflows, agents, memory, scorers, inputProcessors, outputProcessors, defaultOptions) can now be configured as conditional variants with rule groups that evaluate against request context at runtime. All matching variants accumulate — arrays are concatenated and objects are shallow-merged — so agents dynamically compose their configuration based on the incoming request context.
New requestContextSchema field
Stored agents now accept an optional requestContextSchema (JSON Schema) that is converted to a Zod schema and passed to the Agent constructor, enabling request context validation.
Conditional field example
```typescript
await agentsStore.create({
  agent: {
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
    model: { provider: "openai", name: "gpt-4" },
    tools: [
      { value: { "basic-tool": {} } },
      {
        value: { "premium-tool": {} },
        rules: {
          operator: "AND",
          conditions: [{ field: "tier", operator: "equals", value: "premium" }]
        }
      }
    ],
    requestContextSchema: {
      type: "object",
      properties: { tier: { type: "string" } }
    }
  }
});
```
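The accumulation rule described above (arrays concatenate, objects shallow-merge) can be sketched as a small pure function. This is an illustrative sketch only: the `Variant` type and `accumulate()` helper are hypothetical, not part of the Mastra API, and rule evaluation is reduced to a precomputed `matches` flag.

```typescript
// Illustrative sketch of how matching conditional variants might accumulate:
// arrays are concatenated, objects are shallow-merged, scalars are replaced.
// Variant and accumulate() are hypothetical names, not Mastra's API.
type Variant = { value: unknown; matches: boolean };

function accumulate(variants: Variant[]): unknown {
  let result: unknown;
  for (const { value, matches } of variants) {
    if (!matches) continue;
    if (result === undefined) {
      result = value;
    } else if (Array.isArray(result) && Array.isArray(value)) {
      result = [...result, ...value]; // arrays concatenate
    } else if (result && value && typeof result === "object" && typeof value === "object") {
      result = { ...result, ...value }; // objects shallow-merge
    } else {
      result = value; // scalars: the last matching variant wins
    }
  }
  return result;
}

// Example: a premium-tier request matches both tool variants above.
const tools = accumulate([
  { value: { "basic-tool": {} }, matches: true },
  { value: { "premium-tool": {} }, matches: true },
]) as Record<string, object>;
// tools now contains both "basic-tool" and "premium-tool"
```

In the example above, the rule-less basic-tool variant plays the role of the always-matching base configuration that conditional variants extend.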
Add native @ai-sdk/groq support to model router. Groq models now use the official AI SDK package instead of falling back to OpenAI-compatible mode. (#12741)
- `scorer-definitions` storage domain for storing LLM-as-judge and preset scorer configurations in the database
- `VersionedStorageDomain` generic base class that unifies `AgentsStorage`, `PromptBlocksStorage`, and `ScorerDefinitionsStorage` with shared CRUD methods (`create`, `getById`, `getByIdResolved`, `update`, `delete`, `list`, `listResolved`)
- Preset and custom LLM-judge config with top-level `type`, `instructions`, `scoreRange`, and `presetConfig` fields
- `MastraEditor` reworked to a namespace pattern (`editor.agent.*`, `editor.scorer.*`, `editor.prompt.*`) backed by a `CrudEditorNamespace` base class with built-in caching and an `onCacheEvict` hook
- `rawConfig` support on `MastraBase` and `MastraScorer` via `toRawConfig()`, so hydrated primitives carry their stored configuration
- Registration helpers on the `Mastra` class (`addPromptBlock`, `removePromptBlock`, `addScorer`, `removeScorer`)

Creating a stored scorer (LLM-as-judge):
```typescript
const scorer = await editor.scorer.create({
  id: "my-scorer",
  name: "Response Quality",
  type: "llm-judge",
  instructions: "Evaluate the response for accuracy and helpfulness.",
  model: { provider: "openai", name: "gpt-4o" },
  scoreRange: { min: 0, max: 1 }
});
```
Retrieving and resolving a stored scorer:
```typescript
// Fetch the stored definition from DB
const definition = await editor.scorer.getById("my-scorer");

// Resolve it into a runnable MastraScorer instance
const runnableScorer = editor.scorer.resolve(definition);

// Execute the scorer
const result = await runnableScorer.run({
  input: "What is the capital of France?",
  output: "The capital of France is Paris."
});
```
Editor namespace pattern (before/after):
```typescript
// Before
const agent = await editor.getStoredAgentById("abc");
const prompts = await editor.listPromptBlocks();

// After
const agent = await editor.agent.getById("abc");
const prompts = await editor.prompt.list();
```
Generic storage domain methods (before/after):
```typescript
// Before
const store = storage.getStore("agents");
await store.createAgent({ agent: input });
await store.getAgentById({ id: "abc" });
await store.deleteAgent({ id: "abc" });

// After
const store = storage.getStore("agents");
await store.create({ agent: input });
await store.getById("abc");
await store.delete("abc");
```
Added mount status and error information to filesystem directory listings, so the UI can show whether each mount is healthy or has issues. Improved error handling when mount operations fail. Fixed tree formatter to use case-insensitive sorting to match native tree output. (#12605)
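The case-insensitive ordering mentioned above can be sketched with `localeCompare` (illustrative only; the sample `entries` array is hypothetical and this is not the formatter's actual code):

```typescript
// Illustrative sketch: case-insensitive sorting for tree-style listings, so
// entries with different letter casing interleave the way native `tree` sorts
// them rather than grouping all uppercase names first.
const entries = ["src", "README.md", "docs", "Makefile"];

const sorted = [...entries].sort((a, b) =>
  a.localeCompare(b, undefined, { sensitivity: "base" }),
);
// → ["docs", "Makefile", "README.md", "src"]
```

A plain `Array.prototype.sort()` would compare code points and put "Makefile" and "README.md" before "docs" and "src".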
Added workspace registration and tool context support. (#12607)
Why - Makes it easier to manage multiple workspaces at runtime and lets tools read/write files in the intended workspace.
Workspace Registration - Added a workspace registry so you can list and fetch workspaces by id with addWorkspace(), getWorkspaceById(), and listWorkspaces(). Agent workspaces are auto-registered when adding agents.
Before
```typescript
const mastra = new Mastra({ workspace: myWorkspace });
// No way to look up workspaces by id or list all workspaces
```
After
```typescript
const mastra = new Mastra({ workspace: myWorkspace });

// Look up by id
const ws = mastra.getWorkspaceById("my-workspace");

// List all registered workspaces
const allWorkspaces = mastra.listWorkspaces();

// Register additional workspaces
mastra.addWorkspace(anotherWorkspace);
```
Tool Workspace Access - Tools can access the workspace through context.workspace during execution, enabling filesystem and sandbox operations.
```typescript
const myTool = createTool({
  id: "file-reader",
  execute: async ({ context }) => {
    const fs = context.workspace?.filesystem;
    const content = await fs?.readFile("config.json");
    return { content };
  }
});
```
Dynamic Workspace Configuration - Workspace can be configured dynamically via agent config functions. Dynamically created workspaces are auto-registered with Mastra, making them available via listWorkspaces().
```typescript
const agent = new Agent({
  workspace: ({ mastra, requestContext }) => {
    // Return workspace dynamically based on context
    const workspaceId = requestContext?.get("workspaceId") || "default";
    return mastra.getWorkspaceById(workspaceId);
  }
});
```
- Changed the `tools` field from `string[]` to `Record<string, { description?: string }>` to allow per-tool description overrides
- When a `description` override is provided for a tool, it is applied at resolution time

Breaking: Removed cloneAgent() from the Agent class. Agent cloning is now handled by the editor package via editor.agent.clone(). (#12904)
If you were calling agent.cloneAgent() directly, use the editor's agent namespace instead:
```typescript
// Before
const result = await agent.cloneAgent({ newId: "my-clone" });

// After
const editor = mastra.getEditor();
const result = await editor.agent.clone(agent, { newId: "my-clone" });
```
Why: The Agent class should not be responsible for storage serialization. The editor package already handles converting between runtime agents and stored configurations, so cloning belongs there.
Added getConfiguredProcessorIds() to the Agent class, which returns raw input/output processor IDs for the agent's configuration.
Update provider registry and model documentation with latest models and providers (717ffab)
Fixed observational memory progress bars resetting to zero after agent responses finish. (#12934)
Fixed issues with stored agents (#12790)
Fixed sub-agent tool approval and suspend events not being surfaced to the parent agent stream. This enables proper suspend/resume workflows and approval handling when nested agents require tool approvals. (#12732)
Related to issue #12552.
Fixed stale agent data in CMS pages by adding removeAgent method to Mastra and updating clearStoredAgentCache to clear both Editor cache and Mastra registry when stored agents are updated or deleted (#12693)
Fixed stored scorers not being registered on the Mastra instance. Scorers created via the editor are now automatically discoverable through mastra.getScorer() and mastra.getScorerById(), matching the existing behavior of stored agents. Previously, stored scorers could only be resolved inline but were invisible to the runtime registry, causing lookups to fail. (#12903)
Fixed generateTitle running on every conversation turn instead of only the first, which caused redundant title generation calls. This happened when lastMessages was disabled or set to false. Titles are now correctly generated only on the first turn. (#12890)
Fixed workflow step errors not being propagated to the configured Mastra logger. The execution engine now properly propagates the Mastra logger through the inheritance chain, and the evented step executor logs errors with structured MastraError context (matching the default engine behavior). Closes #12793 (#12834)
Update memory config and exports: (#12704)
- `SerializedMemoryConfig` now allows `embedder?: EmbeddingModelId | string` for flexibility
- Exported `EMBEDDING_MODELS` and `EmbeddingModelInfo` for use in server endpoints

Fixed a catch-22 where third-party AI SDK providers (like ollama-ai-provider-v2) were rejected by both stream() and streamLegacy() due to unrecognized specificationVersion values. (#12856)
When a model has a specificationVersion that isn't 'v1', 'v2', or 'v3' (e.g., from a third-party provider), two fixes now apply:
- `resolveModelConfig()`: models with unknown spec versions that have `doStream`/`doGenerate` methods are automatically wrapped as AI SDK v5 models, preventing the catch-22 entirely.
- Error messages now reference the model's actual `specificationVersion` instead of creating circular suggestions between `stream()` and `streamLegacy()`.

Fixed routing output so users only see the final answer when routing handles a request directly. Previously, an internal routing explanation appeared before the answer and was duplicated. Fixes #12545. (#12786)
Supporting changes for async buffering in observational memory, including new config options, streaming events, and UI markers. (#12891)
Fixed an issue where processor retry (via abort({ retry: true }) in processOutputStep) would send the rejected assistant response back to the LLM on retry. This confused models and often caused empty text responses. The rejected response is now removed from the message list before the retry iteration. (#12799)
Fixed Moonshot AI (moonshotai and moonshotai-cn) models using the wrong base URL. The Anthropic-compatible endpoint was not being applied, causing API calls to fail with an upstream LLM error. (#12750)
Fixed messages not being persisted to the database when using the stream-legacy endpoint. The thread is now saved to the database immediately when created, preventing a race condition where storage backends like PostgreSQL would reject message inserts because the thread didn't exist yet. Fixes #12566. (#12774)
When calling mastra.setLogger(), memory instances were not being updated with the new logger. This caused memory-related errors to be logged via the default ConsoleLogger instead of the configured logger. (#12905)
Fixed tool input validation failing when LLMs return stringified JSON for array or object parameters. Some models (e.g., GLM4.7) send "[\"file.py\"]" instead of ["file.py"] for array fields, which caused Zod validation to reject the input. The validation pipeline now automatically detects and parses stringified JSON values when the schema expects an array or object. (GitHub #12757) (#12771)
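The detect-and-parse step described above can be sketched as follows (a hypothetical helper for illustration, not Mastra's actual validation pipeline):

```typescript
// Sketch of the coercion idea: when the schema expects an array or object but
// the model returned a string, try JSON.parse before validating. This is an
// illustrative helper, not Mastra's implementation.
function coerceStringifiedJson(value: unknown, expects: "array" | "object"): unknown {
  if (typeof value !== "string") return value;
  try {
    const parsed = JSON.parse(value);
    if (expects === "array" && Array.isArray(parsed)) return parsed;
    if (expects === "object" && parsed !== null && typeof parsed === "object" && !Array.isArray(parsed)) {
      return parsed;
    }
  } catch {
    // Not valid JSON; fall through and let schema validation report the error.
  }
  return value;
}

// A model that sends '["file.py"]' for an array field still validates:
const files = coerceStringifiedJson('["file.py"]', "array");
// → ["file.py"]
```

Note the helper only substitutes the parsed value when it matches the expected shape, so a string field that happens to contain JSON is left untouched.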
Fixed working memory tools being injected when no thread or resource context is provided. Made working memory tool execute scope-aware: thread-scoped requires threadId, resource-scoped requires resourceId (previously both were always required regardless of scope). (#12831)
Fixed a crash when using agent workflows that have no input schema. Input now passes through on first invocation, so workflows run instead of failing. (#12739) (#12785)
Fixes issue where client tools could not be used with agent.network(). Client tools configured in an agent's defaultOptions will now be available during network execution. (#12821)
Fixes #12752
Steps now support an optional metadata property for storing arbitrary key-value data. This metadata is preserved through step serialization and is available in the workflow graph, enabling use cases like UI annotations or custom step categorization. (#12861)
```diff
  import { createStep } from "@mastra/core/workflows";
  import { z } from "zod";

  const step = createStep({
    //...step information
+   metadata: {
+     category: "orders",
+     priority: "high",
+     version: "1.0.0",
+   },
  });
```
Metadata values must be serializable (no functions or circular references).
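A quick way to check the serializability constraint before attaching metadata (illustrative helper, not part of the workflow API):

```typescript
// Illustrative check for the "metadata must be serializable" rule: bare
// functions make JSON.stringify return undefined, and circular references
// make it throw.
function isSerializable(value: unknown): boolean {
  try {
    const json = JSON.stringify(value);
    // JSON.stringify returns undefined for bare functions/undefined values
    return json !== undefined;
  } catch {
    return false; // circular reference (or BigInt, etc.)
  }
}

isSerializable({ category: "orders", priority: "high" }); // → true

const circular: Record<string, unknown> = {};
circular.self = circular;
isSerializable(circular); // → false
```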
Fixed: You can now pass workflows with a requestContextSchema to the Mastra constructor without a type error. Related: #12773. (#12857)
Fixed TypeScript type errors when using .optional().default() in workflow input schemas. Workflows with default values in their schemas no longer produce false type errors when chaining steps with .then(). Fixes #12634 (#12778)
Fix setLogger to update workflow loggers (#12889)
When calling mastra.setLogger(), workflows were not being updated with the new logger. This caused workflow errors to be logged via the default ConsoleLogger instead of the configured logger (e.g., PinoLogger with HttpTransport), resulting in missing error logs in Cloud deployments.
Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, 2e02cd7, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Added support for request context presets in Mastra Studio. You can now define a JSON file with named requestContext presets and pass it via the --request-context-presets CLI flag to both mastra dev and mastra studio commands. A dropdown selector appears in the Studio Playground, allowing you to quickly switch between preset configurations. (#12501)
Usage:
```shell
mastra dev --request-context-presets ./presets.json
mastra studio --request-context-presets ./presets.json
```
Presets file format:
```json
{
  "development": { "userId": "dev-user", "env": "development" },
  "production": { "userId": "prod-user", "env": "production" }
}
```
When presets are loaded, a dropdown appears above the JSON editor on the Request Context page. Selecting a preset populates the editor, and manually editing the JSON automatically switches back to "Custom".
Fixed bundling of workspace packages in monorepo setups. (#12645)
Why this happened:
Earlier workspace resolution logic skipped some workspace paths and virtual entries, so those dependencies were missed.
Fixed TypeScript path alias resolution in workspace packages configured with transpilePackages. The bundler now correctly resolves imports using path aliases (e.g., `@/*` → `./src/*`) in transpiled workspace packages, preventing build failures in monorepo setups. (#12239)
Updated dependencies [717ffab, b31c922, e4b6dab, 6c40593, 5719fa8, 83cda45, 11804ad, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 8109aee, 1445994, fdad759, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 4be93d0, b7fe535, 27e9a34, 1d8cd0a, 99424f6, 44eb452, a211248, 218849f, e4b6dab, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, a211248, d917195, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 5b930ab, 3891795, 4f955b2, 55a4c90]:
Added requestContextSchema and rule-based conditional fields for stored agents. (#12896)
Stored agent fields (tools, model, workflows, agents, memory, scorers, inputProcessors, outputProcessors, defaultOptions) can now be configured as conditional variants with rule groups that evaluate against request context at runtime. All matching variants accumulate — arrays are concatenated and objects are shallow-merged — so agents dynamically compose their configuration based on the incoming request context.
New requestContextSchema field
Stored agents now accept an optional requestContextSchema (JSON Schema) that is converted to a Zod schema and passed to the Agent constructor, enabling request context validation.
Conditional field example
await agentsStore.create({
agent: {
id: "my-agent",
name: "My Agent",
instructions: "You are a helpful assistant",
model: { provider: "openai", name: "gpt-4" },
tools: [
{ value: { "basic-tool": {} } },
{
value: { "premium-tool": {} },
rules: {
operator: "AND",
conditions: [{ field: "tier", operator: "equals", value: "premium" }]
}
}
],
requestContextSchema: {
type: "object",
properties: { tier: { type: "string" } }
}
}
});
Added dynamic instructions for stored agents. Agent instructions can now be composed from reusable prompt blocks with conditional rules and variable interpolation, enabling a prompt-CMS-like editing experience. (#12861)
Instruction blocks can be mixed in an agent's instructions array:
- `text` — static text with `{{variable}}` interpolation
- `prompt_block_ref` — reference to a versioned prompt block stored in the database
- `prompt_block` — inline prompt block with optional conditional rules

Creating a prompt block and using it in a stored agent:
```typescript
// Create a reusable prompt block
const block = await editor.createPromptBlock({
  id: "security-rules",
  name: "Security Rules",
  content: "You must verify the user's identity. The user's role is {{user.role}}.",
  rules: {
    operator: "AND",
    conditions: [{ field: "user.isAuthenticated", operator: "equals", value: true }]
  }
});

// Create a stored agent that references the prompt block
await editor.createStoredAgent({
  id: "support-agent",
  name: "Support Agent",
  instructions: [
    { type: "text", content: "You are a helpful support agent for {{company}}." },
    { type: "prompt_block_ref", id: "security-rules" },
    {
      type: "prompt_block",
      content: "Always be polite.",
      rules: { operator: "AND", conditions: [{ field: "tone", operator: "equals", value: "formal" }] }
    }
  ],
  model: { provider: "openai", name: "gpt-4o" }
});

// At runtime, instructions resolve dynamically based on request context
const agent = await editor.getStoredAgentById("support-agent");
const result = await agent.generate("Help me reset my password", {
  requestContext: new RequestContext([
    ["company", "Acme Corp"],
    ["user.isAuthenticated", true],
    ["user.role", "admin"],
    ["tone", "formal"]
  ])
});
```
Prompt blocks are versioned — updating a block's content takes effect immediately for all agents referencing it, with no cache clearing required.
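The `{{variable}}` interpolation used by instruction blocks can be sketched as a small helper (hypothetical; Mastra's actual resolution also evaluates conditional rules and prompt block references):

```typescript
// Illustrative sketch of {{variable}} interpolation for instruction blocks.
// interpolate() is a hypothetical helper, not Mastra's implementation; it
// leaves unknown placeholders intact rather than throwing.
function interpolate(template: string, context: Map<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, key: string) =>
    context.has(key) ? String(context.get(key)) : match,
  );
}

const context = new Map<string, unknown>([
  ["company", "Acme Corp"],
  ["user.role", "admin"],
]);

const rendered = interpolate("You are a helpful support agent for {{company}}.", context);
// → "You are a helpful support agent for Acme Corp."
```

Dotted keys such as `user.role` are treated as flat context entries here, matching how they appear in the RequestContext examples above.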
- `scorer-definitions` storage domain for storing LLM-as-judge and preset scorer configurations in the database
- `VersionedStorageDomain` generic base class that unifies `AgentsStorage`, `PromptBlocksStorage`, and `ScorerDefinitionsStorage` with shared CRUD methods (`create`, `getById`, `getByIdResolved`, `update`, `delete`, `list`, `listResolved`)
- Preset and custom LLM-judge config with top-level `type`, `instructions`, `scoreRange`, and `presetConfig` fields
- `MastraEditor` reworked to a namespace pattern (`editor.agent.*`, `editor.scorer.*`, `editor.prompt.*`) backed by a `CrudEditorNamespace` base class with built-in caching and an `onCacheEvict` hook
- `rawConfig` support on `MastraBase` and `MastraScorer` via `toRawConfig()`, so hydrated primitives carry their stored configuration
- Registration helpers on the `Mastra` class (`addPromptBlock`, `removePromptBlock`, `addScorer`, `removeScorer`)

Creating a stored scorer (LLM-as-judge):
```typescript
const scorer = await editor.scorer.create({
  id: "my-scorer",
  name: "Response Quality",
  type: "llm-judge",
  instructions: "Evaluate the response for accuracy and helpfulness.",
  model: { provider: "openai", name: "gpt-4o" },
  scoreRange: { min: 0, max: 1 }
});
```
Retrieving and resolving a stored scorer:
```typescript
// Fetch the stored definition from DB
const definition = await editor.scorer.getById("my-scorer");

// Resolve it into a runnable MastraScorer instance
const runnableScorer = editor.scorer.resolve(definition);

// Execute the scorer
const result = await runnableScorer.run({
  input: "What is the capital of France?",
  output: "The capital of France is Paris."
});
```
Editor namespace pattern (before/after):
```typescript
// Before
const agent = await editor.getStoredAgentById("abc");
const prompts = await editor.listPromptBlocks();

// After
const agent = await editor.agent.getById("abc");
const prompts = await editor.prompt.list();
```
Generic storage domain methods (before/after):
```typescript
// Before
const store = storage.getStore("agents");
await store.createAgent({ agent: input });
await store.getAgentById({ id: "abc" });
await store.deleteAgent({ id: "abc" });

// After
const store = storage.getStore("agents");
await store.create({ agent: input });
await store.getById("abc");
await store.delete("abc");
```
- Changed the `tools` field from `string[]` to `Record<string, { description?: string }>` to allow per-tool description overrides
- When a `description` override is provided for a tool, it is applied at resolution time

Breaking: Removed cloneAgent() from the Agent class. Agent cloning is now handled by the editor package via editor.agent.clone(). (#12904)
If you were calling agent.cloneAgent() directly, use the editor's agent namespace instead:
```typescript
// Before
const result = await agent.cloneAgent({ newId: "my-clone" });

// After
const editor = mastra.getEditor();
const result = await editor.agent.clone(agent, { newId: "my-clone" });
```
Why: The Agent class should not be responsible for storage serialization. The editor package already handles converting between runtime agents and stored configurations, so cloning belongs there.
Added getConfiguredProcessorIds() to the Agent class, which returns raw input/output processor IDs for the agent's configuration.
Fixed stale agent data in CMS pages by adding removeAgent method to Mastra and updating clearStoredAgentCache to clear both Editor cache and Mastra registry when stored agents are updated or deleted (#12693)
Fixed stored scorers not being registered on the Mastra instance. Scorers created via the editor are now automatically discoverable through mastra.getScorer() and mastra.getScorerById(), matching the existing behavior of stored agents. Previously, stored scorers could only be resolved inline but were invisible to the runtime registry, causing lookups to fail. (#12903)
Fix memory persistence: (#12704)
- Persist the `embedder` field when creating agents from stored config

Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, 2e02cd7, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Fixed faithfulness scorer failing with 'expected record, received array' error when used with live agents. The preprocess step now returns claims as an object instead of a raw array, matching the expected storage schema. (#12892)
Fixed LLM scorer schema compatibility with Anthropic API by replacing z.number().min(0).max(1) with z.number().refine() for score validation. The min/max constraints were being converted to JSON Schema minimum/maximum properties which some providers don't support. (#12722)
Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Update README (#12817)
Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 2d2decc, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Async buffering for observational memory is now enabled by default. Observations are pre-computed in the background as conversations grow — when the context window fills up, buffered observations activate instantly with no blocking LLM call. This keeps agents responsive during long conversations. (#12939)
Default settings:
- `observation.bufferTokens: 0.2` — buffer every 20% of `messageTokens` (~6k tokens with the default 30k threshold)
- `observation.bufferActivation: 0.8` — on activation, retain 20% of the message window
- `reflection.bufferActivation: 0.5` — start background reflection at 50% of the observation threshold

Disabling async buffering:
Set observation.bufferTokens: false to disable async buffering for both observations and reflections:
```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: "google/gemini-2.5-flash",
      observation: {
        bufferTokens: false
      }
    }
  }
});
```
Model is now required when passing an observational memory config object. Use observationalMemory: true for the default (google/gemini-2.5-flash), or set a model explicitly:
```typescript
// Uses default model (google/gemini-2.5-flash)
observationalMemory: true

// Explicit model
observationalMemory: {
  model: "google/gemini-2.5-flash",
}
```
shareTokenBudget requires bufferTokens: false (temporary limitation). If you use shareTokenBudget: true, you must explicitly disable async buffering:
```typescript
observationalMemory: {
  model: "google/gemini-2.5-flash",
  shareTokenBudget: true,
  observation: { bufferTokens: false },
}
```
New streaming event: data-om-status replaces data-om-progress with a structured status object containing active window usage, buffered observation/reflection state, and projected activation impact.
Buffering markers: New data-om-buffering-start, data-om-buffering-end, and data-om-buffering-failed streaming events for UI feedback during background operations.
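A UI consuming the stream might branch on these event names. Only the event names come from this changelog; the payload fields (such as `usage`) and the `describeOmEvent` helper are assumptions for illustration:

```typescript
// Sketch of handling the observational-memory streaming events in a UI.
// The event names match the changelog; the payload shape is an assumption.
type OmEvent =
  | { type: "data-om-status"; usage?: number }
  | { type: "data-om-buffering-start" }
  | { type: "data-om-buffering-end" }
  | { type: "data-om-buffering-failed" };

function describeOmEvent(event: OmEvent): string {
  switch (event.type) {
    case "data-om-status":
      return `window usage: ${event.usage ?? 0}`;
    case "data-om-buffering-start":
      return "buffering observations in the background...";
    case "data-om-buffering-end":
      return "buffered observations ready";
    case "data-om-buffering-failed":
      return "background buffering failed";
  }
}

const message = describeOmEvent({ type: "data-om-buffering-start" });
// → "buffering observations in the background..."
```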
Fixed observational memory progress bars resetting to zero after agent responses finish. (#12934)
Fixed working memory tools being injected when no thread or resource context is provided. Made working memory tool execute scope-aware: thread-scoped requires threadId, resource-scoped requires resourceId (previously both were always required regardless of scope). (#12831)
Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Added new agent creation page with CMS-style layout featuring Identity, Capabilities, and Revisions tabs. The page includes a prompt editor with Handlebars template support, partials management, and instruction diff viewing for revisions. (#12569)
Added support for request context presets in Mastra Studio. You can now define a JSON file with named requestContext presets and pass it via the --request-context-presets CLI flag to both mastra dev and mastra studio commands. A dropdown selector appears in the Studio Playground, allowing you to quickly switch between preset configurations. (#12501)
Usage:
```shell
mastra dev --request-context-presets ./presets.json
mastra studio --request-context-presets ./presets.json
```
Presets file format:
```json
{
  "development": { "userId": "dev-user", "env": "development" },
  "production": { "userId": "prod-user", "env": "production" }
}
```
When presets are loaded, a dropdown appears above the JSON editor on the Request Context page. Selecting a preset populates the editor, and manually editing the JSON automatically switches back to "Custom".
Added multi-block instruction editing for agents. Instructions can now be split into separate blocks that are reorderable via drag-and-drop, each with optional conditional display rules based on agent variables. Includes a preview dialog to test how blocks compile with different variable values. (#12759)
Update peer dependencies to match core package version bump (1.1.0) (#12508)
dependencies updates: (#11750)
- `@assistant-ui/react@^0.12.3` ↗︎ (from ^0.11.47, in dependencies)
- `@assistant-ui/react-markdown@^0.12.1` ↗︎ (from ^0.11.6, in dependencies)
- `@assistant-ui/react-syntax-highlighter@^0.12.1` ↗︎ (from ^0.11.6, in dependencies)

dependencies updates: (#11829)

- `@codemirror/autocomplete@^6.20.0` ↗︎ (from ^6.18.0, in dependencies)
- `@codemirror/lang-javascript@^6.2.4` ↗︎ (from ^6.2.2, in dependencies)
- `@codemirror/state@^6.5.4` ↗︎ (from ^6.5.2, in dependencies)
- `@codemirror/view@^6.39.12` ↗︎ (from ^6.38.6, in dependencies)

dependencies updates: (#11830)

- `@uiw/codemirror-theme-dracula@^4.25.4` ↗︎ (from ^4.23.14, in dependencies)

dependencies updates: (#11843)

- `@uiw/react-codemirror@^4.25.4` ↗︎ (from ^4.23.14, in dependencies)

dependencies updates: (#12782)

- `@uiw/codemirror-theme-github@^4.25.4` ↗︎ (from ^4.25.3, in dependencies)

Improved workspace file browser with mount point support. Mounted directories now display provider-specific icons (S3, GCS, Azure, Cloudflare, MinIO) and optional description tooltips. File entries include mount metadata for distinguishing storage providers at a glance. (#12851)
Fix memory configuration in agent forms: (#12704)
- Send a `SerializedMemoryConfig` object instead of a string
- Added a `MemoryConfigurator` component for proper memory settings UI
- Added `useVectors` and `useEmbedders` hooks to fetch available options from the API

Update docs links to request context (#12144)
Supporting changes for async buffering in observational memory, including new config options, streaming events, and UI markers. (#12891)
Fixed skill install/update/remove error toasts showing generic "Internal Server Error" instead of the actual error message. Added mount status indicators to the file browser. (#12605)
Supporting work to enable workflow step metadata (#12508)
Made description and instructions required fields in the scorer edit form (#12897)
Fixed chat messages flashing when loading a thread. Messages now update reactively via useEffect instead of lazy state initialization, preventing the brief flash of empty state. (#12863)
Fixed observational memory progress bars resetting to zero after agent responses finish. The messages and observations sidebar bars now retain their values on stream completion, cancellation, and page reload. (#12939)
Fixed skills search dialog to correctly identify individual skills. Selection now highlights only the clicked skill, and the Installed badge only shows for the exact skill that was installed (matching by source repository + name). Gracefully handles older server versions without source info by falling back to name-only matching. (#12678)
Updated dependencies [717ffab, b31c922, e4b6dab, 6c40593, 5719fa8, 83cda45, 11804ad, 11804ad, aa95f95, f8772f5, 047635c, 90f7894, f5501ae, 44573af, 00e3861, 2e02cd7, 8109aee, 8109aee, 7bfbc52, 42a2e13, 1445994, 73b0925, 61f44a2, 37145d2, fdad759, e4569c5, be42958, 7309a85, 5bee8ea, 99424f6, 1445994, 44eb452, a211248, 6c40593, 4493fb9, 8c1135d, dd39e54, b6fad9a, e8f3910, 4129c07, 5b930ab, 4be93d0, 047635c, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Added stored scorer CRUD API and updated editor namespace calls (#12846)
- Added a StoredScorer resource to the client SDK with full CRUD support
- Added editor namespace methods (editor.agent.getById, editor.agent.list, editor.prompt.preview) and generic storage domain methods (store.create, store.getById, store.delete)

Update peer dependencies to match core package version bump (1.1.0) (#12508)
Changed the tools field from string[] to Record<string, { description?: string }> to allow per-tool description overrides. When a description is provided for a tool, the override is applied at resolution time.

Fixed observational memory progress bars resetting to zero after agent responses finish. (#12934)
Fixed issues with stored agents (#12790)
Added requestContextSchema and conditional field validation to stored agent API schemas. The stored agent create, update, and version endpoints now accept conditional variants for dynamically-configurable fields (tools, model, workflows, agents, memory, scorers, inputProcessors, outputProcessors, defaultOptions). (#12896)
Fix stored agents functionality: (#12704)
- Fixed activeVersionId not being updated when creating new versions
- Added a GET /vectors endpoint to list available vector stores
- Added a GET /embedders endpoint to list available embedding models
- Updated handleAutoVersioning to use the active version instead of latest

Added POST /stored/agents/preview-instructions endpoint for resolving instruction blocks against a request context. This enables UI previews of how agent instructions will render with specific variables and rule conditions. Updated Zod schemas to support the new AgentInstructionBlock union type (text, prompt_block_ref, inline prompt_block) in agent version and stored agent responses. (#12776)
Improved workspace lookup performance while keeping backwards compatibility. (#12607)
The workspace handlers now use Mastra's workspace registry (getWorkspaceById()) for faster lookup when available, and fall back to iterating through agents for older @mastra/core versions.
This change is backwards compatible - newer @mastra/server works with both older and newer @mastra/core versions.
Route server errors through Mastra logger instead of console.error (#12888)
Server adapter errors (handler errors, parsing errors, auth errors) now use the configured Mastra logger instead of console.error. This ensures errors are properly formatted as structured logs and sent to configured transports like HttpTransport.
Fixed Swagger UI not including the API prefix (e.g., /api) in request URLs. The OpenAPI spec now includes a servers field with the configured prefix, so Swagger UI correctly generates URLs like http://localhost:4111/api/agents instead of http://localhost:4111/agents. (#12847)
Fixed sort direction parameters being silently ignored in Thread Messages API when using bracket notation query params (e.g., orderBy[field]=createdAt&orderBy[direction]=DESC). The normalizeQueryParams function now reconstructs nested objects from bracket-notation keys, so both JSON format and bracket notation work correctly for orderBy, filter, metadata, and other complex query parameters. (Fixes #12816) (#12832)
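The reconstruction described above can be sketched as follows. This is an illustrative stand-in, not Mastra's actual normalizeQueryParams implementation, and it handles only one level of bracket nesting:

```typescript
// Sketch: rebuild nested objects from bracket-notation query keys, e.g.
// "orderBy[field]=createdAt&orderBy[direction]=DESC"
// -> { orderBy: { field: "createdAt", direction: "DESC" } }
function normalizeBracketParams(
  params: Record<string, string>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params)) {
    const match = key.match(/^([^[\]]+)\[([^[\]]+)\]$/);
    if (!match) {
      out[key] = value; // plain key, copy through unchanged
      continue;
    }
    const [, parent, child] = match;
    const nested = (out[parent] ??= {}) as Record<string, unknown>;
    nested[child] = value; // merge sibling bracket keys under one object
  }
  return out;
}
```

With this shape, `orderBy[field]` and `orderBy[direction]` land on the same `orderBy` object, which is why both JSON format and bracket notation can be treated uniformly downstream.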
Supporting work to enable workflow step metadata (#12508)
Made description and instructions required fields in the scorer edit form (#12897)
Added mount metadata to the workspace file listing response. File entries now include provider, icon, display name, and description for mounted filesystems. (#12851)
Added source repository info to workspace skill listings so clients can distinguish identically named skills installed from different repos. (#12678)
Improved error messages when skill installation fails, now showing the actual error instead of a generic message. (#12605)
Breaking: Removed cloneAgent() from the Agent class. Agent cloning is now handled by the editor package via editor.agent.clone(). (#12904)
If you were calling agent.cloneAgent() directly, use the editor's agent namespace instead:
// Before
const result = await agent.cloneAgent({ newId: "my-clone" });
// After
const editor = mastra.getEditor();
const result = await editor.agent.clone(agent, { newId: "my-clone" });
Why: The Agent class should not be responsible for storage serialization. The editor package already handles converting between runtime agents and stored configurations, so cloning belongs there.
Added getConfiguredProcessorIds() to the Agent class, which returns the raw input/output processor IDs from the agent's configuration.
Updated dependencies [717ffab, b31c922, e4b6dab, 5719fa8, 83cda45, 11804ad, aa95f95, 90f7894, f5501ae, 44573af, 00e3861, 8109aee, 7bfbc52, 1445994, 61f44a2, 37145d2, fdad759, e4569c5, 7309a85, 99424f6, 44eb452, 6c40593, 8c1135d, dd39e54, b6fad9a, 4129c07, 5b930ab, 4be93d0, 047635c, 8c90ff4, ed232d1, 3891795, 4f955b2, 55a4c90]:
Observational Memory is a new Mastra Memory feature which makes small context windows behave like large ones, while retaining long-term memory. It compresses conversations into dense observations logs (5–40x smaller than raw messages). When observations grow too long, they're condensed into reflections. Supports thread and resource scopes. It requires the latest versions of @mastra/core, @mastra/memory, mastra, and @mastra/pg, @mastra/libsql, or @mastra/mongodb.
@mastra/server adds skills.sh proxy endpoints (search/browse/preview/install/update/remove), Studio adds an “Add Skill” dialog for browsing/installing skills, and the CLI wizard can optionally install Mastra skills during create-mastra (with non-interactive --skills support).
Adds ToolSearchProcessor to let agents search and load tools on demand via built-in search_tools and load_tool meta-tools, dramatically reducing context usage for large tool libraries (e.g., MCP/integration-heavy setups).
@mastra/editor: store, version, and resolve agents from a database. Introduces @mastra/editor for persisting complete agent configurations (instructions, models, tools, workflows, nested agents, processors, memory), managing versions/activation, and instantiating dependencies from the Mastra registry with caching and type-safe serialization.
@mastra/elasticsearch: vector document IDs now come from Elasticsearch _id; stored id fields are no longer written (breaking if you relied on source.id).

Update provider registry and model documentation with latest models and providers
Fixes: e6fc281
Fixed processors returning { tools: {}, toolChoice: 'none' } being ignored. Previously, when a processor returned empty tools with an explicit toolChoice: 'none' to prevent tool calls, the toolChoice was discarded and defaulted to 'auto'. This fix preserves the explicit 'none' value, enabling patterns like ensuring a final text response when maxSteps is reached.
Fixes: #12601
Internal changes to enable observational memory
Internal changes to enable @mastra/editor
Fix moonshotai/kimi-k2.5 multi-step tool calling failing with "reasoning_content is missing in assistant tool call message"
Changed moonshotai and moonshotai-cn (China version) providers to use Anthropic-compatible API endpoints instead of OpenAI-compatible
moonshotai: https://api.moonshot.ai/anthropic/v1
moonshotai-cn: https://api.moonshot.cn/anthropic/v1
This properly handles reasoning_content for kimi-k2.5 model
Fixes: #12530
Fixed custom input processors disabling workspace skill tools in generate() and stream(). Custom processors now replace only the processors you configured, while memory and skills remain available. Fixes #12612.
Fixes: #12676
Fixed: Workspace search index names now use underscores so they work with SQL-based vector stores (PgVector, LibSQL).
Added: You can now set a custom index name with searchIndexName.
Why: Some SQL vector stores reject hyphens in index names.
Example
// Before - would fail with PgVector
new Workspace({ id: "my-workspace", vectorStore, embedder });
// After - works with all vector stores
new Workspace({ id: "my-workspace", vectorStore, embedder });
// Or use a custom index name
new Workspace({ vectorStore, embedder, searchIndexName: "my_workspace_vectors" });
Fixes: #12673
Added logger support to Workspace filesystem and sandbox providers. Providers extending MastraFilesystem or MastraSandbox now automatically receive the Mastra logger for consistent logging of file operations and command executions.
Fixes: #12606
Added ToolSearchProcessor for dynamic tool discovery.
Agents can now discover and load tools on demand instead of having all tools available upfront. This reduces context token usage by ~94% when working with large tool libraries.
New API:
import { ToolSearchProcessor } from "@mastra/core/processors";
import { Agent } from "@mastra/core";
// Create a processor with searchable tools
const toolSearch = new ToolSearchProcessor({
tools: {
createIssue: githubTools.createIssue,
sendEmail: emailTools.send
// ... hundreds of tools
},
search: {
topK: 5, // Return top 5 results (default: 5)
minScore: 0.1 // Filter results below this score (default: 0)
}
});
// Attach processor to agent
const agent = new Agent({
name: "my-agent",
inputProcessors: [toolSearch],
tools: {
/* always-available tools */
}
});
How it works:
The processor automatically provides two meta-tools to the agent:
- search_tools - Search for available tools by keyword relevance
- load_tool - Load a specific tool into the conversation

The agent discovers what it needs via search and loads tools on demand. Loaded tools are available immediately and persist within the conversation thread.
Why:
When agents have access to 100+ tools (from MCP servers or integrations), including all tool definitions in the context can consume significant tokens (~1,500 tokens per tool). This pattern reduces context usage by giving agents only the tools they need, when they need them.
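The back-of-the-envelope arithmetic behind the savings claim looks like this. The per-tool token cost and counts below are illustrative, taken from the rough figures above:

```typescript
// Rough context-cost comparison: all tools upfront vs. on-demand loading.
const tokensPerTool = 1_500; // approximate cost of one tool definition
const totalTools = 100; // size of the tool library
const metaTools = 2; // search_tools + load_tool are always present
const loadedTools = 4; // tools the agent actually loads for this task

const upfront = totalTools * tokensPerTool; // every definition in context
const onDemand = (metaTools + loadedTools) * tokensPerTool; // only what's needed
const savings = 1 - onDemand / upfront; // fraction of tokens saved
```

With these numbers, `upfront` is 150,000 tokens, `onDemand` is 9,000, and `savings` works out to 0.94, matching the ~94% figure quoted above.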
Fixes: #12290
Catch up evented workflows on parity with default execution engine
Fixes: #12555
Expose token usage from embedding operations
saveMessages now returns usage: { tokens: number } with aggregated token count from all embeddings
recall now returns usage: { tokens: number } from the vector search query embedding
Updated abstract method signatures in MastraMemory to include optional usage in return types
This allows users to track embedding token usage when using the Memory class.
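A small sketch of how you might accumulate that usage across calls. The `usage: { tokens: number }` shape matches the changelog entry; the tracker class itself is hypothetical:

```typescript
// Hypothetical helper for tracking embedding token spend across memory calls.
type Usage = { tokens: number };

class EmbeddingUsageTracker {
  private total = 0;

  // Record usage returned by saveMessages/recall; tolerate it being absent.
  record(usage?: Usage): void {
    if (usage) this.total += usage.tokens;
  }

  get totalTokens(): number {
    return this.total;
  }
}

const tracker = new EmbeddingUsageTracker();
tracker.record({ tokens: 128 }); // e.g. from a saveMessages result
tracker.record({ tokens: 12 }); // e.g. from recall's query embedding
tracker.record(undefined); // older stores may not report usage
```

In practice you would pass the `usage` field from each saveMessages/recall result into `record()`.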
Fixes: #12556
Fixed a security issue where sensitive observability credentials (such as Langfuse API keys) could be exposed in tool execution error logs. The tracingContext is now properly excluded from logged data.
Fixes: #12669
Fixed issue where some models incorrectly call skill names directly as tools instead of using skill-activate. Added clearer system instructions that explicitly state skills are NOT tools and must be activated via skill-activate with the skill name as the "name" parameter. Fixes #12654.
Fixes: #12677
Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
Improved workspace tool descriptions with clearer usage guidance for read_file, edit_file, and execute_command tools.
Fixes: #12640
Fixed JSON parsing in agent network to handle malformed LLM output. Uses parsePartialJson from AI SDK to recover truncated JSON, missing braces, and unescaped control characters instead of failing immediately. This reduces unnecessary retry round-trips when the routing agent generates slightly malformed JSON for tool/workflow prompts. Fixes #12519.
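The core idea of recovering truncated JSON can be illustrated with a simplified brace-balancing repair. This is not the AI SDK's parsePartialJson (which is more robust), just a sketch of the technique:

```typescript
// Simplified recovery of truncated JSON: close any dangling string, then
// close unbalanced braces/brackets in reverse order of opening.
function repairTruncatedJson(text: string): unknown {
  const stack: string[] = []; // closers we still owe
  let inString = false;
  let escaped = false;
  for (const ch of text) {
    if (escaped) { escaped = false; continue; }
    if (ch === "\\") { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue; // braces inside strings don't count
    if (ch === "{") stack.push("}");
    else if (ch === "[") stack.push("]");
    else if (ch === "}" || ch === "]") stack.pop();
  }
  let candidate = text;
  if (inString) candidate += '"'; // terminate a dangling string
  candidate += stack.reverse().join(""); // close remaining containers
  return JSON.parse(candidate);
}
```

For example, the truncated output `{"tool": "search", "args": {"q": "wea` repairs to a parseable object with `tool: "search"`. A real implementation also has to handle trailing commas and partial literals, which this sketch does not.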
Fixes: #12526
Internal changes to enable observational memory
Internal changes to enable @mastra/editor
Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
Fixed import path for storage constants in Convex server storage to use the correct @mastra/core/storage/constants subpath export
Fixes: #12560
Created @mastra/editor package for managing and resolving stored agent configurations
This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
Key Features:
Usage Example:
import { MastraEditor } from "@mastra/editor";
import { Mastra } from "@mastra/core";
// Initialize editor with Mastra
const mastra = new Mastra({
/* config */
editor: new MastraEditor()
});
// Store an agent configuration
const agentId = await mastra.storage.stores?.agents?.createAgent({
name: "customer-support",
instructions: "Help customers with inquiries",
model: { provider: "openai", name: "gpt-4" },
tools: ["search-kb", "create-ticket"],
workflows: ["escalation-flow"],
memory: { vector: "pinecone-db" }
});
// Retrieve and use the stored agent
const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
const response = await agent?.generate("How do I reset my password?");
// List all stored agents
const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
Storage Improvements:
Fixes: #12631
Added API key, basic, and bearer authentication options for Elasticsearch connections.
Changed: Vector IDs now come from Elasticsearch _id; stored id fields are no longer written (breaking if you relied on source.id).
Why: This aligns with Elasticsearch auth best practices and avoids duplicate IDs in stored documents.
Before
const store = new ElasticSearchVector({ url, id: "my-index" });
After
const store = new ElasticSearchVector({
url,
id: "my-index",
auth: { apiKey: process.env.ELASTICSEARCH_API_KEY! }
});
Fixes: #11298
Added getContext hook to hallucination scorer for dynamic context resolution at runtime. This enables live scoring scenarios where context (like tool results) is only available when the scorer runs. Also added extractToolResults utility function to help extract tool results from scorer output.
Before (static context):
const scorer = createHallucinationScorer({
model: openai("gpt-4o"),
options: {
context: ["The capital of France is Paris.", "France is in Europe."]
}
});
After (dynamic context from tool results):
import { extractToolResults } from "@mastra/evals/scorers";
const scorer = createHallucinationScorer({
model: openai("gpt-4o"),
options: {
getContext: ({ run }) => {
const toolResults = extractToolResults(run.output);
return toolResults.map((t) => JSON.stringify({ tool: t.toolName, result: t.result }));
}
}
});
Fixes: #12639
Fixed missing cross-origin headers on streaming responses when using the Fastify adapter. Headers set by plugins (like @fastify/cors) are now preserved when streaming. See https://github.com/mastra-ai/mastra/issues/12622
Fixes: #12633
Fix long running steps causing inngest workflow to fail
Fixes: #12522
Internal changes to enable observational memory
Internal changes to enable @mastra/editor
Restructure and tidy up the MCP Docs Server. It now focuses more on documentation and uses fewer tools.
Removed tools that sourced content from:
The local docs source is now using the generated llms.txt files from the official documentation, making it more accurate and easier to maintain.
Fixes: #12623
Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations.
Why: Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
Usage:
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
const memory = new Memory({
storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
options: {
observationalMemory: true
}
});
const agent = new Agent({
name: "my-agent",
model: openai("gpt-4o"),
memory
});
What's new:
- observationalMemory: true enables the three-tier memory system (recent messages → observations → reflections)
- observe() API for triggering observation outside the normal agent loop
- Agent.findProcessor() method for looking up processors by ID
- processorStates for persisting processor state across loop iterations
- ProcessorStreamWriter for custom stream events from processors

Fixes: #12599
Expose token usage from embedding operations
saveMessages now returns usage: { tokens: number } with aggregated token count from all embeddings
recall now returns usage: { tokens: number } from the vector search query embedding
Updated abstract method signatures in MastraMemory to include optional usage in return types
This allows users to track embedding token usage when using the Memory class.
Fixes: #12556
Internal changes to enable observational memory
Internal changes to enable @mastra/editor
Increased default serialization limits for AI tracing. The maxStringLength is now 128KB (previously 1KB) and maxDepth is 8 (previously 6). These changes prevent truncation of large LLM prompts and responses during tracing.
To restore the previous behavior, set serializationOptions in your observability config:
serializationOptions: {
maxStringLength: 1024,
maxDepth: 6,
}
Fixes: #12579
Fix Cloudflare Workers deployment failure caused by fileURLToPath being called at module initialization time.
Moved the SNAPSHOTS_DIR calculation from top-level module code into a lazy getter function. In Cloudflare Workers (V8 runtime), import.meta.url is undefined during worker startup, causing the previous code to throw. The snapshot functionality is only used for testing, so deferring initialization has no impact on normal operation.
Fixes: #12540
Internal changes to enable observational memory
Internal changes to enable @mastra/editor
Use EntryCell icon prop for source indicator in agent table
Fixes: #12515
Add Observational Memory UI to the playground. Shows observation/reflection markers inline in the chat thread, and adds an Observational Memory panel to the agent info section with observations, reflection history, token usage, and config. All OM UI is gated behind a context provider that no-ops when OM isn't configured.
Fixes: #12599
Added MultiCombobox component for multi-select scenarios, and JSONSchemaForm compound component for building JSON schema definitions visually. The Combobox component now supports description text on options and error states.
Fixes: #12616
Added ContentBlocks, a reusable drag-and-drop component for building ordered lists of editable content. Also includes AgentCMSBlocks, a ready-to-use implementation for agent system prompts with add, delete, and reorder functionality.
Fixes: #12629
Redesigned toast component with outline circle icons, left-aligned layout, and consistent design system styling
Fixes: #12618
Updated Badge component styling: increased height to 28px, changed to pill shape with rounded-full, added border, and increased padding for better visual appearance.
Fixes: #12511
Fixed custom gateway provider detection in Studio.
What changed:
Custom gateway providers (e.g., acme/custom) are now found when the agent uses a model like acme/custom/gpt-4o.

Why:
Custom gateway providers are stored with a gateway prefix (e.g., acme/custom), but the model router extracts just the provider part (e.g., custom). The lookups were failing because they only did exact matching. Now both backend and frontend use fallback logic to find providers with gateway prefixes.
Fixes: #11815
Fixed variable highlighting in markdown lists - variables like {{name}} now correctly display in orange inside list items.
Fixes: #12653
Added markdown language support to CodeEditor with syntax highlighting for headings, emphasis, links, and code blocks. New language prop accepts 'json' (default) or 'markdown'. Added variable highlighting extension that visually distinguishes {{variableName}} patterns with orange styling when highlightVariables prop is enabled.
Fixes: #12621
Fixed the Tools page incorrectly displaying as empty when tools are defined inline in agent files.
Fixes: #12531
Fixed rule engine bugs: type-safe comparisons for greater_than/less_than operators, array support for contains/not_contains, consistent path parsing for dot notation, and prevented Infinity/NaN strings from being converted to JS special values
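A sketch of the comparison hardening described above. These helpers are illustrative, not the rule engine's actual operators: numeric operators only fire for real finite numbers (so strings like "Infinity"/"NaN" are never coerced into JS special values), and contains accepts arrays as well as strings:

```typescript
// greater_than: strict numeric comparison, rejecting non-numbers and
// non-finite values rather than coercing them.
function greaterThan(left: unknown, right: unknown): boolean {
  return (
    typeof left === "number" &&
    typeof right === "number" &&
    Number.isFinite(left) &&
    Number.isFinite(right) &&
    left > right
  );
}

// contains: works on arrays (membership) and strings (substring).
function contains(haystack: unknown, needle: unknown): boolean {
  if (Array.isArray(haystack)) return haystack.includes(needle);
  if (typeof haystack === "string" && typeof needle === "string") {
    return haystack.includes(needle);
  }
  return false;
}
```

With these guards, a rule comparing a field against the string "Infinity" simply evaluates to false instead of silently matching everything.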
Fixes: #12624
Added Add Skill dialog for browsing and installing skills from skills.sh registry.
New features:
Fixes: #12492
Fixed toast imports to use custom wrapper for consistent styling
Fixes: #12618
Fixed sidebar tooltip styling in collapsed mode by removing hardcoded color overrides
Fixes: #12537
Added CMS block conditional rules component and unified JsonSchema types across the codebase. The new AgentCMSBlockRules component allows content blocks to be displayed conditionally based on rules. Also added jsonSchemaToFields utility for bi-directional schema conversion.
Fixes: #12651
Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
Fixed combobox dropdowns in agent create/edit dialogs to render within the modal container, preventing z-index and scrolling issues.
Fixes: #12510
Added warning toast and banner when installing skills that aren't discovered due to missing .agents/skills path configuration.
Fixes: #12547
Added Standard Schema support to @mastra/schema-compat. This enables interoperability with any schema library that implements the Standard Schema specification.
New exports:
- toStandardSchema() - Convert Zod, JSON Schema, or AI SDK schemas to Standard Schema format
- StandardSchemaWithJSON - Type for schemas implementing both validation and JSON Schema conversion
- InferInput, InferOutput - Utility types for type inference

Example usage:
import { toStandardSchema } from "@mastra/schema-compat/schema";
import { z } from "zod";
// Convert a Zod schema to Standard Schema
const zodSchema = z.object({ name: z.string(), age: z.number() });
const standardSchema = toStandardSchema(zodSchema);
// Use validation
const result = standardSchema["~standard"].validate({ name: "John", age: 30 });
// Get JSON Schema
const jsonSchema = standardSchema["~standard"].jsonSchema.output({ target: "draft-07" });
Fixes: #12527
Internal changes to enable observational memory
Internal changes to enable @mastra/editor
Internal changes for better gateway selection in Studio
Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in UI, and add MastraClientError class with status/body properties for better error handling
Fixes: #12533
Fixed peer dependency checker fix command to suggest the correct package to upgrade:
If peer dep is too old (below range) → suggests upgrading the peer dep (e.g., @mastra/core)
If peer dep is too new (above range) → suggests upgrading the package requiring it (e.g., @mastra/libsql)
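The fix-direction logic can be sketched as follows. For simplicity this compares major versions only (real semver range checks are more involved), and the names are illustrative:

```typescript
// Which package should the user upgrade when a peer dependency check fails?
type PeerCheck = {
  pkg: string; // package declaring the peer dep, e.g. "@mastra/libsql"
  peer: string; // the peer dep itself, e.g. "@mastra/core"
  installedMajor: number; // installed major version of the peer dep
  minMajor: number; // lower bound of the accepted range
  maxMajor: number; // upper bound of the accepted range
};

function suggestUpgrade(check: PeerCheck): string {
  if (check.installedMajor < check.minMajor) return check.peer; // peer too old
  if (check.installedMajor > check.maxMajor) return check.pkg; // peer too new
  return "nothing"; // within range: no fix needed
}
```

The asymmetry is the point of the fix: when the peer dep is too new, upgrading it further cannot help, so the package that declares the (now outdated) range is what needs updating.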
Fixes: #12529
New feature: You can install the Mastra skill during the create-mastra wizard.
The wizard now asks whether you want to install the official Mastra skill. Choose your preferred agent, and your newly created project is set up with it.
For non-interactive setup, use the --skills flag that accepts comma-separated agent names (e.g. --skills claude-code).
Fixes: #12582
Pre-select Claude Code, Codex, OpenCode, and Cursor as default agents when users choose to install Mastra skills during project creation. Codex has been promoted to the popular agents list for better visibility.
Fixes: #12626
Add an AGENTS.md file (and optionally CLAUDE.md) during create-mastra project creation
Fixes: #12658
A new Workspace capability unifies agent-accessible filesystem operations, sandboxed command/code execution, keyword/semantic/hybrid search, and SKILL.md discovery with safety controls (read-only, approval flows, read-before-write guards). The Workspace is exposed end-to-end: core Workspace class (@mastra/core/workspace), server API endpoints (/workspaces/...), and new @mastra/client-js workspace client methods (files, skills, references, search).
Tracing is more actionable: listTraces now returns a status (success|error|running), spans are cleaner (inherit entity metadata, remove internal spans, emit model chunk spans for all streaming chunks), and tool approval requests are visible in traces. Token accounting for Langfuse/PostHog is corrected (cached tokens separated) and default tracing tags are preserved across exporters.
Express/Fastify/Hono/Koa adapters gain mcpOptions (notably serverless: true) to run MCP HTTP transport statelessly in serverless/edge environments without overriding response handling. Adapters and @mastra/server also add explicit requiresAuth per route (defaulting to protected), improved custom-route auth enforcement (including path params), and corrected route prefix replacement/normalization.
Tools, agents, workflows, and steps can now define a requestContextSchema (Zod) to validate required context at runtime and get typed access in execution; RequestContext.all provides convenient access to all validated values. This also flows into Studio UX (Request Context tab/forms) and fixes/improves propagation through agent networks and nested workflow execution for better observability and analytics.
Removed text-embedding-004; use google/gemini-embedding-001 instead.

dependencies updates:
- @isaacs/ttlcache@^2.1.4 (from ^1.4.1, in dependencies)

Fixes: #10184
Fixes: 1cf5d2e
Fixes: #12485
The agent record now only stores metadata fields (id, status, activeVersionId, authorId, metadata, timestamps). All configuration fields (name, instructions, model, tools, etc.) live exclusively in version snapshot rows, enabling full version history and rollback.
Key changes:
- Renamed ownerId to authorId for multi-tenant filtering
- Changed the memory field type from string to Record<string, unknown>
- Added a status field ('draft' | 'published') to agent records

Fixes: #12488
Route providers with native AI SDK packages (e.g., @ai-sdk/anthropic, @ai-sdk/openai) to their correct SDK instead of falling back to openai-compatible. Add cerebras, togetherai, and deepinfra as native SDK providers.

Fixes: #12450
Fixes: #12303
Fixes: #12379
New Features:
Storage:
API:
Client SDK:
Usage Example:
// Server-side: Configure storage
import { Mastra } from '@mastra/core';
import { PgAgentsStorage } from '@mastra/pg';
const mastra = new Mastra({
agents: { agentOne },
storage: {
agents: new PgAgentsStorage({
connectionString: process.env.DATABASE_URL,
}),
},
});
// Client-side: Use the SDK
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:3000' });
// Create a stored agent
const agent = await client.createStoredAgent({
name: 'Customer Support Agent',
description: 'Handles customer inquiries',
model: { provider: 'ANTHROPIC', name: 'claude-sonnet-4-5' },
instructions: 'You are a helpful customer support agent...',
tools: ['search', 'email'],
});
// Create a version snapshot
await client.storedAgent(agent.id).createVersion({
name: 'v1.0 - Initial release',
changeMessage: 'First production version',
});
// Compare versions
const diff = await client.storedAgent(agent.id).compareVersions('version-1', 'version-2');
Why: This feature enables teams to manage agents dynamically without code changes, making it easier to iterate on agent configurations and maintain a complete audit trail of changes.
Fixes: #12038
Previously, ModelRouterLanguageModel (used when specifying models as strings like "mistral/mistral-large-latest" or "openai/gpt-4o") had supportedUrls hardcoded as an empty object. This caused Mastra to download all file URLs and convert them to bytes/base64, even when the model provider supports URLs natively.
This fix:
- Changes supportedUrls to a lazy PromiseLike that resolves the underlying model's supported URL patterns
- Updates llm-execution-step.ts to properly await supportedUrls when preparing messages

Impact:
Note: Users who were relying on Mastra to download files from URLs that model providers cannot directly access (internal URLs, auth-protected URLs) may need to adjust their approach by either using base64-encoded content or ensuring URLs are publicly accessible to the model provider.
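The lazy PromiseLike technique mentioned in this entry can be sketched generically. This is an illustrative shape, not Mastra's actual implementation, and the pattern map is hypothetical:

```typescript
// A PromiseLike whose underlying work only starts on the first await.
function lazyPromiseLike<T>(load: () => Promise<T>): PromiseLike<T> {
  let p: Promise<T> | undefined;
  return {
    then(onFulfilled, onRejected) {
      p ??= load(); // defer the real lookup until someone awaits
      return p.then(onFulfilled, onRejected);
    },
  };
}

// Hypothetical: resolve the wrapped model's supported URL patterns on demand.
const supportedUrls = lazyPromiseLike(async () => ({
  "image/*": [/^https:\/\//],
}));
// later, when preparing messages: const urls = await supportedUrls;
```

Because `then` is only wired up at await time, constructing the model stays cheap, but any consumer that awaits `supportedUrls` still gets the resolved patterns.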
Fixes: #12167
Example:
// Working memory is loaded but agent cannot update it
const response = await agent.generate("What do you know about me?", {
memory: {
thread: "conversation-123",
resource: "user-alice-456",
options: { readOnly: true },
},
});
Fixes: #12471
Fixes: #12344
New Workspace class combines filesystem, sandbox, and search into a single interface that agents can use for file operations, command execution, and content search.
Key features:
Usage:
import { Workspace, LocalFilesystem, LocalSandbox } from '@mastra/core/workspace';
const workspace = new Workspace({
filesystem: new LocalFilesystem({ basePath: './workspace' }),
sandbox: new LocalSandbox({ workingDirectory: './workspace' }),
bm25: true,
});
const agent = new Agent({
workspace,
// Agent automatically receives workspace tools
});
Fixes: #11986
Fixes: #12429
Fixes: #12325
Fixes: #12220
Fixes: #12082
Added a status field to the listTraces response. The status field indicates the trace state: success (completed without error), error (has error), or running (still in progress). This makes it easier to filter and display traces by their current state without having to derive it from the error and endedAt fields.

Fixes: #12213
Fixes: #12477
Before (required wrapper):
const agent = new Agent({
voice: new CompositeVoice({ output: new OpenAIVoice() }),
});
After (direct usage):
const agent = new Agent({
voice: new OpenAIVoice(),
});
Fixes: #12329
Fixes: #12346
Removed the text-embedding-004 embedding model from the model router. Google shut down this model on January 14, 2026. Use google/gemini-embedding-001 instead.

Fixes: #12433
Fixes: #12396
Fixes: #12370
Fixes: #12418
Added RequestContext.all to access all values on the RequestContext object:
const { userId, featureFlags } = requestContext.all;
Added requestContextSchema support to tools, agents, workflows, and steps. Define a Zod schema to validate and type requestContext values at runtime.
Tool example:
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
const myTool = createTool({
id: 'my-tool',
inputSchema: z.object({ query: z.string() }),
requestContextSchema: z.object({
userId: z.string(),
apiKey: z.string(),
}),
execute: async (input, context) => {
// context.requestContext is typed as RequestContext<{ userId: string, apiKey: string }>
const userId = context.requestContext?.get('userId');
return { result: 'success' };
},
});
Agent example:
import { Agent } from '@mastra/core/agent';
import { z } from 'zod';
const agent = new Agent({
name: 'my-agent',
model: openai('gpt-4o'),
requestContextSchema: z.object({
userId: z.string(),
featureFlags: z.object({
debugMode: z.boolean().optional(),
enableSearch: z.boolean().optional(),
}).optional(),
}),
instructions: ({ requestContext }) => {
// Access validated context values with type safety
const { userId, featureFlags } = requestContext.all;
const baseInstructions = `You are a helpful assistant. The current user ID is: ${userId}.`;
if (featureFlags?.debugMode) {
return `${baseInstructions} Debug mode is enabled - provide verbose responses.`;
}
return baseInstructions;
},
tools: ({ requestContext }) => {
const tools: Record<string, any> = {
weatherInfo,
};
// Conditionally add tools based on validated feature flags
const { featureFlags } = requestContext.all;
if (featureFlags?.enableSearch) {
tools['web_search_preview'] = openai.tools.webSearchPreview();
}
return tools;
},
});
Workflow example:
import { createWorkflow } from '@mastra/core/workflows';
import { z } from 'zod';
const workflow = createWorkflow({
id: 'my-workflow',
inputSchema: z.object({ data: z.string() }),
requestContextSchema: z.object({
tenantId: z.string(),
}),
});
const step = createStep({
id: 'my-step',
description: 'My step description',
inputSchema: z.object({ data: z.string() }),
outputSchema: z.object({ result: z.string() }),
requestContextSchema: z.object({
userId: z.string(),
}),
execute: async ({ inputData, requestContext }) => {
const userId = requestContext?.get('userId');
return {
result: 'some result here',
};
},
});
workflow.then(step).commit();
When requestContextSchema is defined, validation runs automatically and throws an error if required context values are missing or invalid.
Fixes: #12259
User-configured processors are now correctly passed to the routing agent, while memory-derived processors (which could interfere with routing logic) are excluded.
Fixes: #12074
Fixes: #12425
Fixes: #10184
Fixes: #12373
Fixes: #12320
Fixes: #12338
Added typing for requestContext in Hono context Variables. Previously, only mastra was typed, causing TypeScript errors when accessing c.get('requestContext') even though the runtime correctly provided this context. Fixes: #12419
Example Before:
const stream = await agent.network(task);
After:
const controller = new AbortController();
const stream = await agent.network(task, {
abortSignal: controller.signal,
onAbort: ({ primitiveType, primitiveId }) => {
logger.info(`Aborted ${primitiveType}:${primitiveId}`);
},
});
controller.abort();
Related issue: #12282
Fixes: #12351
Fixes: #12122
Fixes: #12120
Fixed filename case conversion: files like csv-to-questions-workflow.ts were incorrectly converted to all-lowercase (csvtoquestionsworkflow.ts) instead of proper camelCase (csvToQuestionsWorkflow.ts). PascalCase and acronym-boundary conversions are also fixed. Fixes: #12436
Fixes: #12347
Fixed TypeScript build errors when using toAISdkStream() with for await...of loops. The function now explicitly imports ReadableStream and TransformStream from 'node:stream/web', ensuring the Node.js types (which include Symbol.asyncIterator support) are used instead of global types that may not have async iterator support in all TypeScript configurations.
This resolves issue #11884 where users encountered the error: "Type 'ReadableStream<InferUIMessageChunk<UIMessage>>' must have a 'Symbol.asyncIterator' method that returns an async iterator."
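The distinction matters because the global ReadableStream type can lack Symbol.asyncIterator in some TypeScript configurations, while the node:stream/web type declares it. A minimal standalone illustration of the pattern (this does not use toAISdkStream itself):

```typescript
// Iterating a ReadableStream with for await...of using the node:stream/web
// types, which declare Symbol.asyncIterator. Mirrors the fix above without
// depending on @mastra/ai-sdk.
import { ReadableStream } from 'node:stream/web';

async function collect(): Promise<string[]> {
  const stream = new ReadableStream<string>({
    start(controller) {
      controller.enqueue('hello');
      controller.enqueue('world');
      controller.close();
    },
  });

  const chunks: string[] = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  return chunks;
}

collect().then(chunks => console.log(chunks.join(' '))); // hello world
```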
Fixes: #12159
Updated dependency @arizeai/openinference-genai@0.1.5 ↗︎ (from 0.1.0, in dependencies). Fixes: #12146
Updated dependency @arizeai/openinference-semantic-conventions@^2.1.7 ↗︎ (from ^2.1.2, in dependencies). Fixes: #12147
Fixes: #12423
New Features:
Storage:
API:
Client SDK:
Usage Example:
// Server-side: Configure storage
import { Mastra } from '@mastra/core';
import { PgAgentsStorage } from '@mastra/pg';
const mastra = new Mastra({
agents: { agentOne },
storage: {
agents: new PgAgentsStorage({
connectionString: process.env.DATABASE_URL,
}),
},
});
// Client-side: Use the SDK
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:3000' });
// Create a stored agent
const agent = await client.createStoredAgent({
name: 'Customer Support Agent',
description: 'Handles customer inquiries',
model: { provider: 'ANTHROPIC', name: 'claude-sonnet-4-5' },
instructions: 'You are a helpful customer support agent...',
tools: ['search', 'email'],
});
// Create a version snapshot
await client.storedAgent(agent.id).createVersion({
name: 'v1.0 - Initial release',
changeMessage: 'First production version',
});
// Compare versions
const diff = await client.storedAgent(agent.id).compareVersions('version-1', 'version-2');
Why: This feature enables teams to manage agents dynamically without code changes, making it easier to iterate on agent configurations and maintain a complete audit trail of changes.
Fixes: #12038
The agent record now only stores metadata fields (id, status, activeVersionId, authorId, metadata, timestamps). All configuration fields (name, instructions, model, tools, etc.) live exclusively in version snapshot rows, enabling full version history and rollback.
Key changes:
Renamed ownerId to authorId for multi-tenant filtering
Changed the memory field type from string to Record<string, unknown>
Added a status field ('draft' | 'published') to agent records
Fixes: #12488
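The split described above can be pictured with two illustrative shapes; the field names follow this entry, but the actual Mastra storage schemas may differ:

```typescript
// Illustrative shapes for the record/version split: the agent record keeps
// only metadata, while every configuration field lives in a version snapshot.
interface StoredAgentRecord {
  id: string;
  status: 'draft' | 'published';
  activeVersionId: string | null;
  authorId: string; // renamed from ownerId
  metadata: Record<string, unknown>;
  createdAt: Date;
  updatedAt: Date;
}

interface StoredAgentVersion {
  id: string;
  agentId: string;
  name: string;
  instructions: string;
  model: Record<string, unknown>;
  tools: string[];
  memory: Record<string, unknown>; // was string
}

const record: StoredAgentRecord = {
  id: 'agent_1',
  status: 'draft',
  activeVersionId: null,
  authorId: 'user_1',
  metadata: {},
  createdAt: new Date(),
  updatedAt: new Date(),
};
console.log(record.status); // draft
```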
New methods on MastraClient:
listWorkspaces() - List all available workspaces
getWorkspace(workspaceId) - Get a workspace client for a specific workspace
workspace.info() - Get workspace info and capabilities
workspace.listFiles() / readFile() / writeFile() / delete() / mkdir() / stat() - Filesystem operations
workspace.listSkills() / getSkill(name).details() / .listReferences() / .getReference() - Skill management
workspace.search() - Search indexed content
Usage:
import { MastraClient } from '@mastra/client-js';
const client = new MastraClient({ baseUrl: 'http://localhost:4111' });
// List workspaces and get the first one
const { workspaces } = await client.listWorkspaces();
const workspace = client.getWorkspace(workspaces[0].id);
// Read a file
const { content } = await workspace.readFile('/docs/guide.md');
// List skills
const { skills } = await workspace.listSkills();
// Get skill details
const skill = workspace.getSkill('my-skill');
const details = await skill.details();
// Search content
const { results } = await workspace.search({ query: 'authentication', mode: 'hybrid' });
Fixes: #11986
Added an apiPrefix option to MastraClient for connecting to servers with custom API route prefixes.
Before: The client always used the /api prefix for all endpoints.
After: You can now specify a custom prefix when deploying Mastra behind non-default paths:
const client = new MastraClient({
baseUrl: 'http://localhost:3000',
apiPrefix: '/mastra', // Calls /mastra/agents, /mastra/workflows, etc.
});
The default remains /api for backward compatibility. See #12261 for more details.
Fixes: #12295
Fixes: #12504
Fixed @mastra/client-js so stored agent edit flows work correctly. Fixed stored agent schema migration in @mastra/libsql and @mastra/pg to drop and recreate the versions table when the old snapshot-based schema is detected, clean up stale draft records from partial create failures, and remove lingering legacy tables. Restores create and edit flows for stored agents. Fixes: #12504
@clack/prompts@1.0.0-alpha.9 ↗︎ (from 1.0.0-alpha.6, in dependencies)Fixes: #11584
Added a workflow-get-init-data codemod that transforms getInitData() calls to getInitData<any>(). This codemod helps migrate code after the getInitData return type changed from any to unknown. Adding the explicit <any> type parameter restores the previous behavior while maintaining type safety.
Usage:
npx @mastra/codemod@latest v1/workflow-get-init-data .
Before:
createStep({
execute: async ({ getInitData }) => {
const initData = getInitData();
if (initData.key === 'value') {}
},
});
After:
createStep({
execute: async ({ getInitData }) => {
const initData = getInitData<any>();
if (initData.key === 'value') {}
},
});
Fixes: #12212
Fixed an issue where the mastra_workflow_snapshots index by_record_id referenced a missing id field. The id field is now explicitly defined in the Convex workflow snapshots table schema. This enables successful npx convex dev deployments that were previously failing with SchemaDefinitionError. Fixes: #12319
Updated dependency @babel/core@^7.28.6 ↗︎ (from ^7.28.5, in dependencies). Fixes: #12191
Updated dependency rollup@~4.55.1 ↗︎ (from ~4.50.2, in dependencies). Fixes: #9737
Fixes: #12476
Fixes: #11786
What's fixed:
Dependencies (e.g. hono) that aren't in your project are now resolved correctly.
Why this happened:
Previously, dependency versions were resolved at bundle time without the correct project context, causing the bundler to fall back to latest instead of using the actual installed version.
Fixes: #12125
Fixes: #12420
Fixes: #12505
Fixes: #12153
Fixes: #12339
Added mcpOptions to server adapters for serverless MCP support.
Why: MCP HTTP transport uses session management by default, which requires persistent state across requests. This doesn't work in serverless environments like Cloudflare Workers or Vercel Edge where each request runs in isolation. The new mcpOptions parameter lets you enable stateless mode without overriding the entire sendResponse() method.
Before:
const server = new MastraServer({
app,
mastra,
});
// No way to pass serverless option to MCP HTTP transport
After:
const server = new MastraServer({
app,
mastra,
mcpOptions: {
serverless: true,
},
});
// MCP HTTP transport now runs in stateless mode
Fixes: #12324
Fixes: #12221
Updated dependency @google-cloud/pubsub@^5.2.2 ↗︎ (from ^5.2.0, in dependencies). Fixes: #12265
Fixes: #12332
Fixed usage reporting: the input token count now correctly excludes cached tokens, matching each platform's expected format for accurate cost calculation. Cache read and cache write tokens are now properly reported as separate fields (cache_read_input_tokens, cache_creation_input_tokens) rather than being included in the base input count. Added defensive clamping to ensure input tokens never go negative if cache values exceed the total. Fixes: #12465
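The arithmetic this entry describes can be sketched as a pure function; field names follow the entry above, but the exact implementation in Mastra may differ:

```typescript
// Sketch of the usage normalization: cached tokens are reported separately
// and subtracted from the base input count, with clamping so the result
// never goes negative when cache values exceed the total.
interface NormalizedUsage {
  input_tokens: number;
  cache_read_input_tokens: number;
  cache_creation_input_tokens: number;
}

function normalizeUsage(totalInput: number, cacheRead: number, cacheWrite: number): NormalizedUsage {
  return {
    // Defensive clamp: floor at zero if cache values exceed the total
    input_tokens: Math.max(0, totalInput - cacheRead - cacheWrite),
    cache_read_input_tokens: cacheRead,
    cache_creation_input_tokens: cacheWrite,
  };
}

console.log(normalizeUsage(1000, 600, 100).input_tokens); // 300
console.log(normalizeUsage(500, 600, 0).input_tokens);    // 0 (clamped)
```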
Example:
// Working memory is loaded but agent cannot update it
const response = await agent.generate("What do you know about me?", {
memory: {
thread: "conversation-123",
resource: "user-alice-456",
options: { readOnly: true },
},
});
Fixes: #12471
Fixes: #12220
Added the ability to see tool approval requests in traces for debugging purposes. When a tool requires approval, a MODEL_CHUNK span named chunk: 'tool-call-approval' is now created containing the approval request payload.
This enables users to debug their system by seeing approval requests in traces, making it easier to understand the flow of tool approvals and their payloads.
Fixes: #12171
Fixes: #12357
Fixes: #12368
Fixes: #12407
Fixes: #12458
Fixes: #12377
Fixes: #11784
Fixes: #12367
Fixes: #12456
Fixes: #12229
Fixes: #12487
Fixes: #12409
Fixes: #12360
Fixes: #12349
Fixes: #12186
Added a useTableKeyboardNavigation hook and an isActive prop on Row. Fixes: #12352
Fixes: #12138
Fixes: #12350
Fixes: #12142
Fixes: #12491
Fixes: #12256
Fixes: #12151
Fixes: #12406
Fixes: #12358
Fixes: #12497
New components:
FileBrowser - Browse and manage workspace files with breadcrumb navigation
FileViewer - View file contents with syntax highlighting
SkillsTable - List and search available skills
SkillDetail - View skill details, instructions, and references
SearchPanel - Search workspace content with BM25/vector/hybrid modes
ReferenceViewerDialog - View skill reference file contents
Usage:
import { FileBrowser, FileViewer, SkillsTable } from '@mastra/playground-ui';
// File browser with navigation
<FileBrowser
entries={files}
currentPath="/docs"
onNavigate={setPath}
onFileSelect={handleFileSelect}
/>
// Skills table with search
<SkillsTable
skills={skills}
onSkillSelect={handleSkillSelect}
/>
Fixes: #11986
Fixes: #12410
Added support for custom API route prefixes (e.g. /mastra instead of the default /api). See #12261 for more details. Fixes: #12295
Extended RecursiveCharacterTransformer to support all languages defined in the Language enum. Previously, only 6 languages were supported (CPP, C, TS, MARKDOWN, LATEX, PHP), causing runtime errors for other defined languages.
Newly supported languages:
Each language has been configured with appropriate separators based on its syntax patterns (modules, classes, functions, control structures) to enable semantic code chunking.
Before:
import { RecursiveCharacterTransformer, Language } from '@mastra/rag';
// These would all throw "Language X is not supported!" errors
const goTransformer = RecursiveCharacterTransformer.fromLanguage(Language.GO);
const pythonTransformer = RecursiveCharacterTransformer.fromLanguage(Language.PYTHON);
const rustTransformer = RecursiveCharacterTransformer.fromLanguage(Language.RUST);
After:
import { RecursiveCharacterTransformer, Language } from '@mastra/rag';
// All languages now work seamlessly
const goTransformer = RecursiveCharacterTransformer.fromLanguage(Language.GO);
const goChunks = goTransformer.transform(goCodeDocument);
const pythonTransformer = RecursiveCharacterTransformer.fromLanguage(Language.PYTHON);
const pythonChunks = pythonTransformer.transform(pythonCodeDocument);
const rustTransformer = RecursiveCharacterTransformer.fromLanguage(Language.RUST);
const rustChunks = rustTransformer.transform(rustCodeDocument);
// All languages in the Language enum are now fully supported
Fixes: #12154
Previously, the Language enum defined PHP, but it was not supported in the getSeparatorsForLanguage method. This caused runtime errors when trying to use PHP for code chunking.
This change adds proper separator definitions for PHP, ensuring that PHP defined in the Language enum is now fully supported. PHP has been configured with appropriate separators based on its syntax and common programming patterns (classes, functions, control structures, etc.).
Before:
import { RecursiveCharacterTransformer, Language } from '@mastra/rag';
const transformer = RecursiveCharacterTransformer.fromLanguage(Language.PHP);
const chunks = transformer.transform(phpCodeDocument);
// Throws: "Language PHP is not supported!"
After:
import { RecursiveCharacterTransformer, Language } from '@mastra/rag';
const transformer = RecursiveCharacterTransformer.fromLanguage(Language.PHP);
const chunks = transformer.transform(phpCodeDocument);
// Successfully chunks PHP code at namespace, class, function boundaries
Fixes the issue where using Language.PHP would throw "Language PHP is not supported!" error.
Fixes: #12124
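Separator-based chunking can be pictured with a much-simplified splitter. The PHP separators below are a plausible subset (namespace, class, function boundaries) for illustration only; they are not the exact list @mastra/rag uses:

```typescript
// Greatly simplified illustration of separator-based code chunking.
// Real RecursiveCharacterTransformer recurses over the separator list and
// respects chunk sizes; this shows only the boundary-splitting idea.
const phpSeparators = ['\nnamespace ', '\nclass ', '\nfunction ', '\n\n', '\n', ' '];

function splitOnFirstSeparator(text: string, separators: string[]): string[] {
  for (const sep of separators) {
    if (text.includes(sep)) {
      // Keep the separator attached to the following chunk so boundaries survive
      return text.split(sep).map((piece, i) => (i === 0 ? piece : sep + piece));
    }
  }
  return [text];
}

const php = `<?php\nnamespace App;\nclass Greeter {}\nfunction greet() {}`;
const chunks = splitOnFirstSeparator(php, phpSeparators);
console.log(chunks.length); // 2: split at the namespace boundary
```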
Added an apiPrefix prop to MastraClientProvider for connecting to servers with custom API route prefixes (defaults to /api).
Default usage (no change required):
<MastraClientProvider baseUrl="http://localhost:3000">
{children}
</MastraClientProvider>
Custom prefix usage:
<MastraClientProvider baseUrl="http://localhost:3000" apiPrefix="/mastra">
{children}
</MastraClientProvider>
See #12261 for more details.
Fixes: #12295
Fixes: #12481
Fixes: #12428
Fixes: #12446
Fixes: #12332
Fixes issue where custom API routes with path parameters (e.g., /users/:id) were incorrectly requiring authentication even when requiresAuth was set to false. The authentication middleware now uses pattern matching to correctly match dynamic routes against registered patterns.
Changes:
Added isCustomRoutePublic() to iterate through routes and match path patterns
Added pathMatchesPattern() to support path parameters (:id), optional parameters (:id?), and wildcards (*)
Fixes: #12143
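The matching described above can be sketched as a small segment-by-segment comparison. This is illustrative only, not the Mastra implementation, though it handles the same cases (:id, :id?, and trailing *):

```typescript
// Sketch of route pattern matching: compare a concrete request path against
// a registered pattern supporting :param, optional :param?, and wildcard *.
function pathMatchesPattern(path: string, pattern: string): boolean {
  const pathParts = path.split('/').filter(Boolean);
  const patternParts = pattern.split('/').filter(Boolean);

  let i = 0;
  for (const part of patternParts) {
    if (part === '*') return true; // wildcard matches everything that follows
    const optional = part.startsWith(':') && part.endsWith('?');
    if (i >= pathParts.length) {
      if (!optional) return false; // remaining segments must all be optional
      continue;
    }
    if (part.startsWith(':')) { i++; continue; } // parameter matches any segment
    if (part !== pathParts[i]) return false;     // literal segment must match
    i++;
  }
  return i === pathParts.length; // no unmatched path segments left over
}

console.log(pathMatchesPattern('/users/123', '/users/:id'));   // true
console.log(pathMatchesPattern('/users', '/users/:id?'));      // true
console.log(pathMatchesPattern('/files/a/b.txt', '/files/*')); // true
console.log(pathMatchesPattern('/orders/1', '/users/:id'));    // false
```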
Fixes: #12068
Added mcpOptions to server adapters for serverless MCP support.
Why: MCP HTTP transport uses session management by default, which requires persistent state across requests. This doesn't work in serverless environments like Cloudflare Workers or Vercel Edge where each request runs in isolation. The new mcpOptions parameter lets you enable stateless mode without overriding the entire sendResponse() method.
Before:
const server = new MastraServer({
app,
mastra,
});
// No way to pass serverless option to MCP HTTP transport
After:
const server = new MastraServer({
app,
mastra,
mcpOptions: {
serverless: true,
},
});
// MCP HTTP transport now runs in stateless mode
Fixes: #12324
Fixes: #12251
Custom routes registered via registerApiRoute() now appear in the OpenAPI spec alongside built-in Mastra routes.
Fixes: #11786
New endpoints:
GET /workspaces - List all workspaces (from Mastra instance and agents)
GET /workspaces/:workspaceId - Get workspace info and capabilities
GET/POST/DELETE /workspaces/:workspaceId/fs/* - Filesystem operations (read, write, list, delete, mkdir, stat)
GET /workspaces/:workspaceId/skills - List available skills
GET /workspaces/:workspaceId/skills/:skillName - Get skill details
GET /workspaces/:workspaceId/skills/:skillName/references - List skill references
GET /workspaces/:workspaceId/search - Search indexed content
Usage:
// List workspaces
const { workspaces } = await fetch('/api/workspaces').then(r => r.json());
const workspaceId = workspaces[0].id;
// Read a file
const response = await fetch(`/api/workspaces/${workspaceId}/fs/read?path=/docs/guide.md`);
const { content } = await response.json();
// List skills
const skills = await fetch(`/api/workspaces/${workspaceId}/skills`).then(r => r.json());
// Search content
const results = await fetch(`/api/workspaces/${workspaceId}/search?query=authentication&mode=hybrid`)
.then(r => r.json());
Fixes: #11986
Fixes: #12295
Fixes: #12221
Updated dependency ws@^8.19.0 ↗︎ (from ^8.18.3, in dependencies)
Fixes: #11645
Fixes: #11784
Fixes: #12108
Fixes: #12495
Added peer dependency version checks to mastra dev and mastra build. The CLI now checks if installed @mastra/* packages satisfy each other's peer dependency requirements and displays a warning with upgrade instructions when mismatches are detected.
Fixes: #12469
Improved peer dependency version mismatch warnings in the CLI:
When the dev server crashes with an error, a hint is now shown suggesting that updating mismatched packages may fix the issue
The update command now uses the correct package manager (pnpm/npm/yarn) detected from lockfiles
The update command uses add @package@latest instead of update to ensure major version updates are applied
Added MASTRA_SKIP_PEERDEP_CHECK=1 environment variable to skip the peer dependency check
Fixes: #12479
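At its core, a peer dependency check like the one above is a semver-range test against each installed version. A deliberately simplified, caret-only sketch (the real CLI logic is more thorough, handling full semver ranges, prerelease tags, and workspace protocols):

```typescript
// Sketch: does an installed version satisfy a caret range like ^8.18.3?
// 0.x caret semantics and prerelease tags are deliberately ignored here.
function satisfiesCaret(installed: string, range: string): boolean {
  if (!range.startsWith('^')) return installed === range; // exact pin
  const want = range.slice(1).split('.').map(Number);
  const have = installed.split('.').map(Number);
  if (have[0] !== want[0]) return false; // caret pins the major version
  if (have[1] !== want[1]) return have[1] > want[1]; // newer minor is fine
  return have[2] >= want[2]; // same minor needs an equal or newer patch
}
```

A check like this, run over every @mastra/* package's declared peers, is enough to flag a package whose installed sibling falls outside the range it expects.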
Fixes: #11784
Fixes: #12126
Added --server-api-prefix option to the mastra studio command for connecting to servers with custom API route prefixes.
# Connect to server using custom prefix
mastra studio --server-port 3000 --server-api-prefix /mastra
Fixes: #12295
Add structured output support to agent.network() method. Users can now pass a structuredOutput option with a Zod schema to get typed results from network execution. (#11701)
The stream exposes .object (Promise) and .objectStream (ReadableStream) getters, and emits network-object and network-object-result chunk types. The structured output is generated after task completion using the provided schema.
const stream = await agent.network('Research AI trends', {
structuredOutput: {
schema: z.object({
summary: z.string(),
recommendations: z.array(z.string()),
}),
},
});
const result = await stream.object;
// result is typed: { summary: string; recommendations: string[] }
feat(observability): add zero-config environment variable support for all exporters (#11686)
All observability exporters now support zero-config setup via environment variables. Set the appropriate environment variables and instantiate exporters with no configuration:
Langfuse: LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_BASE_URL
Braintrust: BRAINTRUST_API_KEY, BRAINTRUST_ENDPOINT
PostHog: POSTHOG_API_KEY, POSTHOG_HOST
Arize/Phoenix: ARIZE_SPACE_ID, ARIZE_API_KEY, ARIZE_PROJECT_NAME, PHOENIX_ENDPOINT, PHOENIX_API_KEY, PHOENIX_PROJECT_NAME
Dash0: DASH0_API_KEY, DASH0_ENDPOINT, DASH0_DATASET
SigNoz: SIGNOZ_API_KEY, SIGNOZ_REGION, SIGNOZ_ENDPOINT
New Relic: NEW_RELIC_LICENSE_KEY, NEW_RELIC_ENDPOINT
Traceloop: TRACELOOP_API_KEY, TRACELOOP_DESTINATION_ID, TRACELOOP_ENDPOINT
Laminar: LMNR_PROJECT_API_KEY, LAMINAR_ENDPOINT, LAMINAR_TEAM_ID
Example usage:
// Zero-config - reads from environment variables
new LangfuseExporter();
new BraintrustExporter();
new PosthogExporter();
new ArizeExporter();
new OtelExporter({ provider: { signoz: {} } });
Explicit configuration still works and takes precedence over environment variables.
Add additional context to workflow onFinish and onError callbacks (#11705)
The onFinish and onError lifecycle callbacks now receive additional properties:
runId - The unique identifier for the workflow run
workflowId - The workflow's identifier
resourceId - Optional resource identifier (if provided when creating the run)
getInitData() - Function that returns the initial input data passed to the workflow
mastra - The Mastra instance (if workflow is registered with Mastra)
requestContext - Request-scoped context data
logger - The workflow's logger instance
state - The workflow's current state object
const workflow = createWorkflow({
id: 'order-processing',
inputSchema: z.object({ orderId: z.string() }),
outputSchema: z.object({ status: z.string() }),
options: {
onFinish: async ({ runId, workflowId, getInitData, logger, state, mastra }) => {
const inputData = getInitData();
logger.info(`Workflow ${workflowId} run ${runId} completed`, {
orderId: inputData.orderId,
finalState: state,
});
// Access other Mastra components if needed
const agent = mastra?.getAgent('notification-agent');
},
onError: async ({ runId, workflowId, error, logger, requestContext }) => {
logger.error(`Workflow ${workflowId} run ${runId} failed: ${error?.message}`);
// Access request context for additional debugging
const userId = requestContext.get('userId');
},
},
});
Make initialState optional in studio (#11744)
Refactor: consolidate duplicate applyMessages helpers in workflow.ts (#11688)
Added a defaultSource parameter to ProcessorRunner.applyMessagesToMessageList to support both 'input' and 'response' default sources
Removed duplicate applyMessages helper functions from workflow.ts (in input, outputResult, and outputStep phases)
Replaced them with the ProcessorRunner.applyMessagesToMessageList static method
This is an internal refactoring with no changes to external behavior.
Cache processor instances in MastraMemory to preserve embedding cache across calls (#11720) Fixed issue where getInputProcessors() and getOutputProcessors() created new processor instances on each call, causing the SemanticRecall embedding cache to be discarded. Processor instances (SemanticRecall, WorkingMemory, MessageHistory) are now cached and reused, reducing unnecessary embedding API calls and improving latency. Also added cache invalidation when setStorage(), setVector(), or setEmbedder() are called to ensure processors use updated dependencies. Fixes #11455
Added a Datadog LLM Observability exporter for Mastra applications. (#11305)
This exporter integrates with Datadog's LLM Observability product to provide comprehensive tracing and monitoring for AI/LLM applications built with Mastra.
Required settings:
mlApp: Groups traces under an ML application name (required)
apiKey: Datadog API key (required for agentless mode)
Optional settings:
site: Datadog site (datadoghq.com, datadoghq.eu, us3.datadoghq.com)
agentless: true for direct HTTPS (default), false for local agent
service, env: APM tagging
integrationsEnabled: Enable dd-trace auto-instrumentation (default: false)
import { Mastra } from '@mastra/core';
import { Observability } from '@mastra/observability';
import { DatadogExporter } from '@mastra/datadog';
const mastra = new Mastra({
observability: new Observability({
configs: {
datadog: {
serviceName: 'my-service',
exporters: [
new DatadogExporter({
mlApp: 'my-llm-app',
apiKey: process.env.DD_API_KEY,
}),
],
},
},
}),
});
This is an initial experimental beta release. Breaking changes may occur in future versions as the API evolves.
Added createServe factory function to support multiple web framework adapters for Inngest workflows. (#11667)
Previously, the serve function only supported Hono. Now you can use any framework adapter provided by the Inngest package (Express, Fastify, Koa, Next.js, and more).
Before (Hono only)
import { serve } from '@mastra/inngest';
// Only worked with Hono
app.all('/api/inngest', c => serve({ mastra, inngest })(c));
After (any framework)
import { createServe } from '@mastra/inngest';
import { serve as expressAdapter } from 'inngest/express';
import { serve as fastifyAdapter } from 'inngest/fastify';
// Express
app.use('/api/inngest', createServe(expressAdapter)({ mastra, inngest }));
// Fastify
fastify.route({
method: ['GET', 'POST', 'PUT'],
url: '/api/inngest',
handler: createServe(fastifyAdapter)({ mastra, inngest }),
});
The existing serve export remains available for backward compatibility with Hono.
Fixes #10053
feat(mcp-docs-server): add embedded docs MCP tools (#11532)
Adds 5 new MCP tools to read embedded documentation from installed @mastra/* packages:
These tools enable AI coding agents to understand Mastra packages by reading documentation directly from node_modules.
Replace deprecated client.getTraces with a client.listTraces (#11711)
Removed the ai package peer dependency to enable compatibility with AI SDK v6. The rag package doesn't directly use the ai package, so this peer dependency was unnecessarily constraining version compatibility. (#11724)
Fix oneOf schema conversion generating invalid JavaScript (#11626)
The upstream json-schema-to-zod library generates TypeScript syntax (reduce<z.ZodError[]>) when converting oneOf schemas. This TypeScript generic annotation fails when evaluated at runtime with Function(), causing schema resolution to fail.
The fix removes TypeScript generic syntax from the generated output, producing valid JavaScript that can be evaluated at runtime. This resolves issues where MCP tools with oneOf in their output schemas would fail validation.
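The idea of the fix can be sketched with a regex that strips the generic annotation before evaluation. This is an illustrative simplification, not the actual library code, which may handle more cases:

```typescript
// Sketch: remove TypeScript generic annotations such as `.reduce<z.ZodError[]>(`
// so the generated source is valid JavaScript for runtime evaluation with Function().
const stripTsGenerics = (code: string): string =>
  code.replace(/\.reduce<[^>]*>\(/g, '.reduce(');

const generated = 'schemas.reduce<z.ZodError[]>((errors, s) => errors, [])';
const runnable = stripTsGenerics(generated);
// runnable === 'schemas.reduce((errors, s) => errors, [])'
```

Once the annotation is gone, new Function(runnable) no longer throws a syntax error, so oneOf output schemas resolve as expected.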
Fix query parameter parsing for complex nested optional types (#11711)
Fixes an issue where complex query parameters (like startedAt and endedAt date range filters) would fail with "Expected object, received string" errors when using the listTraces API.
The fix addresses two issues:
.partial() on already-optional fields)
Full Changelog: 8edb597
Add embedded documentation support for Mastra packages (#11472)
Mastra packages now include embedded documentation in the published npm package under dist/docs/. This enables coding agents and AI assistants to understand and use the framework by reading documentation directly from node_modules.
Each package includes:
Documentation is driven by the packages frontmatter field in MDX files, which maps docs to their corresponding packages. CI validation ensures all docs include this field.
Fixed agent network not returning text response when routing agent handles requests without delegation. (#11497)
What changed:
Why this matters:
Previously, when using toAISdkV5Stream or networkRoute() outside of the Mastra Studio UI, no text content was returned when the routing agent handled requests directly. This fix ensures consistent behavior across all API routes.
Fixes #11219
Refactor the MessageList class from ~4000 LOC monolith to ~850 LOC with focused, single-responsibility modules. This improves maintainability, testability, and makes the codebase easier to understand. (#11658)
message-list/
├── message-list.ts # Main class (~850 LOC, down from ~4000)
├── adapters/ # SDK format conversions
│ ├── AIV4Adapter.ts # MastraDBMessage <-> AI SDK V4
│ └── AIV5Adapter.ts # MastraDBMessage <-> AI SDK V5
├── cache/
│ └── CacheKeyGenerator.ts # Deduplication keys
├── conversion/
│ ├── input-converter.ts # Any format -> MastraDBMessage
│ ├── output-converter.ts # MastraDBMessage -> SDK formats
│ ├── step-content.ts # Step content extraction
│ └── to-prompt.ts # LLM prompt formatting
├── detection/
│ └── TypeDetector.ts # Format identification
├── merge/
│ └── MessageMerger.ts # Streaming merge logic
├── state/
│ └── MessageStateManager.ts # Source & persistence tracking
└── utils/
└── provider-compat.ts # Provider-specific fixes
Fix autoresume not working correctly in useChat (#11486)
Improves AI SDK message conversion for Braintrust Thread view: (#11673)
result and v5 output fields for tool results
Added startExclusive and endExclusive options to dateRange filter for message queries. (#11479)
What changed: The filter.dateRange parameter in listMessages() and Memory.recall() now supports startExclusive and endExclusive boolean options. When set to true, messages with timestamps exactly matching the boundary are excluded from results.
Why this matters: Enables cursor-based pagination for chat applications. When new messages arrive during a session, offset-based pagination can skip or duplicate messages. Using endExclusive: true with the oldest message's timestamp as a cursor ensures consistent pagination without gaps or duplicates.
Example:
// Get first page
const page1 = await memory.recall({
threadId: 'thread-123',
perPage: 10,
orderBy: { field: 'createdAt', direction: 'DESC' },
});
// Get next page using cursor-based pagination
const oldestMessage = page1.messages[page1.messages.length - 1];
const page2 = await memory.recall({
threadId: 'thread-123',
perPage: 10,
orderBy: { field: 'createdAt', direction: 'DESC' },
filter: {
dateRange: {
end: oldestMessage.createdAt,
endExclusive: true, // Excludes the cursor message
},
},
});
Unified getWorkflowRunById and getWorkflowRunExecutionResult into a single API that returns WorkflowState with both metadata and execution state. (#11429)
What changed:
getWorkflowRunById now returns a unified WorkflowState object containing metadata (runId, workflowName, resourceId, createdAt, updatedAt) along with processed execution state (status, result, error, payload, steps)
Added a fields parameter to request only specific fields for better performance
Added a withNestedWorkflows parameter to control nested workflow step inclusion
Removed getWorkflowRunExecutionResult - use getWorkflowRunById instead (breaking change)
Removed the /execution-result API endpoints from the server (breaking change)
Removed the runExecutionResult() method from the client SDK (breaking change)
Removed the GetWorkflowRunExecutionResultResponse type from the client SDK (breaking change)
Before:
// Had to call two different methods for different data
const run = await workflow.getWorkflowRunById(runId); // Returns raw WorkflowRun with snapshot
const result = await workflow.getWorkflowRunExecutionResult(runId); // Returns processed execution state
After:
// Single method returns everything
const run = await workflow.getWorkflowRunById(runId);
// Returns: { runId, workflowName, resourceId, createdAt, updatedAt, status, result, error, payload, steps }
// Request only specific fields for better performance (avoids expensive step fetching)
const status = await workflow.getWorkflowRunById(runId, { fields: ['status'] });
// Skip nested workflow steps for faster response
const run = await workflow.getWorkflowRunById(runId, { withNestedWorkflows: false });
Why: The previous API required calling two separate methods to get complete workflow run information. This unification simplifies the API surface and gives users control over performance - fetching all steps (especially nested workflows) can be expensive, so the fields and withNestedWorkflows options let users request only what they need.
Remove streamVNext, resumeStreamVNext, and observeStreamVNext methods, call stream, resumeStream and observeStream directly (#11499)
+ const run = await workflow.createRun({ runId: '123' });
- const stream = await run.streamVNext({ inputData: { ... } });
+ const stream = await run.stream({ inputData: { ... } });
Add initial state input to workflow form in studio (#11560)
Adds thread cloning to create independent copies of conversations that can diverge. (#11517)
// Clone a thread
const { thread, clonedMessages } = await memory.cloneThread({
sourceThreadId: 'thread-123',
title: 'My Clone',
options: {
messageLimit: 10, // optional: only copy last N messages
},
});
// Check if a thread is a clone
if (memory.isClone(thread)) {
const source = await memory.getSourceThread(thread.id);
}
// List all clones of a thread
const clones = await memory.listClones('thread-123');
Includes:
POST /api/memory/threads/:threadId/clone
Make agentId optional for memory read operations (getThread, listThreads, listMessages) (#11540)
When workflows use multiple agents sharing the same threadId/resourceId, users can now retrieve threads and messages without specifying an agentId. The server falls back to using storage directly when agentId is not provided.
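The fallback behavior might look roughly like this. The handler shape and method names are assumptions for illustration, not the actual server code:

```typescript
// Sketch: resolve memory through the agent when an agentId is provided,
// otherwise go straight to storage for the shared thread.
async function getThreadHandler(mastra: any, threadId: string, agentId?: string) {
  if (agentId) {
    const agent = mastra.getAgent(agentId);
    const memory = await agent.getMemory();
    return memory.getThread({ threadId });
  }
  // No agentId: fall back to storage directly
  return mastra.getStorage().getThreadById({ threadId });
}
```

This is why a workflow whose agents all write to the same threadId/resourceId can now read the thread back without having to pick one of those agents arbitrarily.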
Breaking Change: memory.readOnly has been moved to memory.options.readOnly (#11523)
The readOnly option now lives inside memory.options alongside other memory configuration like lastMessages and semanticRecall.
Before:
agent.stream('Hello', {
memory: {
thread: threadId,
resource: resourceId,
readOnly: true,
},
});
After:
agent.stream('Hello', {
memory: {
thread: threadId,
resource: resourceId,
options: {
readOnly: true,
},
},
});
Migration: Run the codemod to update your code automatically:
npx @mastra/codemod@beta v1/memory-readonly-to-options .
This also fixes issue #11519 where readOnly: true was being ignored and messages were saved to memory anyway.
Deprecate default: { enabled: true } observability configuration (#11674)
The shorthand default: { enabled: true } configuration is now deprecated and will be removed in a future version. Users should migrate to explicit configuration with DefaultExporter, CloudExporter, and SensitiveDataFilter.
Before (deprecated):
import { Observability } from '@mastra/observability';
const mastra = new Mastra({
observability: new Observability({
default: { enabled: true },
}),
});
After (recommended):
import { Observability, DefaultExporter, CloudExporter, SensitiveDataFilter } from '@mastra/observability';
const mastra = new Mastra({
observability: new Observability({
configs: {
default: {
serviceName: 'mastra',
exporters: [new DefaultExporter(), new CloudExporter()],
spanOutputProcessors: [new SensitiveDataFilter()],
},
},
}),
});
The explicit configuration makes it clear exactly what exporters and processors are being used, improving code readability and maintainability.
A deprecation warning will be logged when using the old configuration pattern.
Fix processor tracing to create individual spans per processor (#11683)
Processor spans are now named per individual processor (e.g., input processor: validator) instead of using combined workflow IDs
Added INPUT_STEP_PROCESSOR and OUTPUT_STEP_PROCESSOR entity types for finer-grained tracing
Changed the processorType span attribute to processorExecutor with values 'workflow' or 'legacy'
Add completion validation to agent networks using custom scorers (#11562)
You can now validate whether an agent network has completed its task by passing MastraScorers to agent.network(). When validation fails, the network automatically retries with feedback injected into the conversation.
Example: Creating a scorer to verify test coverage
import { createScorer } from '@mastra/core/evals';
import { z } from 'zod';
// Create a scorer that checks if tests were written
const testsScorer = createScorer({
id: 'tests-written',
description: 'Validates that unit tests were included in the response',
type: 'agent',
}).generateScore({
description: 'Return 1 if tests are present, 0 if missing',
outputSchema: z.number(),
createPrompt: ({ run }) => `
Does this response include unit tests?
Response: ${run.output}
Return 1 if tests are present, 0 if not.
`,
});
// Use the scorer with agent.network()
const stream = await agent.network('Implement a fibonacci function with tests', {
completion: {
scorers: [testsScorer],
strategy: 'all', // all scorers must pass (score >= 0.5)
},
maxSteps: 3,
});
What this enables:
When a scorer returns score: 0, its reason is injected into the conversation so the network can address the gap on the next iteration
Use strategy: 'all' (all scorers must pass) or strategy: 'any' (at least one must pass)
This replaces guesswork with reliable, repeatable validation that ensures agent networks produce outputs meeting your specific requirements.
Unified getWorkflowRunById and getWorkflowRunExecutionResult into a single API that returns WorkflowState with both metadata and execution state. (#11429)
What changed:
getWorkflowRunById now returns a unified WorkflowState object containing metadata (runId, workflowName, resourceId, createdAt, updatedAt) along with processed execution state (status, result, error, payload, steps)
Added a fields parameter to request only specific fields for better performance
Added a withNestedWorkflows parameter to control nested workflow step inclusion
Removed getWorkflowRunExecutionResult - use getWorkflowRunById instead (breaking change)
Removed the /execution-result API endpoints from the server (breaking change)
Removed the runExecutionResult() method from the client SDK (breaking change)
Removed the GetWorkflowRunExecutionResultResponse type from the client SDK (breaking change)
Before:
// Had to call two different methods for different data
const run = await workflow.getWorkflowRunById(runId); // Returns raw WorkflowRun with snapshot
const result = await workflow.getWorkflowRunExecutionResult(runId); // Returns processed execution state
After:
// Single method returns everything
const run = await workflow.getWorkflowRunById(runId);
// Returns: { runId, workflowName, resourceId, createdAt, updatedAt, status, result, error, payload, steps }
// Request only specific fields for better performance (avoids expensive step fetching)
const status = await workflow.getWorkflowRunById(runId, { fields: ['status'] });
// Skip nested workflow steps for faster response
const run = await workflow.getWorkflowRunById(runId, { withNestedWorkflows: false });
Why: The previous API required calling two separate methods to get complete workflow run information. This unification simplifies the API surface and gives users control over performance - fetching all steps (especially nested workflows) can be expensive, so the fields and withNestedWorkflows options let users request only what they need.
Add support for retries and scorers parameters across all createStep overloads.
(#11495)
The createStep function now includes support for the retries and scorers fields across all step creation patterns, enabling step-level retry configuration and AI evaluation support for regular steps, agent-based steps, and tool-based steps.
import { init } from '@mastra/inngest';
import { z } from 'zod';
const { createStep } = init(inngest);
// 1. Regular step with retries
const regularStep = createStep({
id: 'api-call',
inputSchema: z.object({ url: z.string() }),
outputSchema: z.object({ data: z.any() }),
retries: 3, // ← Will retry up to 3 times on failure
execute: async ({ inputData }) => {
const response = await fetch(inputData.url);
return { data: await response.json() };
},
});
// 2. Agent step with retries and scorers
const agentStep = createStep(myAgent, {
retries: 3,
scorers: [{ id: 'accuracy-scorer', scorer: myAccuracyScorer }],
});
// 3. Tool step with retries and scorers
const toolStep = createStep(myTool, {
retries: 2,
scorers: [{ id: 'quality-scorer', scorer: myQualityScorer }],
});
This change ensures API consistency across all createStep overloads. All step types now support retry and evaluation configurations.
This is a non-breaking change - steps without these parameters continue to work exactly as before.
Fixes #9351
Remove streamVNext, resumeStreamVNext, and observeStreamVNext methods, call stream, resumeStream and observeStream directly (#11499)
+ const run = await workflow.createRun({ runId: '123' });
- const stream = await run.streamVNext({ inputData: { ... } });
+ const stream = await run.stream({ inputData: { ... } });
Fix workflow tool not executing when requireApproval is true and tool call is approved (#11538)
Fix agent runs with multiple steps only showing last text chunk in observability tools (#11672)
When an agent model executes multiple steps and generates multiple text chunks, the onFinish payload was only receiving the text from the last step instead of all accumulated text. This caused observability tools like Braintrust to only display the final text chunk. The fix now correctly concatenates all text chunks from all steps.
Fix tool input validation destroying non-plain objects (#11541)
The convertUndefinedToNull function in tool input validation was treating all objects as plain objects and recursively processing them. For objects like Date, Map, URL, and class instances, this resulted in empty objects {} because they have no enumerable own properties.
This fix changes the approach to only recurse into plain objects (objects with Object.prototype or null prototype). All other objects (Date, Map, Set, URL, RegExp, Error, custom class instances, etc.) are now preserved as-is.
Fixes #11502
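The plain-object check described above can be sketched as follows. This is a minimal illustration under stated assumptions; the actual helper in @mastra/core may be named and shaped differently:

```typescript
// Hypothetical sketch: only recurse into plain objects, so Date, Map,
// URL, and class instances are preserved rather than flattened to {}.
function isPlainObject(value: unknown): value is Record<string, unknown> {
  if (typeof value !== 'object' || value === null) return false;
  const proto = Object.getPrototypeOf(value);
  return proto === Object.prototype || proto === null;
}

function convertUndefinedToNull(value: unknown): unknown {
  if (value === undefined) return null;
  if (Array.isArray(value)) return value.map(convertUndefinedToNull);
  if (isPlainObject(value)) {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, convertUndefinedToNull(v)]),
    );
  }
  // Non-plain objects (Date, Map, Set, URL, RegExp, Error, class instances)
  // pass through untouched.
  return value;
}
```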
Fixed client-side tool invocations not being stored in memory. Previously, tool invocations with state 'call' were filtered out before persistence, which incorrectly removed client-side tools. Now only streaming intermediate states ('partial-call') are filtered. (#11630)
Fixed a crash when updating working memory with an empty or null update; existing data is now preserved.
Fixed memory readOnly option not being respected when agents share a RequestContext. Previously, when output processors were resolved, the readOnly check happened too early - before the agent could set its own MastraMemory context. This caused child agents to inherit their parent's readOnly setting when sharing a RequestContext. (#11653)
The readOnly check is now only done at execution time in each processor's processOutputResult method, allowing proper isolation.
Fix network validation not seeing previous iteration results in multi-step tasks (#11691)
The validation LLM was unable to determine task completion for multi-step tasks because it couldn't see what primitives had already executed. Now includes a compact list of completed primitives in the validation prompt.
Fix provider-executed tools (like openai.tools.webSearch()) not working correctly with AI SDK v6 models. The agent's generate() method was ending prematurely with finishReason: 'tool-calls' instead of completing with a text response after tool execution. (#11622)
The issue was that V6 provider tools have type: 'provider' while V5 uses type: 'provider-defined'. The tool preparation code now detects the model version and uses the correct type.
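A minimal sketch of that version-dependent selection. Only the two type strings ('provider' for V6, 'provider-defined' for V5) come from the note above; the surrounding shape is assumed for illustration:

```typescript
// Illustrative only: pick the provider-tool type string by model version.
type ModelVersion = 'v5' | 'v6';

function providerToolType(version: ModelVersion): 'provider' | 'provider-defined' {
  // AI SDK v6 provider tools use type: 'provider'; v5 uses 'provider-defined'.
  return version === 'v6' ? 'provider' : 'provider-defined';
}
```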
fix(core): support LanguageModelV3 in MastraModelGateway.resolveLanguageModel (#11489)
Fixed agent network not returning text response when routing agent handles requests without delegation. (#11497)
What changed:
Why this matters:
Previously, when using toAISdkV5Stream or networkRoute() outside of the Mastra Studio UI, no text content was returned when the routing agent handled requests directly. This fix ensures consistent behavior across all API routes.
Fixes #11219
Add initial state input to workflow form in studio (#11560)
Added missing stream types to @mastra/core/stream for better TypeScript support (#11513)
New types available:
Chunk types: ToolCallChunk, ToolResultChunk, SourceChunk, FileChunk, ReasoningChunk
Payload types: ToolCallPayload, ToolResultPayload, TextDeltaPayload, ReasoningDeltaPayload, FilePayload, SourcePayload
JSON types: JSONValue, JSONObject, JSONArray and their readonly variants
These types are now properly exported, enabling full TypeScript IntelliSense when working with streaming data.
Refactor the MessageList class from ~4000 LOC monolith to ~850 LOC with focused, single-responsibility modules. This improves maintainability, testability, and makes the codebase easier to understand. (#11658)
message-list/
├── message-list.ts # Main class (~850 LOC, down from ~4000)
├── adapters/ # SDK format conversions
│ ├── AIV4Adapter.ts # MastraDBMessage <-> AI SDK V4
│ └── AIV5Adapter.ts # MastraDBMessage <-> AI SDK V5
├── cache/
│ └── CacheKeyGenerator.ts # Deduplication keys
├── conversion/
│ ├── input-converter.ts # Any format -> MastraDBMessage
│ ├── output-converter.ts # MastraDBMessage -> SDK formats
│ ├── step-content.ts # Step content extraction
│ └── to-prompt.ts # LLM prompt formatting
├── detection/
│ └── TypeDetector.ts # Format identification
├── merge/
│ └── MessageMerger.ts # Streaming merge logic
├── state/
│ └── MessageStateManager.ts # Source & persistence tracking
└── utils/
└── provider-compat.ts # Provider-specific fixes
Resolve suspendPayload when a tripwire is triggered in the agentic loop, preventing unresolved promises from hanging. (#11621)
Fix OpenAI reasoning model + memory failing on second generate with "missing item" error (#11492)
When using OpenAI reasoning models with memory enabled, the second agent.generate() call would fail with: "Item 'rs_...' of type 'reasoning' was provided without its required following item."
The issue was that text-start events contain providerMetadata with the text's itemId (e.g., msg_xxx), but this metadata was not being captured. When memory replayed the conversation, the reasoning part had its rs_ ID but the text part was missing its msg_ ID, causing OpenAI to reject the request.
The fix adds handlers for text-start (to capture text providerMetadata) and text-end (to clear it and prevent leaking into subsequent parts).
Fixes #11481
Fix reasoning content being lost when text-start chunk arrives before reasoning-end (#11494)
Some model providers (e.g., ZAI/glm-4.6) return streaming chunks where text-start arrives before reasoning-end. Previously, this would clear the accumulated reasoning deltas, resulting in empty reasoning content in the final message. Now text-start is properly excluded from triggering the reasoning state reset, allowing reasoning-end to correctly save the reasoning content.
Add resumeGenerate method for resuming an agent via generate (#11503)
Add runId and suspendPayload to the fullOutput of the agent stream
Default suspendedToolRunId to an empty string to prevent a null issue
Adds thread cloning to create independent copies of conversations that can diverge. (#11517)
// Clone a thread
const { thread, clonedMessages } = await memory.cloneThread({
sourceThreadId: 'thread-123',
title: 'My Clone',
options: {
messageLimit: 10, // optional: only copy last N messages
},
});
// Check if a thread is a clone
if (memory.isClone(thread)) {
const source = await memory.getSourceThread(thread.id);
}
// List all clones of a thread
const clones = await memory.listClones('thread-123');
Includes:
POST /api/memory/threads/:threadId/clone
Fix runEvals() to automatically save scores to storage, making them visible in Studio observability. (#11516)
Previously, runEvals() would calculate scores but not persist them to storage, requiring users to manually implement score saving via the onItemComplete callback. Scores now automatically save when the target (Agent/Workflow) has an associated Mastra instance with storage configured.
What changed:
Scores are saved when the target exposes a Mastra instance via the Agent (getMastraInstance()) or the Workflow (.mastra getter)
Saved scores include groundTruth (in additionalContext), requestContext, traceId, and spanId
Scores are recorded with source: 'TEST' to distinguish them from live scoring
Migration:
No action required. The onItemComplete workaround for saving scores can be removed if desired, but will continue to work for custom logic.
Example:
const result = await runEvals({
target: mastra.getWorkflow("myWorkflow"),
data: [{ input: {...}, groundTruth: {...} }],
scorers: [myScorer],
});
// Scores are now automatically saved and visible in Studio!
Fix autoresume not working correctly in useChat (#11486)
Fixed module resolution failing on Windows with ERR_INVALID_URL_SCHEME errors. Windows absolute paths (e.g., C:\path\to\file) are now correctly skipped during node_modules resolution instead of being treated as package names. (#11639)
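The path check described above amounts to recognizing drive-letter paths. A hedged sketch, assuming a simple regex test; the real resolver logic may differ:

```typescript
// Hypothetical sketch: detect Windows absolute paths (e.g. C:\path\to\file)
// so they are skipped during node_modules resolution instead of being
// treated as bare package names.
function isWindowsAbsolutePath(specifier: string): boolean {
  return /^[A-Za-z]:[\\/]/.test(specifier);
}
```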
Add Bun runtime detection for bundler platform selection (#11548)
When running under Bun, the bundler now uses neutral esbuild platform instead of node to preserve Bun-specific globals (like Bun.s3). This fixes compatibility issues where Bun APIs were being incorrectly transformed during the build process.
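Runtime detection along these lines can be sketched as follows. Illustrative only; the bundler's actual check may differ, though the `Bun` global is what Bun itself exposes:

```typescript
// Sketch: choose the esbuild platform based on the current runtime.
function esbuildPlatform(): 'neutral' | 'node' {
  // Under Bun, use 'neutral' so Bun-specific globals (e.g. Bun.s3)
  // are not transformed away by node-targeted bundling.
  const isBun = typeof (globalThis as { Bun?: unknown }).Bun !== 'undefined';
  return isBun ? 'neutral' : 'node';
}
```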
Improved file persistence in dev mode. Files created by mastra dev are now saved in the public directory, so you can commit them to version control or ignore them via .gitignore. (#11234)
Fixed Windows crash where the Mastra dev server failed to start with ERR_UNSUPPORTED_ESM_URL_SCHEME error. The deployer now correctly handles Windows file paths. (#11340)
Users no longer need to install the ai package as a peer dependency; this package now includes the necessary types internally. (#11625)
Fixed inconsistent query parameter handling across server adapters. (#11429)
What changed: Query parameters are now processed consistently across all server adapters (Express, Hono, Fastify, Koa). Added internal helper normalizeQueryParams and ParsedRequestParams type to @mastra/server for adapter implementations.
Why: Different HTTP frameworks handle query parameters differently - some return single strings while others return arrays for repeated params like ?tag=a&tag=b. This caused type inconsistencies that could lead to validation failures in certain adapters.
User impact: None for typical usage - HTTP endpoints and client SDK behavior are unchanged. If you extend server adapter classes and override getParams or parseQueryParams, update your implementation to use Record<string, string | string[]> for query parameters.
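A hedged sketch of what such normalization might look like. The helper name comes from the note above, but the exact signature and behavior are assumptions:

```typescript
// Illustrative sketch: normalize framework-specific query params into the
// shared Record<string, string | string[]> shape. Repeated params stay as
// arrays; a single-element array collapses to a plain string.
function normalizeQueryParams(
  raw: Record<string, undefined | string | string[]>,
): Record<string, string | string[]> {
  const out: Record<string, string | string[]> = {};
  for (const [key, value] of Object.entries(raw)) {
    if (value === undefined) continue; // drop absent params
    // ?tag=a&tag=b arrives as ['a', 'b'] in some frameworks; keep the array.
    out[key] = Array.isArray(value) && value.length === 1 ? value[0] : value;
  }
  return out;
}
```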
feat: Add Fastify and Koa server adapters (#11568)
Introduces two new server adapters for Mastra: @mastra/fastify and @mastra/koa.
Both adapters provide full MastraServerBase implementation including route registration, streaming responses, multipart uploads, auth middleware, and MCP transport support.
import Fastify from 'fastify';
import { MastraServer } from '@mastra/fastify';
import { mastra } from './mastra';
const app = Fastify();
const server = new MastraServer({ app, mastra });
await server.init();
app.listen({ port: 4111 });
import Koa from 'koa';
import bodyParser from 'koa-bodyparser';
import { MastraServer } from '@mastra/koa';
import { mastra } from './mastra';
const app = new Koa();
app.use(bodyParser());
const server = new MastraServer({ app, mastra });
await server.init();
app.listen(4111);
Fixed inconsistent query parameter handling across server adapters. (#11429)
What changed: Query parameters are now processed consistently across all server adapters (Express, Hono, Fastify, Koa). Added internal helper normalizeQueryParams and ParsedRequestParams type to @mastra/server for adapter implementations.
Why: Different HTTP frameworks handle query parameters differently - some return single strings while others return arrays for repeated params like ?tag=a&tag=b. This caused type inconsistencies that could lead to validation failures in certain adapters.
User impact: None for typical usage - HTTP endpoints and client SDK behavior are unchanged. If you extend server adapter classes and override getParams or parseQueryParams, update your implementation to use Record<string, string | string[]> for query parameters.
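As a rough illustration of the normalization these adapters now share (the helper below is a simplified sketch, not the actual @mastra/server implementation):

```typescript
// Simplified sketch of query normalization: repeated params stay arrays,
// single values become strings, so every adapter yields the same shape.
type ParsedQuery = Record<string, string | string[]>;

function normalizeQueryParams(raw: Record<string, unknown>): ParsedQuery {
  const out: ParsedQuery = {};
  for (const [key, value] of Object.entries(raw)) {
    if (Array.isArray(value)) {
      out[key] = value.map(String); // e.g. ?tag=a&tag=b -> ['a', 'b']
    } else if (value !== undefined && value !== null) {
      out[key] = String(value);
    }
  }
  return out;
}
```

Whatever the underlying framework returns, the adapter hands routes a consistent Record<string, string | string[]>.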
Add support for retries and scorers parameters across all createStep overloads.
(#11495)
The createStep function now includes support for the retries and scorers fields across all step creation patterns, enabling step-level retry configuration and AI evaluation support for regular steps, agent-based steps, and tool-based steps.
import { Inngest } from 'inngest';
import { init } from '@mastra/inngest';
import { z } from 'zod';

const inngest = new Inngest({ id: 'my-app' }); // your Inngest client
const { createStep } = init(inngest);
// 1. Regular step with retries
const regularStep = createStep({
id: 'api-call',
inputSchema: z.object({ url: z.string() }),
outputSchema: z.object({ data: z.any() }),
retries: 3, // ← Will retry up to 3 times on failure
execute: async ({ inputData }) => {
const response = await fetch(inputData.url);
return { data: await response.json() };
},
});
// 2. Agent step with retries and scorers
const agentStep = createStep(myAgent, {
retries: 3,
scorers: [{ id: 'accuracy-scorer', scorer: myAccuracyScorer }],
});
// 3. Tool step with retries and scorers
const toolStep = createStep(myTool, {
retries: 2,
scorers: [{ id: 'quality-scorer', scorer: myQualityScorer }],
});
This change ensures API consistency across all createStep overloads. All step types now support retry and evaluation configurations.
This is a non-breaking change - steps without these parameters continue to work exactly as before.
Fixes #9351
Remove streamVNext, resumeStreamVNext, and observeStreamVNext methods, call stream, resumeStream and observeStream directly (#11499)
+ const run = await workflow.createRun({ runId: '123' });
- const stream = await run.streamVNext({ inputData: { ... } });
+ const stream = await run.stream({ inputData: { ... } });
Add cron scheduling support to Inngest workflows. Workflows can now be automatically triggered on a schedule by adding a cron property along with optional inputData and initialState: (#11518)
const workflow = createWorkflow({
id: 'scheduled-workflow',
inputSchema: z.object({ value: z.string() }),
outputSchema: z.object({ result: z.string() }),
steps: [step1],
cron: '0 0 * * *', // Run daily at midnight
inputData: { value: 'scheduled-run' }, // Optional inputData for the scheduled workflow run
initialState: { count: 0 }, // Optional initialState for the scheduled workflow run
});
feat: Add Fastify and Koa server adapters (#11568)
Introduces two new server adapters for Mastra: @mastra/fastify (Fastify) and @mastra/koa (Koa).
Both adapters provide full MastraServerBase implementation including route registration, streaming responses, multipart uploads, auth middleware, and MCP transport support.
import Fastify from 'fastify';
import { MastraServer } from '@mastra/fastify';
import { mastra } from './mastra';
const app = Fastify();
const server = new MastraServer({ app, mastra });
await server.init();
app.listen({ port: 4111 });
import Koa from 'koa';
import bodyParser from 'koa-bodyparser';
import { MastraServer } from '@mastra/koa';
import { mastra } from './mastra';
const app = new Koa();
app.use(bodyParser());
const server = new MastraServer({ app, mastra });
await server.init();
app.listen(4111);
Added startExclusive and endExclusive options to dateRange filter for message queries. (#11479)
What changed: The filter.dateRange parameter in listMessages() and Memory.recall() now supports startExclusive and endExclusive boolean options. When set to true, messages with timestamps exactly matching the boundary are excluded from results.
Why this matters: Enables cursor-based pagination for chat applications. When new messages arrive during a session, offset-based pagination can skip or duplicate messages. Using endExclusive: true with the oldest message's timestamp as a cursor ensures consistent pagination without gaps or duplicates.
Example:
// Get first page
const page1 = await memory.recall({
threadId: 'thread-123',
perPage: 10,
orderBy: { field: 'createdAt', direction: 'DESC' },
});
// Get next page using cursor-based pagination
const oldestMessage = page1.messages[page1.messages.length - 1];
const page2 = await memory.recall({
threadId: 'thread-123',
perPage: 10,
orderBy: { field: 'createdAt', direction: 'DESC' },
filter: {
dateRange: {
end: oldestMessage.createdAt,
endExclusive: true, // Excludes the cursor message
},
},
});
Adds thread cloning to create independent copies of conversations that can diverge. (#11517)
// Clone a thread
const { thread, clonedMessages } = await memory.cloneThread({
sourceThreadId: 'thread-123',
title: 'My Clone',
options: {
messageLimit: 10, // optional: only copy last N messages
},
});
// Check if a thread is a clone
if (memory.isClone(thread)) {
const source = await memory.getSourceThread(thread.id);
}
// List all clones of a thread
const clones = await memory.listClones('thread-123');
Includes:
POST /api/memory/threads/:threadId/clone
Fixed client-side tool invocations not being stored in memory. Previously, tool invocations with state 'call' were filtered out before persistence, which incorrectly removed client-side tools. Now only streaming intermediate states ('partial-call') are filtered. (#11630)
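The new persistence rule can be sketched as follows (types and names here are hypothetical, chosen to illustrate the behavior described above):

```typescript
// Hypothetical sketch of the fixed persistence filter: only streaming
// intermediate states ('partial-call') are dropped; completed 'call' states,
// including client-side tools awaiting execution, are now kept.
type InvocationState = 'partial-call' | 'call' | 'result';

const shouldPersist = (state: InvocationState): boolean =>
  state !== 'partial-call';
```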
Fixed a crash when updating working memory with an empty or null update; existing data is now preserved.
Deprecate default: { enabled: true } observability configuration (#11674)
The shorthand default: { enabled: true } configuration is now deprecated and will be removed in a future version. Users should migrate to explicit configuration with DefaultExporter, CloudExporter, and SensitiveDataFilter.
Before (deprecated):
import { Observability } from '@mastra/observability';
const mastra = new Mastra({
observability: new Observability({
default: { enabled: true },
}),
});
After (recommended):
import { Observability, DefaultExporter, CloudExporter, SensitiveDataFilter } from '@mastra/observability';
const mastra = new Mastra({
observability: new Observability({
configs: {
default: {
serviceName: 'mastra',
exporters: [new DefaultExporter(), new CloudExporter()],
spanOutputProcessors: [new SensitiveDataFilter()],
},
},
}),
});
The explicit configuration makes it clear exactly what exporters and processors are being used, improving code readability and maintainability.
A deprecation warning will be logged when using the old configuration pattern.
Fix processor tracing to create individual spans per processor (#11683)
- Spans are now named per individual processor (e.g., "input processor: validator") instead of using combined workflow IDs
- Added INPUT_STEP_PROCESSOR and OUTPUT_STEP_PROCESSOR entity types for finer-grained tracing
- Added a processorType span attribute to processorExecutor with values 'workflow' or 'legacy'
Fix trace-level sampling to sample entire traces instead of individual spans (#11676)
Previously, sampling decisions were made independently for each span, causing fragmented traces where some spans were sampled and others were not. This defeated the purpose of ratio or custom sampling strategies.
Now:
Fixes #11504
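The idea amounts to making one sampling decision per trace ID and reusing it for every span in that trace. A minimal sketch (hypothetical helper, not the Mastra internals):

```typescript
// One sampling decision per trace: the first span seen for a traceId decides,
// and every later span in that trace reuses the cached decision, so a trace
// is either fully sampled or fully dropped.
const decisions = new Map<string, boolean>();

function shouldSampleTrace(traceId: string, ratio: number): boolean {
  let decision = decisions.get(traceId);
  if (decision === undefined) {
    decision = Math.random() < ratio;
    decisions.set(traceId, decision);
  }
  return decision;
}
```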
Fix thread timestamps being returned in incorrect timezone from listThreadsByResourceId (#11498)
The method was not using the timezone-aware columns (createdAtZ/updatedAtZ), causing timestamps to be interpreted in local timezone instead of UTC. Now correctly uses TIMESTAMPTZ columns with fallback for legacy data.
Add missing loading state handlers to TanStack Query hooks. Components now properly show skeleton/loading UI instead of returning null or rendering incomplete states while data is being fetched. (#11681)
Dedupe Avatar component by removing UI avatar and using DS Avatar with size variants (#11637)
Consolidate date picker components by removing duplicate DatePicker and Calendar components. DateField now uses the DayPicker wrapper from date-time-picker folder directly. (#11649)
Consolidate tabs components: remove redundant implementations and add onClose prop support (#11650)
Use the same Button component everywhere. Remove duplicates. (#11635)
Add initial state input to workflow form in studio (#11560)
Remove unused files and dependencies identified by Knip (#11677)
Fix react/react-dom version mismatch. (#11620)
Add dynamic vectorStore resolver support for multi-tenant applications (#11542)
The vectorStore option in createVectorQueryTool and createGraphRAGTool now accepts a resolver function in addition to static instances. This enables multi-tenant setups where each tenant has isolated data in separate PostgreSQL schemas.
Also improves providerOptions type safety by using MastraEmbeddingOptions types instead of a generic Record type.
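The multi-tenant pattern can be sketched as a resolver that returns a per-tenant store instance (the interface and names below are illustrative, not the @mastra/rag types, and the actual resolver signature may differ):

```typescript
// Sketch of a tenant-scoped resolver: each tenant maps to its own store,
// e.g. backed by a separate PostgreSQL schema, with instances cached.
interface VectorStoreLike {
  schemaName: string;
}

const storesByTenant = new Map<string, VectorStoreLike>();

function resolveVectorStore(tenantId: string): VectorStoreLike {
  let store = storesByTenant.get(tenantId);
  if (!store) {
    // In a real setup this might construct e.g. a PgVector pointed at
    // the tenant's schema rather than this plain object.
    store = { schemaName: `tenant_${tenantId}` };
    storesByTenant.set(tenantId, store);
  }
  return store;
}
```

A resolver like this would then be passed as the vectorStore option instead of a static instance, so each query runs against the calling tenant's isolated data.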
Fix TypeScript errors during build declaration generation (#11682)
Updated test file toUIMessage.test.ts to match current @mastra/core types:
- Changed the error property from string to an Error object (per the StepFailure type)
- Added a resumeSchema property to tool-call-approval payloads (per the ToolCallApprovalPayload type)
- Added zod as a peer/dev dependency for test type support
Fixed agent network not returning text response when the routing agent handles requests without delegation. (#11497)
Previously, when using toAISdkV5Stream or networkRoute() outside of the Mastra Studio UI, no text content was returned when the routing agent handled requests directly. This fix ensures consistent behavior across all API routes.
Fixes #11219
Display network completion validation results and scorer feedback in the Playground when viewing agent network runs, letting users see pass/fail status and actionable feedback from completion scorers (#11562)
Note: Release notes were truncated due to GitHub's 125,000 character limit. See the full changelog details at the link below.
Full Changelog: b3e7a74
Fix data chunk property filtering to only include type, data, and id properties (#11477)
Previously, when isDataChunkType checks were performed, the entire chunk object was returned, potentially letting extra properties such as from, runId, and metadata slip through. This could cause issues with useChat and other UI components.
Now, all locations that handle DataChunkType properly destructure and return only the allowed properties:
- type (required): the chunk type identifier, starting with "data-"
- data (required): the actual data payload
- id (optional): an optional identifier for the chunk

Add embedderOptions support to Memory for AI SDK 5+ provider-specific embedding options (#11462)
With AI SDK 5+, embedding models no longer accept options in their constructor. Options like outputDimensionality for Google embedding models must now be passed when calling embed() or embedMany(). This change adds embedderOptions to Memory configuration to enable passing these provider-specific options.
You can now configure embedder options when creating Memory:
```ts
import { Memory } from '@mastra/core';
import { google } from '@ai-sdk/google';

// Before: no way to specify providerOptions
const memory = new Memory({
  embedder: google.textEmbeddingModel('text-embedding-004'),
});

// After: pass embedderOptions with providerOptions
const memoryWithOptions = new Memory({
  embedder: google.textEmbeddingModel('text-embedding-004'),
  embedderOptions: {
    providerOptions: {
      google: {
        outputDimensionality: 768,
        taskType: 'RETRIEVAL_DOCUMENT',
      },
    },
  },
});
```
This is especially important for:
- text-embedding-004: control the output dimensions (default 768)
- gemini-embedding-001: reduce from the default 3072 dimensions to stay under pgvector's 2000-dimension limit for HNSW indexes

Fixes #8248
Fix Anthropic API error when tool calls have empty input objects (#11474)
Fixes issue #11376 where Anthropic models would fail with error "messages.17.content.2.tool_use.input: Field required" when a tool call in a previous step had an empty object {} as input.
The fix adds proper reconstruction of tool call arguments when converting messages to AIV5 model format. Tool-result parts now correctly include the input field from the matching tool call, which is required by Anthropic's API validation.
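The reconstruction can be sketched roughly like this (simplified message-part shapes; the actual implementation lives in Mastra's message conversion code):

```typescript
type ToolCallPart = { type: "tool-call"; toolCallId: string; input: Record<string, unknown> };
type ToolResultPart = { type: "tool-result"; toolCallId: string; input?: Record<string, unknown>; output: unknown };
type Part = ToolCallPart | ToolResultPart;

// Find the original arguments for a tool call id across earlier parts.
function findToolCallArgs(parts: Part[], toolCallId: string): Record<string, unknown> {
  for (const part of parts) {
    if (part.type === "tool-call" && part.toolCallId === toolCallId) {
      return part.input;
    }
  }
  return {}; // Anthropic requires the field to exist, even when empty
}

// Populate the input field on tool-result parts so Anthropic's validation passes.
function withReconstructedInputs(parts: Part[]): Part[] {
  return parts.map(part =>
    part.type === "tool-result" && part.input === undefined
      ? { ...part, input: findToolCallArgs(parts, part.toolCallId) }
      : part,
  );
}
```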
Changes:
- Added a findToolCallArgs() helper method to search through messages and retrieve the original tool call arguments
- Updated aiV5UIMessagesToAIV5ModelMessages() to populate the input field on tool-result parts

Fixed an issue where deprecated Groq models were shown during template creation. The model selection now filters out models marked as deprecated, displaying only active and supported models. (#11445)
Fix AI SDK v6 (specificationVersion: "v3") model support in sub-agent calls. Previously, when a parent agent invoked a sub-agent with a v3 model through the agents property, the version check only matched "v2", causing v3 models to incorrectly fall back to legacy streaming methods and throw "V2 models are not supported for streamLegacy" error. (#11452)
The fix updates version checks in listAgentTools and llm-mapping-step.ts to use the centralized supportedLanguageModelSpecifications array which includes both v2 and v3.
Also adds missing v3 test coverage to tool-handling.test.ts to prevent regression.
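The centralized check amounts to a membership test. A sketch (the array here mirrors the list described above; @mastra/core's actual export may differ in detail):

```typescript
// Hypothetical mirror of the centralized list in @mastra/core.
const supportedLanguageModelSpecifications: readonly string[] = ["v2", "v3"];

function isSupportedSpecification(specificationVersion: string): boolean {
  // Replaces the old strict `=== "v2"` comparison that excluded v3 models.
  return supportedLanguageModelSpecifications.includes(specificationVersion);
}
```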
Fixed "Transforms cannot be represented in JSON Schema" error when using Zod v4 with structuredOutput (#11466)
When using schemas with .optional(), .nullable(), .default(), or .nullish().default("") patterns with structuredOutput and Zod v4, users would encounter an error because the OpenAI schema compatibility layer adds transforms that Zod v4's native toJSONSchema() cannot handle.
The fix uses Mastra's transform-safe zodToJsonSchema function which gracefully handles transforms by using the unrepresentable: 'any' option.
Also exported isZodType utility from @mastra/schema-compat and updated it to detect both Zod v3 (_def) and Zod v4 (_zod) schemas.
Improved test description in ModelsDevGateway to clearly reflect the behavior being tested (#11460)
Fix npm resolving wrong @mastra/server version (#11467)
Changed @mastra/server dependency from workspace:^ to workspace:* to prevent npm from resolving to incompatible stable versions (e.g., 1.0.3) instead of the required beta versions.
Remove extra console log statements in node-modules-extension-resolver (#11470)
Remove pg-promise dependency and use pg.Pool directly (#11450)
BREAKING CHANGE: This release replaces pg-promise with vanilla node-postgres (pg).
- store.pgp: the pg-promise library instance is no longer exposed
- { client: pgPromiseDb } is no longer supported; use { pool: pgPool } instead
- max and idleTimeoutMillis must now be passed via pgPoolOptions
- store.pool: exposes the underlying pg.Pool for direct database access or ORM integration (e.g., Drizzle)
- store.db: provides a DbClient interface with methods like one(), any(), tx(), etc.
- store.db.connect(): acquires a client for session-level operations

```ts
// Before (pg-promise)
import pgPromise from 'pg-promise';

const pgp = pgPromise();
const client = pgp(connectionString);
const store = new PostgresStore({ id: 'my-store', client });
```

```ts
// After (pg.Pool)
import { Pool } from 'pg';

const pool = new Pool({ connectionString });
const store = new PostgresStore({ id: 'my-store', pool });

// Use store.pool with any library that accepts a pg.Pool
```
Added exportSchemas() function to generate Mastra database schema as SQL DDL without a database connection. (#11448)
What's New
You can now export your Mastra database schema as SQL DDL statements without connecting to a database, which is useful when you need the DDL ahead of time (for example, in migration or provisioning workflows).
Example
```ts
import { exportSchemas } from '@mastra/pg';

// Export schema for the default 'public' schema
const ddl = exportSchemas();
console.log(ddl);

// Export schema for a custom schema
const customDdl = exportSchemas('my_schema');
// Creates: CREATE SCHEMA IF NOT EXISTS "my_schema"; and all tables within it
```
fix(schema-compat): handle undefined values in optional fields for OpenAI compat layers (#11469)
When a Zod schema has nested objects with .partial(), the optional fields would fail validation with "expected string, received undefined" errors. This occurred because the OpenAI schema compat layer converted .optional() to .nullable(), which only accepts null values, not undefined.
Changed .nullable() to .nullish() so that optional fields now accept both null (when explicitly provided by the LLM) and undefined (when fields are omitted entirely).
Fixes #11457
Full Changelog: 4837644
Fixed semantic recall fetching all thread messages instead of only matched ones. (#11435)
When using semanticRecall with scope: 'thread', the processor was incorrectly fetching all messages from the thread instead of just the semantically matched messages with their context. This caused memory to return far more messages than expected when topK and messageRange were set to small values.
Fixes #11428
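For reference, a configuration where the bug was most visible (option shape per Memory's semanticRecall config; exact context semantics are simplified here):

```typescript
// With scope: 'thread', recall should now return only the topK matched
// messages plus a small window of context -- not the whole thread.
const semanticRecallOptions = {
  scope: "thread" as const, // search only the current thread
  topK: 2,                  // at most 2 semantically matched messages
  messageRange: 1,          // 1 neighboring message of context per match
};
```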
Fix SensitiveDataFilter destroying Date objects (#11437)
The deepFilter method now correctly preserves Date objects instead of converting them to empty objects {}. This fixes issues with downstream exporters like BraintrustExporter that rely on Date methods like getTime().
Previously, Object.keys(new Date()) returned [], causing Date objects to be incorrectly converted to {}. The fix adds an explicit check for Date instances before generic object processing.
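The fix is a classic ordering issue: instance checks must come before generic object handling. A minimal sketch of the pattern (not the actual SensitiveDataFilter implementation):

```typescript
// Simplified deep filter: redact sensitive keys but pass Dates through intact.
function deepFilter(value: unknown, redactKeys: string[] = ["apiKey"]): unknown {
  if (value instanceof Date) return value; // check BEFORE generic object handling
  if (Array.isArray(value)) return value.map(v => deepFilter(v, redactKeys));
  if (value !== null && typeof value === "object") {
    // Without the Date guard above, Object.entries(new Date()) is [] and
    // this branch would turn Dates into empty objects {}.
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value)) {
      out[k] = redactKeys.includes(k) ? "[REDACTED]" : deepFilter(v, redactKeys);
    }
    return out;
  }
  return value;
}
```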
Full Changelog: 3d9c9fb