April 22, 2026
Agents can now dispatch slow tool calls as background tasks while the main conversation keeps streaming, then inject results back into the loop when they finish. This comes with new /api/background-tasks endpoints (list/get/SSE stream), client methods (listBackgroundTasks, getBackgroundTask, streamBackgroundTasks), and new BackgroundTasksStorage domain implementations across major storage adapters.
Introduces @mastra/redis, a Redis-backed Mastra storage provider (memory/workflows/scores) using the official node-redis client, with flexible connection options including connection strings or injected preconfigured clients.
NetlifyDeployer adds a target: 'edge' option to deploy as Netlify Edge Functions (Deno at the edge) with CPU-time limits instead of hard wall-clock timeouts—better suited for longer-running AI workflows than 60s serverless limits.
RAG ingestion runs now appear in observability traces alongside agents/workflows, and traces can be filtered by traceId. New lightweight schemas and endpoints (including GET /observability/traces/:traceId/light and storage getTraceLight) reduce timeline payloads dramatically by omitting heavy span fields until details are requested.
Span serialization is hardened to prevent LLM/API credentials and auth headers from leaking into telemetry across routers, gateways, and model wrappers. Additionally, server calls can now set tracingOptions (tags, hideInput, hideOutput) per request to control span labeling and redaction.
RAG ingestion runs now appear in observability traces, next to your agents, workflows, and scorers. (#15512)
You can now filter traces by traceId when listing them.
Added lightweight span and trace schemas (LightSpanRecord, GetTraceLightResponse) that exclude heavy fields like input, output, attributes, and metadata — reducing per-span payload by ~97% for timeline rendering.
Fixed potential credential leakage in observability spans. LLM API keys, authentication headers, and gateway tokens could previously appear in span input or output data sent to telemetry backends. (#15489)
What's fixed
The model router, AI SDK model wrappers (v4 legacy, v5, v6), built-in gateways (Mastra, Netlify, Models.dev, Azure OpenAI), and the voice provider base class now restrict what they expose to spans. Only public identity fields — model ID, provider, gateway ID, voice name — are included. Private configuration such as API keys, Authorization headers, OAuth tokens, and proxy credentials is no longer serialized into spans.
Legacy AI SDK v4 models passed to resolveModelConfig were previously returned unwrapped. They are now wrapped in AISDKV4LegacyLanguageModel, which applies the same serializeForSpan() safety as the v5/v6 wrappers while preserving the LanguageModelV1 interface so existing consumers continue to work.
The SensitiveDataFilter span output processor already redacted values under common field names (apiKey, token, authorization, etc.) when enabled. This fix closes the gap for users who did not have it configured, and for cases where credentials were nested under custom field names that the filter's exact-match list did not cover.
Recommended action
MastraModelGateway and custom voice providers extending MastraVoice are automatically covered — they inherit the new safe default. Override serializeForSpan() only if you want to expose additional non-sensitive fields.
If you attach a custom object to any span field (input, output, attributes, or metadata) that holds enumerable fields with credentials or other sensitive state, add a serializeForSpan() method. TypeScript-private properties are still walked by span serialization because private is compile-time only.
class MyServiceClient {
  constructor(private config: { apiKey: string; endpoint: string }) {}

  // Without this, spans carrying a MyServiceClient instance would
  // serialize `config.apiKey` through every enumerable property.
  serializeForSpan() {
    return { endpoint: this.config.endpoint };
  }
}
Added support for sub-agent version overrides in core execution. Global defaults can be set on the Mastra instance and overridden per generate()/stream() call, with cascading propagation via requestContext. (#15373)
Added per-entry modelSettings, providerOptions, and headers to agent model fallback arrays. Each entry can now specify its own temperature, topP, provider-specific options, and HTTP headers — either statically or as a function of requestContext. Closes #15421. (#15429)
Example
const agent = new Agent({
  model: [
    {
      model: 'google/gemini-2.5-flash',
      maxRetries: 2,
      modelSettings: { temperature: 0.3 },
      providerOptions: { google: { thinkingConfig: { thinkingBudget: 0 } } },
    },
    {
      model: 'openai/gpt-5-mini',
      maxRetries: 2,
      modelSettings: { temperature: 0.7 },
      providerOptions: { openai: { reasoningEffort: 'low' } },
    },
  ],
});
Precedence:
modelSettings and providerOptions: per-fallback entry > call-time stream() / generate() options > agent defaultOptions. modelSettings shallow-merges by key; providerOptions deep-merges recursively, preserving sibling and nested keys.
headers: call-time modelSettings.headers > per-fallback headers > model-router-extracted headers. This preserves the existing Mastra contract from #11275, where runtime headers (typically tracing, auth, tenancy) intentionally override model-level headers.
Added activateAfterIdle setting for observational memory so buffered observations can activate after idle time before the next prompt. (#15365)
Example
Set activateAfterIdle: 300_000 (or "5m") on the observationalMemory config to activate buffered context after 5 minutes of inactivity.
This helps long-running threads reuse compressed context after prompt cache TTLs expire instead of sending a larger raw message window on the next request.
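As a sketch, the setting slots into the observationalMemory config the same way as the other observational memory options in these notes (the import path is assumed):

```typescript
import { Memory } from '@mastra/core/memory'; // import path assumed

const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      // Activate buffered observations after 5 minutes of inactivity;
      // either a millisecond number (300_000) or a duration string works.
      activateAfterIdle: '5m',
    },
  },
});
```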
You can now opt into parent-agent reuse for the separate structured-output pass with structuredOutput: { schema, model, useAgent: true }, which lets the structuring request reuse the parent agent config, including memory. (#15318)
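A sketch of the call shape, with an illustrative schema and model:

```typescript
import { z } from 'zod';

const result = await agent.generate('Summarize the meeting', {
  structuredOutput: {
    schema: z.object({ summary: z.string(), actionItems: z.array(z.string()) }),
    model: 'openai/gpt-5-mini',
    // Reuse the parent agent's config (including memory) for the
    // separate structuring pass instead of a bare model call.
    useAgent: true,
  },
});
```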
Added unique IDs (logId, metricId, scoreId, feedbackId) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (logger.info(), metrics.emit(), addScore(), addFeedback()) are unchanged. (#15242)
For existing ClickHouse and DuckDB observability signal tables, run npx mastra migrate before initializing the store so the new signal-ID schema is applied.
Processor traces now store hook-specific inputs and only include changed outputs, reducing payload size while keeping traces more replayable. If you consume PROCESSOR_RUN payloads directly, update any dashboards or parsers that depend on the previous shape. (#15493)
Fixed CompositeAuth types so typed auth providers, such as SimpleAuth<MyUser> or MastraAuthClerk, can be combined without casts. (#15556)
Updated the provider registry and model documentation with the latest models and providers. (3d83d06)
Fixed browser context reminders breaking prompt cache. Browser reminders are now added as new user messages instead of modifying existing message history. (#15417)
Fixed Harness subagent tracing so delegated runs keep the parent tracing context and show up in the same trace in observability exporters. Fixes #15461. (#15473)
Refactored how assistant messages are constructed during streaming. Messages are now built from the complete chunk sequence after each step instead of being assembled mid-stream. This fixes duplicate OpenAI item IDs (rs_*, msg_*), eliminates empty text parts from streaming artifacts, and ensures provider metadata is correctly attributed. (#15454)
Fixed nested workflows dropping resourceId when executed as a step of a parent workflow. Child workflow snapshots now preserve the parent run's resource association, so tenant-scoped persistence works end-to-end. Closes #15246. (#15447)
const run = await parent.createRun({
  runId: 'run-1',
  resourceId: 'workspace-1',
});
await run.start({ inputData: { ok: true } });
// Before: child snapshots persisted with resourceId: undefined
// After: child snapshots persisted with resourceId: 'workspace-1'
Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
Fixed messages not being persisted when multiple memory processors are used together. Processor state is now correctly passed between chained workflow steps, ensuring all messages are saved. (#14884)
Fix prototype pollution in setNestedValue (@mastra/core/utils) and generateOpenAPIDocument (@mastra/server). (#15565)
setNestedValue now rejects dot-path segments named __proto__, constructor, or prototype, preventing attacker-controlled field paths passed to selectFields from polluting Object.prototype. generateOpenAPIDocument builds its paths map with Object.create(null) so a route path of __proto__ cannot poison the prototype chain.
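An illustrative sketch of the guard (not Mastra's actual source): reject dot-path segments that could reach Object.prototype before assigning, and create intermediate nodes without a prototype chain.

```typescript
// Segments that would let a field path escape into the prototype chain.
const FORBIDDEN = new Set(['__proto__', 'constructor', 'prototype']);

function setNestedValue(target: Record<string, any>, path: string, value: unknown): void {
  const segments = path.split('.');
  if (segments.some(s => FORBIDDEN.has(s))) {
    throw new Error(`Refusing to set unsafe path: ${path}`);
  }
  let node = target;
  for (const segment of segments.slice(0, -1)) {
    // Object.create(null) gives intermediate objects no prototype chain.
    node = node[segment] ??= Object.create(null);
  }
  node[segments[segments.length - 1]] = value;
}
```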
Fixed assistant model attribution so provider and model information is preserved more reliably in stored assistant messages. (#15462)
Loop runs now keep the resolved model on the first step-start, already-attributed step-start parts are left alone, and post-tool assistant continuations preserve their incoming metadata when they merge into an existing assistant message.
This keeps downstream features working with the correct model identity instead of falling back to incomplete metadata or losing it during merge.
Fixed channel webhook handling in Node.js when no execution context is available. (#15441)
Recalled V4 messages now preserve data-* message parts (e.g. data-tool-call-suspended) after a page refresh, so suspended HITL workflows can resume correctly. (#14211)
Fixed structured output to keep persisted assistant text behavior aligned with existing memory recall paths. (#15318)
Fixed processOutputStep not receiving token usage data. Output processors now receive usage (inputTokens, outputTokens, totalTokens) for the current LLM step, enabling per-step cost tracking and token budget enforcement. (#15068)
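A hypothetical sketch of per-step cost tracking; the processor shape and registration are assumptions — only the hook name and the usage fields come from this entry:

```typescript
// Hypothetical output processor: only `usage` and its fields
// (inputTokens, outputTokens, totalTokens) are documented above.
const tokenCostProcessor = {
  name: 'token-cost',
  processOutputStep: async ({ usage }: { usage?: { inputTokens: number; outputTokens: number; totalTokens: number } }) => {
    if (!usage) return;
    // Per-step cost estimate with illustrative per-token pricing.
    const cost = usage.inputTokens * 0.000002 + usage.outputTokens * 0.000008;
    console.log(`step tokens=${usage.totalTokens} est. cost=$${cost.toFixed(6)}`);
  },
};
```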
Fixed requireApproval on tools to accept a function in addition to a boolean. Previously, passing a function for requireApproval on a tool created with createTool was silently ignored and approval was never required. (#15346)
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

createTool({
  id: 'delete-file',
  description: 'Delete a file',
  inputSchema: z.object({ path: z.string() }),
  // Now works: only require approval for paths outside /tmp
  requireApproval: input => !input.path.startsWith('/tmp/'),
  execute: async ({ context }) => {
    // ...
  },
});
Fixed resume errors for suspended agent runs: resumeStream() and resumeGenerate() now return a clear message when storage is missing or the runId is invalid. (#15514)
Fixed OpenAI tool strict mode when requests pass through the model router. strict: true on function tools now survives compatibility prep, so OpenAI Responses models receive strict tool definitions instead of silently downgrading them to non-strict. (#15397)
Added multi-select choices to the Harness ask_user tool. (#15485)
Fixed noisy browser reminders being added to non-browser turns. Browser reminders are now added only when browser context exists (for example, current page URL or title). (#15416)
Fixed dataset.startExperiment hanging forever when targetType is 'workflow'. Workflow experiments now complete normally, honour itemTimeout, and surface failures. Fixes #15453. (#15570)
Fixed PrefillErrorHandler to recover from Qwen/llama.cpp prefill rejections with enable_thinking, so agents retry with a continue reminder instead of failing after skill/tool turns. (#15518)
Add background task execution for agents. Agents can dispatch slow tool calls to run asynchronously while the conversation keeps streaming, and results are injected back into the loop when they complete. (#15307)
Fixed fallback model attribution in agent traces. When an agent fell back after the primary model failed, token usage and cost were reported against the primary model instead of the fallback that actually served the response (e.g. in Langfuse). Fixes #13547. (#15503)
Fixed agent stream errors when providers end a stream without an error payload. (#15435)
Fixed provider-defined tools with custom execute callbacks (e.g. openai.tools.applyPatch) being incorrectly skipped during execution. Previously, all provider-defined tools were assumed to be provider-executed, which meant user-supplied execute functions were never called. Now, provider tools with a custom execute are correctly identified as client-executed. (#14819)
Added model metadata to step-start parts so model changes can be detected across steps, including within a single assistant message. (#15420)
Fixed message serialization to preserve millisecond precision in createdAt timestamps. (#15500)
Fixed workflow streaming in @mastra/ai-sdk so intermediate data-workflow parts stop repeating every completed step output. Added data-workflow-step parts with the full payload for the step that just changed, which reduces stream size for long-running workflows while preserving final workflow outputs. (#15218)
If your UI reads live step outputs during workflow execution, it should now consume data-workflow-step parts in addition to data-workflow. Final workflow snapshots still include the full step outputs.
Fix AI SDK v6 approval replay so ordinary user follow-up turns do not resume stale approval responses. (#15480)
Fixed tool call approvals in AI SDK v6: handleChatStream now automatically routes to resumeStream when the AI SDK v6 native approval flow is used on the client (no extra server-side wiring required). The v6 stream now emits native tool-approval-request parts so useChat can surface approval UI and call addToolApprovalResponse(), while also emitting the existing data-tool-call-approval chunk for backwards compatibility. (#15345)
Fixed AI SDK v6 tool approval streams so requireApproval works with handleChatStream and AssistantChatTransport. (#15345)
Added unique IDs (logId, metricId, scoreId, feedbackId) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (logger.info(), metrics.emit(), addScore(), addFeedback()) are unchanged. (#15242)
For existing ClickHouse and DuckDB observability signal tables, run npx mastra migrate before initializing the store so the new signal-ID schema is applied.
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Added getTraceLight method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like input, output, attributes, and metadata when they are not needed. (#15574)
Added forEachIndex option to run.resume(), run.resumeAsync(), and run.resumeStream(). Use it to resume a single iteration of a suspended .foreach() step while leaving the other iterations suspended. (#15563)
await client
  .getWorkflow('myWorkflow')
  .createRun(runId)
  .resume({
    step: 'approve',
    resumeData: { ok: true },
    forEachIndex: 1, // only resume the second iteration
  });
Add /api/background-tasks routes (SSE stream, list with filters + pagination, get by ID) and matching MastraClient methods (listBackgroundTasks, getBackgroundTask, streamBackgroundTasks). (#15307)
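A sketch of the matching client calls; only the method names come from this entry, and the parameter shapes are illustrative:

```typescript
import { MastraClient } from '@mastra/client-js';

const client = new MastraClient({ baseUrl: 'http://localhost:4111' });

// List with filters + pagination (filter fields illustrative)
const tasks = await client.listBackgroundTasks({ page: 0, perPage: 20 });

// Fetch a single task by ID
const task = await client.getBackgroundTask('task-123');

// Subscribe to the SSE stream of task updates
const stream = await client.streamBackgroundTasks();
```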
Fixed @mastra/client-js to re-export RequestContext so client SDK users can import it from @mastra/client-js. (#15413)
Added observabilityRuntimeStrategy to GetSystemPackagesResponse so clients can read the active observability tracing strategy (realtime, batch-with-updates, insert-only, or event-sourced) reported by the server. (#15512)
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Added target option to NetlifyDeployer for deploying as Netlify Edge Functions. (#13103)
export const mastra = new Mastra({
  deployer: new NetlifyDeployer({
    target: 'edge',
  }),
});
Edge functions run on Deno at the network edge, closer to users, with no hard execution timeout (only a CPU time limit). This makes them a better fit for longer-running AI workflows that may exceed the 60s serverless function timeout.
The default target remains 'serverless', so existing usage is unaffected.
Added @mastra/docker, a Docker container sandbox provider for Mastra workspaces. Executes commands inside local Docker containers using long-lived containers with docker exec. Supports bind mounts, environment variables, container reconnection by label, custom images, and network configuration. Targets local development, CI/CD, air-gapped deployments, and cost-sensitive scenarios where cloud sandboxes are unnecessary. (#14500)
Usage
import { Agent } from '@mastra/core/agent';
import { Workspace } from '@mastra/core/workspace';
import { DockerSandbox } from '@mastra/docker';

const workspace = new Workspace({
  sandbox: new DockerSandbox({
    image: 'node:22-slim',
    timeout: 60_000,
  }),
});

const agent = new Agent({
  name: 'dev-agent',
  model: 'anthropic/claude-opus-4-6',
  workspace,
});
Added unique IDs (logId, metricId, scoreId, feedbackId) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (logger.info(), metrics.emit(), addScore(), addFeedback()) are unchanged. (#15242)
For existing ClickHouse and DuckDB observability signal tables, run npx mastra migrate before initializing the store so the new signal-ID schema is applied.
Fixed DuckDB installs by using a resolvable @duckdb/node-api version range. (#15419)
Added getTraceLight method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like input, output, attributes, and metadata when they are not needed. (#15574)
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Add nack support and deliveryAttempt tracking on the subscriber callback, and enable exactly-once delivery on grouped subscriptions. (#15307)
Added new attribute mappings to the Langfuse exporter so more Mastra attributes are filterable in Langfuse's UI. (#15445)
Observation-level metadata — gen_ai.agent.id, gen_ai.agent.name, mastra.span.type, and gen_ai.operation.name are now mapped to langfuse.observation.metadata.*, making them top-level filterable keys on each observation. This lets you scope Langfuse evaluators to specific agents or span types.
Trace-level attributes — mastra.metadata.traceName and mastra.metadata.version are now mapped to langfuse.trace.name and langfuse.trace.version, enabling custom trace names and version-based filtering.
Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
Improved Langfuse trace batching for streamed runs by adding flushAt and flushInterval controls. (#15460)
Use DiskANN vector_top_k() index for faster vector queries when available (#14913)
LibSQLVector.query() now automatically uses the existing DiskANN index for approximate nearest neighbor search instead of brute-force full table scans, providing 10-25x query speedups on larger datasets. Falls back to brute-force when no index exists.
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Added getTraceLight method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like input, output, attributes, and metadata when they are not needed. (#15574)
Fixed MCP tool strict mode propagation. MCP servers now expose Mastra tool strictness in MCP metadata, and the MCP client restores that flag when rebuilding tools so strict OpenAI tool calling works for MCP-backed tools too. (#15397)
Fixed MCP tools with recursive JSON Schema refs so they stay serializable when loaded. (#15400)
Added activateAfterIdle setting for observational memory so buffered observations can activate after idle time before the next prompt. (#15365)
Example
Set activateAfterIdle: 300_000 (or "5m") on the observationalMemory config to activate buffered context after 5 minutes of inactivity.
This helps long-running threads reuse compressed context after prompt cache TTLs expire instead of sending a larger raw message window on the next request.
Added activateOnProviderChange so observational memory can activate buffered observations and reflections before switching to a different provider or model. (#15420)
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      activateOnProviderChange: true,
    },
  },
});
This helps keep prompt-cache savings when the next step cannot reuse the previous provider's cache.
Fixed early observational memory activations so buffered reflections are only activated when they still leave a healthy active observation set. (#15462)
Before this change, idle-timeout (activateAfterIdle) and model/provider-change (activateOnProviderChange) activations could swap in a buffered reflection too early. In bad cases, that replaced a large raw observation tail with a much smaller mostly-compressed result, which hurt reflection quality.
Early activations now stay buffered unless both of these checks pass:
This update also fixes false provider_change activations when older persisted messages only contain a bare model id like gpt-5.4 while newer turns use the fully qualified provider/modelId form.
Fixed a security issue where several parsing and tracing paths could slow down on malformed or attacker-crafted input. Normal behavior is unchanged, and these packages now handle pathological input in linear time. (#15566)
Fixed other-thread context filtering falling back to the observational memory record timestamp when thread metadata is missing. (#15269)
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Added getTraceLight method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like input, output, attributes, and metadata when they are not needed. (#15574)
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Added getTraceLight method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like input, output, attributes, and metadata when they are not needed. (#15574)
Changed MODEL_CHUNK tool-result span output handling. (#15495)
What changed
MODEL_CHUNK spans for tool-result now omit output for locally executed tools.
TOOL_CALL remains the canonical span for locally executed tool result payloads.
MODEL_CHUNK spans for provider-executed tool-result chunks still include output.
MODEL_CHUNK metadata still includes toolCallId, toolName, and providerExecuted.
Why
This reduces duplicate tool result payloads in traces without dropping provider-emitted tool results that may not have a matching TOOL_CALL span.
Added unique IDs (logId, metricId, scoreId, feedbackId) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (logger.info(), metrics.emit(), addScore(), addFeedback()) are unchanged. (#15242)
For existing ClickHouse and DuckDB observability signal tables, run npx mastra migrate before initializing the store so the new signal-ID schema is applied.
Fixed span serialization replacing tool parameter JSON schemas with lossy summaries like "unknown (required)". JSON schemas in span data are now preserved as-is, keeping full type information for debugging in observability tools like Datadog. Also fixed MODEL_STEP span input showing only a keys summary instead of actual messages for AI SDK v5 providers. (#15404)
Fixed CloudExporter to default to observability.mastra.ai for Mastra platform exports. (#15418)
Improved tracing overhead when filtering spans. Spans dropped by excludeSpanTypes or the internal-span filter (includeInternalSpans: false) now skip payload serialization and retention entirely instead of paying the cost and discarding at export time. (#15487)
OtelBridge.createSpan now returns undefined when no OpenTelemetry SDK is registered, so core generates valid span/trace IDs instead of reusing the OTEL no-op all-zero IDs. This prevents downstream trace exporters from dropping spans and stops the infinite-loop CPU spike in parent-matching. Fixes #15589. (#15591)
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Added getTraceLight method to the observability storage, returning only lightweight span fields needed for timeline rendering. This avoids transferring heavy fields like input, output, attributes, and metadata when they are not needed. (#15574)
Added ErrorBoundary component to catch and display runtime errors in the studio. Wraps routes in the local playground so a crash on one page (e.g. an agent editor referencing an unresolved workspace skill) surfaces a friendly recovery UI with Try again (in-place React reset), Reload page (full browser refresh), and Report issue (opens the Mastra GitHub issues page in a new tab) actions, plus a collapsible stack trace — instead of a blank screen. (#15561)
The fallback is spatially aware: it fills its parent and the icon, heading, and body text scale up on wider containers via Tailwind container queries. Scope the boundary to a single widget to keep the rest of the UI interactive while one panel fails.
Usage
import { ErrorBoundary } from '@mastra/playground-ui';
import { useLocation } from 'react-router';

// Route-level: wrap the router outlet, reset when the path changes
function Layout({ children }) {
  const { pathname } = useLocation();
  return <ErrorBoundary resetKeys={[pathname]}>{children}</ErrorBoundary>;
}

// Scoped: contain the crash to one panel, leave the rest of the tree alone
<ErrorBoundary variant="inline" title="The editor failed to render">
  <AgentEditor />
</ErrorBoundary>;
Props: fallback (node or render prop with { error, errorInfo, reset }), onError for reporting, resetKeys for automatic reset, variant ('section' — fills available space, default; 'inline' — stays compact), and title / description overrides.
Added BrandLoader, a branded pulse-wave loader component for brand moments like app boot or agent thinking. Complements Spinner, which remains the inline utility loader. (#15490)
Added new Logo component to the playground-ui design system. Supports two sizes (sm, md), uses currentColor for theming, and includes an optional outline-on-hover animation that respects prefers-reduced-motion. (#15513)
Added a dedicated trace details page at /traces/:traceId, plus the design-system changes that support it: (#15392)
Button: new link variant (inline, no padding/background/border).
DataKeysAndValues: numOfCol now accepts 3.
DataPanel.Header: minimum height so heading-only headers match the height of ones with button actions.
Fixed an unhandled TypeError in getFileContentType when the URL is relative or malformed. The catch block now falls back to inferring the MIME type from the raw string's file extension and strips query/hash fragments, so inputs like /files/report.pdf, https://x.dev/a.pdf?token=1, and /files/report.pdf#page=2 all resolve to application/pdf instead of rejecting. Closes #15432. (#15433)
Refactored DataKeysAndValues.ValueLink to use the standard as prop for custom link components, replacing the previous LinkComponent prop (#15391)
Added a Foundations/Tokens page to the @mastra/playground-ui Storybook so you can browse all typography, color, spacing, radius, shadow, and animation tokens in one place. (#15475)
New filter UX on the studio's Traces and Logs pages. Click + Add Filter to pick a property and narrow by value; active filters render as editable pills. Filter state lives in the URL so filtered views survive reloads and can be shared by link. Save filters for next time remembers a default; Clear and Remove all filters are one click away. (#15512)
Align BrandLoader geometry with the Mastra logo: match disk positions to the logo path, introduce per-size stroke widths and bubble radii (sm/md/lg), and rebalance the gooey filter for rounder ridge↔disk fillets. Shift the size scale so sm stays, md is now w-8, lg is now w-10, and the old w-16 size is removed. (#15531)
Added ScoresDataList for rendering lists of score evaluation results. (#15339)
Updated PageHeader.Description styling to use text color (neutral2) and simplified top margin (#15389)
Improved visual consistency across Chip, DropdownMenu, Notification, Popover, and toast components — unified radius and border scale. Deduplicated dropdown menu item classes and added max-height scroll handling for long menus. (#15440)
Add Redis storage provider (#11795)
Introduces @mastra/redis, a Redis-backed storage implementation for Mastra built on the official redis (node-redis) client.
Includes support for the core storage domains (memory, workflows, scores) and multiple connection options: connectionString, host/port/db/password, or injecting a pre-configured client for advanced setups (e.g. custom socket/retry settings, Sentinel/Cluster via custom client).
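A sketch of the three connection options; the exported class name is an assumption:

```typescript
import { createClient } from 'redis';
import { RedisStore } from '@mastra/redis'; // export name assumed

// 1. Connection string
const store = new RedisStore({ connectionString: 'redis://localhost:6379' });

// 2. Discrete fields
const store2 = new RedisStore({ host: 'localhost', port: 6379, db: 0, password: 'secret' });

// 3. Inject a preconfigured node-redis client (e.g. Sentinel/Cluster,
//    custom socket/retry settings)
const store3 = new RedisStore({ client: createClient({ url: process.env.REDIS_URL }) });
```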
Fixed MCP tool validation failures when tools use JSON Schema draft 2020-12. Tools from providers like Firecrawl that declare $schema: "https://json-schema.org/draft/2020-12/schema" now validate correctly instead of throwing "no schema with key or ref" errors. (#14530)
Fixed MCP tools with recursive JSON Schema refs so they stay serializable when loaded. (#15400)
You can now tag spans and redact sensitive input or output per request by passing tags, hideInput, or hideOutput in tracingOptions when calling an agent or workflow. (#15512)
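An illustrative call shape; only tags, hideInput, and hideOutput come from this entry:

```typescript
const result = await agent.generate('Summarize this patient record', {
  tracingOptions: {
    tags: ['phi', 'tenant:acme'],
    hideInput: true,   // redact prompt content from the span
    hideOutput: false, // keep the response visible in traces
  },
});
```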
Added a lightweight trace endpoint (GET /observability/traces/:traceId/light) that returns only timeline-relevant span fields, dramatically reducing payload size when rendering trace timelines. Also added a dedicated span endpoint (GET /observability/traces/:traceId/spans/:spanId) to fetch full span details on demand.
Added forEachIndex to the workflow resume request body schema. The /workflows/:workflowId/resume, /resume-async, and /resume-stream endpoints (including their agent-builder equivalents) now accept an optional zero-based forEachIndex so clients can target a specific iteration of a suspended .foreach() step. (#15563)
// POST /workflows/:workflowId/resume
// body
{
  step: 'approve',
  resumeData: { ok: true },
  forEachIndex: 1, // resume only the second iteration; others stay suspended
}
Add /api/background-tasks routes (SSE stream, list with filters + pagination, get by ID) and matching MastraClient methods (listBackgroundTasks, getBackgroundTask, streamBackgroundTasks). (#15307)
Added support for versions field in agent generate and stream request bodies, enabling per-request sub-agent version overrides that propagate through delegation. (#15373)
Fix prototype pollution in setNestedValue (@mastra/core/utils) and generateOpenAPIDocument (@mastra/server). (#15565)
setNestedValue now rejects dot-path segments named __proto__, constructor, or prototype, preventing attacker-controlled field paths passed to selectFields from polluting Object.prototype. generateOpenAPIDocument builds its paths map with Object.create(null) so a route path of __proto__ cannot poison the prototype chain.
Fixed noisy 'Background task manager not available' error log in studio when background tasks are not enabled. The list endpoint now returns an empty list, the get-by-id endpoint returns 404, and the SSE stream endpoint returns an empty stream that closes on disconnect — instead of throwing an HTTP 400 that gets logged as an error. (#15600)
Added unique IDs (logId, metricId, scoreId, feedbackId) to all observability signals, generated automatically at emission time for de-duplication across the framework pipeline and cross-system correlation. User-facing APIs (logger.info(), metrics.emit(), addScore(), addFeedback()) are unchanged. (#15242)
For existing ClickHouse and DuckDB observability signal tables, run npx mastra migrate before initializing the store so the new signal-ID schema is applied.
Added @mastra/tavily integration with first-class Mastra tools for Tavily web search, extract, crawl, and map APIs, and migrated mastracode's web search tools to use it. (#15448)
Add BackgroundTasksStorage domain implementation so @mastra/core background task execution works with any storage adapter. (#15307)
Fixed slow Upstash message saves by using the message index and treating unindexed messages as new, avoiding full database scans. Also adds index-first lookups to updateMessages. Addresses #15386. (#15393)
The following packages were updated with dependency changes only: