2025-10-21
feat(otel-exporter): Add customizable 'exporter' constructor parameter
You can now pass an instantiated TraceExporter subclass into OtelExporter.
This bypasses the default package detection: no TraceExporter is instantiated
automatically when one is passed to the OtelExporter constructor.
feat(arize): Initial release of @mastra/arize observability package
The @mastra/arize package exports an ArizeExporter class that can be used to easily send AI
traces from Mastra to Arize AX, Arize Phoenix, or any OpenInference-compatible collector.
It sends traces using a BatchSpanProcessor over OTLP connections.
It leverages the @mastra/otel-exporter package, reusing OtelExporter for transmission and
span management.
See the README in observability/arize/README.md for more details (#8827)
fix(observability): Add ParentSpanContext to MastraSpans with parentage (#9085)
Update peerdeps to 0.23.0-0 (#9043)
Rename LLM span types and attributes to use Model prefix
BREAKING CHANGE: This release renames AI tracing span types and attribute interfaces to use the "Model" prefix instead of "LLM":
AISpanType.LLM_GENERATION → AISpanType.MODEL_GENERATION
AISpanType.LLM_STEP → AISpanType.MODEL_STEP
AISpanType.LLM_CHUNK → AISpanType.MODEL_CHUNK
LLMGenerationAttributes → ModelGenerationAttributes
LLMStepAttributes → ModelStepAttributes
LLMChunkAttributes → ModelChunkAttributes
InternalSpans.LLM → InternalSpans.MODEL
This change better reflects that these span types apply to all AI models, not just Large Language Models.
Migration guide:
Update all imports: import { ModelGenerationAttributes } from '@mastra/core/ai-tracing'
Update span type references: AISpanType.MODEL_GENERATION
Update InternalSpans usage: InternalSpans.MODEL (#9105)
Update provider registry and model documentation with latest models and providers (c67ca32)
Update provider registry and model documentation with latest models and providers (efb5ed9)
Add deprecation warnings for format:ai-sdk (#9018)
Network routing agent text-delta AI SDK streaming (#8979)
Support writing custom top level stream chunks (#8922)
Consolidate streamVNext logic into stream, move old stream function into streamLegacy (#9092)
Fix incorrect type assertions in Tool class. Created MastraToolInvocationOptions type to properly extend AI SDK's ToolInvocationOptions with Mastra-specific properties (suspend, resumeData, writableStream). Removed unsafe type assertions from tool execution code. (#8510)
fix(core): Fix Gemini message ordering validation errors (#7287, #8053)
Fixes Gemini API "single turn requests" validation error by ensuring the first non-system message is from the user role. This resolves errors when:
Messages start with assistant role (e.g., from memory truncation)
Tool-call sequences begin with assistant messages
Breaking Change: Empty or system-only message lists now throw an error instead of adding a placeholder user message, preventing confusing LLM responses.
This fix handles both issue #7287 (tool-call ordering) and #8053 (single-turn validation) by inserting a placeholder user message when needed. (#7287)
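The reordering described above can be sketched as a small pure function (a simplified illustration, not Mastra's actual implementation; the `Message` shape and the placeholder content are assumptions):

```typescript
type Role = 'system' | 'user' | 'assistant' | 'tool';
interface Message { role: Role; content: string; }

// Ensure the first non-system message is from the user, as Gemini requires.
// Throws on empty or system-only lists instead of silently succeeding.
function ensureUserFirst(messages: Message[]): Message[] {
  const firstNonSystem = messages.findIndex(m => m.role !== 'system');
  if (firstNonSystem === -1) {
    throw new Error('Message list must contain at least one non-system message');
  }
  if (messages[firstNonSystem].role === 'user') return messages;
  // Insert a placeholder user message before the assistant/tool message.
  const result = [...messages];
  result.splice(firstNonSystem, 0, { role: 'user', content: '.' });
  return result;
}
```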
Add support for external trace and parent span IDs in TracingOptions. This enables integration with external tracing systems by allowing new AI traces to be started with existing traceId and parentSpanId values. The implementation includes OpenTelemetry-compatible ID validation (32 hex chars for trace IDs, 16 hex chars for span IDs). (#9053)
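The ID validation mentioned above can be sketched as follows. The changelog only specifies lengths (32 hex chars for trace IDs, 16 for span IDs); the lowercase-hex regexes and the all-zero-is-invalid check are assumptions drawn from the OpenTelemetry spec, and the function names are illustrative:

```typescript
// OpenTelemetry-compatible ID validation: trace IDs are 32 lowercase hex
// characters, span IDs are 16. All-zero IDs are treated as invalid.
const TRACE_ID_RE = /^[0-9a-f]{32}$/;
const SPAN_ID_RE = /^[0-9a-f]{16}$/;

function isValidTraceId(id: string): boolean {
  return TRACE_ID_RE.test(id) && id !== '0'.repeat(32);
}

function isValidSpanId(id: string): boolean {
  return SPAN_ID_RE.test(id) && id !== '0'.repeat(16);
}
```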
Updated watch and watchAsync methods to use proper function overloads instead of generic conditional types, ensuring compatibility with the base Run class signatures. (#9048)
Fix tracing context propagation to agent steps in workflows
When creating a workflow step from an agent using createStep(myAgent), the tracing context was not being passed to the agent's stream() and streamLegacy() methods. This caused tracing spans to break in the workflow chain.
This fix ensures that tracingContext is properly propagated to both agent.stream() and agent.streamLegacy() calls, matching the behavior of tool steps which already propagate tracingContext correctly. (#9074)
Fixes how reasoning chunks are stored in memory to prevent data loss and ensure they are consolidated as single message parts rather than split into word-level fragments. (#9041)
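The consolidation behavior can be sketched as merging consecutive reasoning fragments into a single part (an illustrative sketch; the `Part` shape is an assumption, not Mastra's message format):

```typescript
interface Part { type: 'reasoning' | 'text'; content: string; }

// Merge adjacent reasoning fragments into one part instead of storing
// word-level chunks as separate parts.
function consolidateReasoning(parts: Part[]): Part[] {
  const out: Part[] = [];
  for (const part of parts) {
    const last = out[out.length - 1];
    if (last && last.type === 'reasoning' && part.type === 'reasoning') {
      last.content += part.content; // append to the previous reasoning part
    } else {
      out.push({ ...part });
    }
  }
  return out;
}
```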
Fixes an issue where input processors couldn't add system or assistant messages. Previously, all messages from input processors were forced to be user messages, causing an error when trying to add other role types. (#8835)
fix(core): Validate structured output at text-end instead of flush
Fixes structured output validation for Bedrock and LMStudio by moving validation from flush() to text-end chunk. Eliminates finishReason heuristics, adds special token extraction for LMStudio, and validates at the correct point in stream lifecycle. (#8934)
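The idea of validating on the text-end chunk rather than at flush can be sketched as below (an illustrative sketch; the chunk shapes and the `validate` callback are assumptions, not Mastra's actual stream types):

```typescript
interface Chunk { type: 'text-delta' | 'text-end' | 'other'; text?: string; }

// Accumulate text deltas and validate the structured output when the
// 'text-end' chunk arrives, the correct point in the stream lifecycle,
// instead of waiting for flush() and guessing from finishReason.
function makeTextEndValidator(validate: (parsed: unknown) => void) {
  let buffer = '';
  return (chunk: Chunk) => {
    if (chunk.type === 'text-delta') {
      buffer += chunk.text ?? '';
    } else if (chunk.type === 'text-end') {
      validate(JSON.parse(buffer));
    }
  };
}
```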
Fix model.loop.test.ts tests to use structuredOutput.schema and add assertions (#8926)
Add initialState as an option to .streamVNext() (#9071)
Added resourceId and runId to workflow_run metadata in AI tracing (#9031)
When using OpenAI models with JSON response format, automatically enable strict schema validation. (#8924)
Fix custom metadata preservation in UIMessages when loading threads. The getMessagesHandler now converts messagesV2 (V2 format with metadata) instead of messages (V1 format without metadata) to AIV5.UI format. Also updates the abstract MastraMemory.query() return type to include messagesV2 for proper type safety. (#8938)
Fix TypeScript type errors when using provider-defined tools from external AI SDK packages.
Agents can now accept provider tools like google.tools.googleSearch() without type errors. Creates new @internal/external-types package to centralize AI SDK type re-exports and adds ProviderDefinedTool structural type to handle tools from different package versions/instances due to TypeScript's module path discrimination. (#8940)
feat(ai-tracing): Add automatic metadata extraction from RuntimeContext to spans
Enables automatic extraction of RuntimeContext values as metadata for AI tracing spans across entire traces.
Key features:
Configure runtimeContextKeys in TracingConfig to extract specific keys from RuntimeContext
Add per-request keys via tracingOptions.runtimeContextKeys for trace-specific additions
Supports dot notation for nested values (e.g., 'user.id', 'session.data.experimentId')
TraceState computed once at root span and inherited by all child spans
Explicit metadata in span options takes precedence over extracted metadata
Example:
const mastra = new Mastra({
observability: {
configs: {
default: {
runtimeContextKeys: ['userId', 'environment', 'tenantId']
}
}
}
});
await agent.generate({
messages,
runtimeContext,
tracingOptions: {
runtimeContextKeys: ['experimentId'] // Adds to configured keys
}
});
(#9072)
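The dot-notation lookup described above can be sketched as a small helper that walks nested values from a RuntimeContext-like map (an illustrative sketch; the function name and the assumption that RuntimeContext behaves like a Map of top-level keys are mine, not Mastra's API):

```typescript
// Resolve a dot-notation path such as 'user.id' or 'session.data.experimentId':
// the first segment is a top-level context key, the rest index into the value.
function extractByPath(ctx: Map<string, unknown>, path: string): unknown {
  const [head, ...rest] = path.split('.');
  let value: unknown = ctx.get(head);
  for (const key of rest) {
    if (value == null || typeof value !== 'object') return undefined;
    value = (value as Record<string, unknown>)[key];
  }
  return value;
}
```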
Fix provider tools for popular providers and add support for anthropic/claude skills. (#9038)
Refactor workflow stream into workflow output with fullStream property (#9048)
Added the ability to use model router configs for embedders (e.g. "openai/text-embedding-ada-002") (#8992)
Always set supportsStructuredOutputs to true for OpenAI-compatible providers. (#8933)
Support custom resume labels that map to the step to be resumed (#8941)
Added tracing of LLM steps and chunks (#9058)
Fixed an issue where a custom URL in model router still validated unknown providers against the known providers list. Custom URL means we don't necessarily know the provider. This allows local providers like Ollama to work properly (#8989)
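The fixed behavior can be sketched as gating provider validation on whether a custom URL is set (an illustrative sketch under my own naming; the config shape and function names are assumptions):

```typescript
// Only validate the provider name against the known-provider list when no
// custom URL is configured: a custom URL implies a provider we don't
// necessarily know, such as a local Ollama instance.
function resolveProvider(
  provider: string,
  known: Set<string>,
  config: { url?: string },
): string {
  const hasCustomUrl = config.url !== undefined;
  if (!hasCustomUrl && !known.has(provider)) {
    throw new Error(`Unknown provider: ${provider}`);
  }
  return provider;
}
```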
Show agent tool output better in playground (#9021)
feat: inject schema context into main agent for processor mode structured output (#8886)
Added providerOptions types to generate/stream for main builtin model router providers (openai/anthropic/google/xai) (#8995)
Generate a title for Agent.network() threads (#8853)
Fix nested workflow events and networks (#9132)
Update provider registry and model documentation with latest models and providers (f743dbb)
Add tool call approval (#8649)
Fix error handling and serialization in agent streaming to ensure errors are consistently exposed and preserved. (#9192)
Updated dependency @lancedb/lancedb@^0.22.2 (from ^0.21.2, in dependencies) (#8693)
Updated dependency @google-cloud/text-to-speech@^6.3.1 (from ^6.3.0, in dependencies) (#8936)