#10527 3408008 Thanks @pawel-twardziak! - fix(langchain): export createAgent and prebuilt middleware from browser entry point
#10545 68e0a19 Thanks @JadenKim-dev! - fix(langchain): revert zod import in utils.ts to fix v3/v4 interop
Updated dependencies [6933769, 50d5f32, 5552999, 8331833]:
#10475 3d35eb1 Thanks @hntrl! - fix(langchain): add "aws" alias to MODEL_PROVIDER_CONFIG so hub/node auto-detects ChatBedrockConverse from Python-serialized prompts
#10258 ae4122f Thanks @irfiacre! - Align Zod imports for libs/langchain/src/agents/nodes/utils.ts
Updated dependencies [bbbfea1]:
#10481 478652c Thanks @hnustwjj! - fix(openai): detect DeepSeek context overflow errors as ContextOverflowError
DeepSeek returns "maximum context length" in 400 error messages when the context limit is exceeded. These messages are now recognized by wrapOpenAIClientError, so downstream code (e.g. the summarization middleware fallback) can handle them correctly.
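The detection can be sketched roughly as follows. This is a minimal illustration, not the actual wrapOpenAIClientError implementation; the helper name and the exact message pattern are assumptions.

```typescript
// Hypothetical sketch: classify a provider 400 error as a context overflow
// when its message mentions the maximum context length being exceeded.
class ContextOverflowError extends Error {}

function classifyOpenAIError(status: number, message: string): Error {
  // DeepSeek reports overflow as a 400 whose message contains
  // "maximum context length"; surface that as a ContextOverflowError.
  if (status === 400 && /maximum context length/i.test(message)) {
    return new ContextOverflowError(message);
  }
  return new Error(message);
}
```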
#10507 52e501b Thanks @App-arently! - fix(openai): guard JSON.parse in streaming json_schema when text is empty
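A guard of this shape (illustrative only; the function name is hypothetical) avoids JSON.parse throwing a SyntaxError on an empty accumulated buffer:

```typescript
// Hypothetical sketch: only parse the accumulated streaming text once it is
// non-empty; an empty or whitespace-only buffer returns undefined instead of
// throwing from JSON.parse.
function parseStreamedJson(text: string): unknown | undefined {
  if (!text || text.trim() === "") {
    return undefined; // nothing streamed yet
  }
  return JSON.parse(text);
}
```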
#10512 bbbfea1 Thanks @hntrl! - fix(core): fix streaming chunk merge for providers without index on tool call deltas
_mergeLists now falls back to id-based matching when items don't have an index field. Previously, providers routing through the OpenAI-compatible API without index on streaming tool call deltas (e.g. Anthropic models via ChatOpenAI) would accumulate hundreds of individual raw deltas in tool_call_chunks and additional_kwargs.tool_calls instead of merging them into a single entry per tool call. In a real trace with 3 concurrent subagents, this caused a single AI message to balloon from ~4KB to 146KB -- with 826 uncollapsed streaming fragments carrying a few bytes each.
Also fixes SystemMessage.concat() which used ...this to spread all instance properties (including lc_kwargs) into the new constructor, causing each chained concat() call to nest one level deeper. After 7 middleware concat() calls (typical in deepagents), a 7KB system prompt would serialize to 81KB due to content being duplicated at every nesting level.
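The id-based fallback can be pictured as follows. This is a simplified sketch, not the actual _mergeLists code; the chunk shape is reduced for illustration, and it assumes each delta carries the tool call id.

```typescript
// Simplified sketch of merging streaming tool call chunks: match on index
// when present, and fall back to id-based matching when index is missing,
// so deltas for the same tool call collapse into one entry.
interface ToolCallChunk {
  id?: string;
  index?: number;
  args: string; // partial JSON fragment
}

function mergeChunks(chunks: ToolCallChunk[]): ToolCallChunk[] {
  const merged: ToolCallChunk[] = [];
  for (const chunk of chunks) {
    const match = merged.find((m) =>
      chunk.index !== undefined
        ? m.index === chunk.index
        : m.id !== undefined && m.id === chunk.id
    );
    if (match) {
      match.args += chunk.args; // concatenate the streamed fragments
    } else {
      merged.push({ ...chunk });
    }
  }
  return merged;
}
```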
#10327 5dc11b5 Thanks @hntrl! - fix(core): replace exported zod type references with structural duck-type interfaces to fix TypeScript OOM
Replaces all exported Zod type references (z3.ZodType, z4.$ZodType, etc.) in @langchain/core's public API with minimal structural ("duck-type") interfaces. This prevents TypeScript from performing expensive deep structural comparisons (~3,400+ lines of mutually recursive generics) when downstream packages resolve a different Zod version than @langchain/core, which was causing OOM crashes and unresponsive language servers in monorepo setups.
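The idea, in reduced form (a hypothetical interface, not the actual shape used in @langchain/core's public API):

```typescript
// Reduced sketch of a structural "duck type" for a Zod schema: a public
// signature that only requires a parse method. TypeScript then checks one
// method signature instead of comparing Zod's full recursive generic graph
// across potentially mismatched Zod versions.
interface ZodLikeSchema<T = unknown> {
  parse(data: unknown): T;
}

function validateInput<T>(schema: ZodLikeSchema<T>, input: unknown): T {
  return schema.parse(input);
}
```

Any real Zod schema satisfies the interface structurally, so callers pass their own Zod version without the library's types ever naming z.ZodType.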
#10433 7af0b65 Thanks @tanushree-sharma! - feat: Add LangSmith integration metadata to createAgent and initChatModel
#10489 21094f3 Thanks @maahir30! - support structured output (providerStrategy) for Google Gemini models in createAgent
Updated dependencies [5dc11b5, 7af0b65]:
#10471 8f15dd1 Thanks @pawel-twardziak! - fix(@langchain/google): pass abort signal to fetch in non-streaming invoke
Passes signal: options.signal to the Request constructor in _generate's non-streaming branch, mirroring what _streamResponseChunks already does.

#10493 63b7268 Thanks @afirstenberg! - Undo regression introduced in #10397 in legacy content processing path.
Fixes issues with a false duplicate functionCall sent in response (#10474).
bfb7944 Thanks @jacoblee93! - feat(core): Add all invocation params as part of metadata

#10443 ff6822e Thanks @christian-bromann! - fix(langchain): respect version: "v1" in afterModel router's pending tool call path
#10446 888224c Thanks @hntrl! - fix(agents): propagate store and configurable to ToolNode middleware runtime
#10444 82d56cb Thanks @christian-bromann! - fix(langchain/agents): dispatch tool calls via Send in afterModel router for version:"v2"
Breaking change for version: "v2" + afterModel middleware users.
Previously, when afterModel middleware was present, createAgent always routed all tool calls from an AIMessage to a single ToolNode invocation — regardless of the version option. This meant version: "v2" silently behaved like version: "v1" (parallel via Promise.all in one node) whenever afterModel middleware was used.
#createAfterModelRouter now correctly respects #toolBehaviorVersion:
version: "v1" — routes the full AIMessage to a single ToolNode invocation; all tool calls run concurrently via Promise.all (unchanged behaviour).

version: "v2" — dispatches each tool call as a separate Send task, matching the behaviour of #createModelRouter when no afterModel middleware is present, and matching Python LangGraph's post_model_hook_router.

Migration: If you use version: "v2" (the default) together with afterModel middleware and rely on the previous single-node parallel execution, switch to version: "v1" to preserve that behaviour. See the version JSDoc on CreateAgentParams for guidance on which option to choose.
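The routing difference can be sketched like so. Types are simplified and hypothetical; the real router returns LangGraph Send objects rather than the plain records used here.

```typescript
// Hedged sketch of the v1 vs v2 routing decision in the afterModel router.
// "Send" here stands in for LangGraph's Send task.
interface ToolCall { id: string; name: string; }
interface Send { node: string; toolCalls: ToolCall[]; }

function routeToolCalls(toolCalls: ToolCall[], version: "v1" | "v2"): Send[] {
  if (version === "v1") {
    // One ToolNode invocation; the calls run concurrently inside it.
    return [{ node: "tools", toolCalls }];
  }
  // v2: one Send task per tool call, matching the no-middleware router.
  return toolCalls.map((tc) => ({ node: "tools", toolCalls: [tc] }));
}
```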