Codex shipped substantial infrastructure around plugins, realtime voice, and agent composition over the last 90 days. The plugin system graduated from experimental to first-class: developers can now scaffold, test, and install plugins locally or from marketplaces, with product-scoped sync at startup and clearer auth handling. Realtime voice defaulted to WebRTC v2 with native TUI media support and configurable transport, while MCP support expanded to include resource reads, tool-call metadata, and custom server elicitations. Sub-agents gained path-based addressing such as /root/agent_a with structured messaging, and the app-server layer grew filesystem RPCs and Python SDK bindings to support these flows.

In parallel, model availability shifted: GPT-5.4 and GPT-5.3-Codex launched as the flagship agentic models with 1M context and native computer use, GPT-5.3-Codex-Spark arrived for real-time coding at 1000+ tokens/second, and older model variants were deprecated for ChatGPT sign-in users.

The Codex app itself expanded to Windows with native sandbox support and added thread search, mid-turn steering, conversation forking, and theme customization, while the CLI introduced plugin marketplaces, memory controls with reset and deletion, prompt history via Ctrl+R, and dynamic bearer token refresh for custom model providers.
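For context on the MCP resource reads mentioned above: in the Model Context Protocol, reading a resource is the standard `resources/read` JSON-RPC method, where the client passes the resource's URI. A minimal sketch of building such a request (the helper name and the example `file://` URI are illustrative, not from Codex itself):

```python
import json


def mcp_resources_read_request(request_id: int, uri: str) -> str:
    """Build a JSON-RPC 2.0 request for MCP's `resources/read` method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "resources/read",
        "params": {"uri": uri},
    })


# Example: ask an MCP server for a resource by URI.
req = mcp_resources_read_request(1, "file:///project/README.md")
print(req)
```

The server replies with a `contents` array of text or binary items for that URI; how a client like Codex surfaces those contents to the model is up to the host application.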
The month also brought GPT-5.4 and GPT-5.4 mini, expanding the inference options available to developers. CLI releases moved at a steady cadence from version 0.112 through 0.118, plugin support graduated to a buildable, installable feature, and app updates shipped continuously alongside the model rollouts.