Today, we're releasing a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex and our first model designed for real-time coding. Codex-Spark is optimized to feel near-instant, delivering more than 1000 tokens per second while remaining highly capable for real-world coding tasks.
Codex-Spark is available in research preview for ChatGPT Pro users in the latest Codex app, CLI, and IDE extension. This release also marks the first milestone in our partnership with Cerebras.
At launch, Codex-Spark is text-only with a 128k context window. During the research preview, usage has separate model-specific limits and doesn't count against standard Codex limits. During high demand, access may slow down or queue while we balance reliability across users.
To switch to GPT-5.3-Codex-Spark:
codex --model gpt-5.3-codex-spark
Or use /model during a session.

If you don't see GPT-5.3-Codex-Spark yet, update the CLI, IDE extension, or Codex app to the latest version.
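If you want every new session to start with this model, you can also pin it in your Codex configuration instead of passing --model on each run. A minimal sketch, assuming the standard ~/.codex/config.toml location:

```toml
# ~/.codex/config.toml
# Use GPT-5.3-Codex-Spark as the default model for new sessions.
model = "gpt-5.3-codex-spark"
```

A per-invocation --model flag still takes precedence over the config file for that run.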
GPT-5.3-Codex-Spark isn't available in the API at launch.
For API-key workflows, continue using gpt-5.2-codex.
Alpha testing for the Codex app on Windows is also starting. Sign up here to be a potential alpha tester.
Starting today, GPT-5.3-Codex is available natively in Cursor and VS Code.
API access is starting with a small set of customers as part of a phased release.
This is the first model treated as reaching High capability for security under the Preparedness Framework.
Safety controls will continue to scale, and API access will expand over the next few weeks.
Today we're releasing GPT-5.3-Codex, the most capable agentic coding model to date for complex, real-world software engineering.
GPT-5.3-Codex combines the frontier coding performance of GPT-5.2-Codex with stronger reasoning and professional knowledge capabilities, and runs 25% faster for Codex users. It's also better at collaboration while the agent is working—delivering more frequent progress updates and responding to steering in real time.
GPT-5.3-Codex is available with paid ChatGPT plans everywhere you can use Codex: the Codex app, the CLI, the IDE extension, and Codex Cloud on the web. API access for the model will come soon.
To switch to GPT-5.3-Codex:
codex --model gpt-5.3-codex
Or use /model during a session.

For API-key workflows, continue using gpt-5.2-codex while API support rolls out.
The Codex app for macOS is a desktop interface for running agent threads in parallel and collaborating with agents on long-running tasks. It includes a project sidebar, thread list, and review pane for tracking work across projects.
Key features:
For a limited time, ChatGPT Free and Go plans include Codex, and Plus, Pro, Business, Enterprise, and Edu plans get double rate limits. Those higher limits apply in the app, the CLI, your IDE, and the cloud.
Learn more in the Introducing the Codex app blog post.
Check out the Codex app documentation for more.
Codex now enables web search for local tasks in the Codex CLI and IDE Extension.
By default, Codex uses a web search cache, which is an OpenAI-maintained index of web results. Cached mode returns pre-indexed results instead of fetching live pages, while live mode fetches the most recent data from the web. If you are using --yolo or another full-access sandbox setting, web search defaults to live results. To disable web search or switch modes, set the web_search configuration option:
web_search = "cached" (default; serves results from the web search cache)
web_search = "live" (fetches the most recent data from the web; same as --search)
web_search = "disabled" (removes the tool)

To learn more, check out the configuration documentation.
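For example, to always fetch live results regardless of sandbox mode, set the option in your Codex config (a minimal sketch, assuming the standard ~/.codex/config.toml location):

```toml
# ~/.codex/config.toml
# Always fetch live web results instead of serving from the cached index.
web_search = "live"
```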
Team Config groups the files teams use to standardize Codex across repositories and machines. Use it to share:
config.toml for shared defaults
rules/ for command controls outside the sandbox
skills/ for reusable workflows

Codex loads these layers from .codex/ folders in the current working directory, parent folders, and the repo root, plus user (~/.codex/) and system (/etc/codex/) locations. Higher-precedence locations override lower-precedence ones.
Admins can still enforce constraints with requirements.toml, which overrides defaults regardless of location.
Learn more in Team Config.
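As an illustration of the layering described above, a repository using Team Config might look like this (a hypothetical layout; everything except config.toml, rules/, skills/, and requirements.toml is a placeholder):

```
repo/
  .codex/
    config.toml        # repo-wide defaults, checked into version control
    rules/             # command controls outside the sandbox
    skills/            # reusable workflows
  src/

~/.codex/
  config.toml          # personal defaults, merged with the repo layers

/etc/codex/
  requirements.toml    # admin-enforced constraints; override defaults everywhere
```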
Custom prompts are now deprecated. Use skills for reusable instructions and workflows instead.
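If you're migrating a custom prompt, a skill is a folder of instructions that Codex can load on demand. A minimal sketch, assuming a skill lives under .codex/skills/&lt;name&gt;/ with a SKILL.md entry point carrying name and description metadata (the release-notes skill here is hypothetical):

```markdown
---
name: release-notes
description: Draft release notes from merged PRs since the last tag.
---

Collect the commits since the most recent git tag, group them by area,
and produce a short changelog entry in the repository's existing style.
```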
GPT-5.2-Codex is now available in the API and for users who sign in to Codex with an API key.
To learn more about using GPT-5.2-Codex, check out our API documentation.