OpenAI Codex April 2026 Update: GPT-5.5, Browser Use, and CLI 0.124–0.125
What shipped in OpenAI Codex during April 2026: GPT-5.5 as the new frontier coding model, in-app browser use, automatic approval reviews, and Codex CLI 0.124.0/0.125.0.
Editorial Team
The AI Coding Tools Directory editorial team researches and reviews AI-powered development tools to help developers find the best solutions for their workflows.
OpenAI shipped a wave of Codex updates in April 2026, headlined by GPT-5.5, the newest frontier model, which lands in Codex as the recommended default for most coding and knowledge-work tasks. The Codex app added in-app browser use for local development servers and automatic approval reviews routed through a reviewer agent, and the Codex CLI moved through 0.124.0 and 0.125.0 with quick reasoning controls and reasoning-token reporting in `codex exec --json`.
TL;DR
- GPT-5.5 is OpenAI's newest frontier model and is now selectable in Codex for complex coding, computer use, knowledge work, and research workflows.
- OpenAI recommends GPT-5.5 for most Codex tasks when visible in the model picker — especially implementation, refactors, debugging, testing, validation, and knowledge-work artifacts.
- Switch with `codex --model gpt-5.5` or `/model` in the CLI; use the model selector in the IDE extension and Codex app.
- Browser use in the Codex app drives local dev servers and file-backed pages for visual bug reproduction and fix verification, managed via a bundled Browser plugin with allowed/blocked website lists.
- Automatic approval reviews route eligible approvals through a reviewer agent before execution.
- Codex CLI 0.125.0: `codex exec --json` reports reasoning-token usage.
- Codex CLI 0.124.0: quick reasoning controls; model upgrades reset reasoning to the model default; eligible ChatGPT plans default to the Fast service tier unless opted out.
Quick Answer
If you use OpenAI Codex, open the model picker and switch to GPT-5.5 for your next implementation, refactor, or debugging task. From the CLI, run `codex --model gpt-5.5` or use `/model` mid-session. If you work against a local dev server, enable browser use in the Codex app to let the agent reproduce visual bugs and verify fixes against localhost.
What Shipped in April 2026
| Update | Where it lives | Notes |
|---|---|---|
| GPT-5.5 in Codex | CLI, IDE extension, Codex app | Newest frontier model; recommended default |
| Browser use | Codex app | Local dev servers and file-backed pages; bundled Browser plugin |
| Automatic approval reviews | Codex app | Reviewer agent gates eligible approvals |
| Codex CLI 0.125.0 | CLI | Reasoning-token usage in `codex exec --json` |
| Codex CLI 0.124.0 | CLI | Quick reasoning controls; reasoning resets on model upgrade; Fast tier default for eligible ChatGPT plans |
GPT-5.5: When and How to Use It
GPT-5.5 is OpenAI's new frontier model. In Codex, OpenAI calls it out as the right choice for the bulk of day-to-day coding agent work and longer-horizon knowledge tasks:
- Implementation of new features across multiple files
- Refactors that need to hold a larger plan in mind
- Debugging — including the kind that requires reading test failures and reasoning across modules
- Testing and validation workflows
- Knowledge-work artifacts like specs, design docs, and migration plans
Switching is the same wherever you run Codex:
- CLI: `codex --model gpt-5.5` to launch a session on the model, or `/model` inside a running session to switch
- IDE extension: model selector
- Codex app: model picker
If GPT-5.5 is not yet visible in your picker, it is rolling out — keep your CLI and extension current.
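If you want a persistent default rather than a per-session flag, the Codex CLI also reads a config file. A minimal sketch, assuming the `model` and `model_reasoning_effort` keys in `~/.codex/config.toml` work as documented for your CLI version; verify the key names against your CLI's docs if they have moved:

```toml
# ~/.codex/config.toml: set the default model so every `codex`
# launch starts on GPT-5.5 without passing --model each time.
model = "gpt-5.5"

# Optional: pin a reasoning effort. Note that model upgrades in
# CLI 0.124.0+ reset reasoning to the model default.
model_reasoning_effort = "medium"
```

The `--model` flag and `/model` command still override this per session.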
Browser Use in the Codex App
The Codex app gained the ability to drive an in-app browser against your local development server and file-backed pages. Practical uses:
- Visual bug reproduction — describe a UI bug and let Codex open the page and see what you see.
- Verifying local fixes — after Codex edits a component, it can reload the dev server and confirm the change.
- Smoke-testing flows that depend on a real DOM or rendered output, not just unit tests.
Browser use is managed through the bundled Browser plugin with allowed and blocked websites so you control where the agent can navigate. For agent workflows that already touch parallel sandboxes, this closes the loop between code edits and what the user actually sees.
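The changelog doesn't publish the Browser plugin's configuration schema, but the gating logic an allowed/blocked website list implies can be sketched as follows. This is a hypothetical illustration of the policy idea, not the plugin's actual code; the host lists and `may_navigate` helper are invented for the example:

```python
from urllib.parse import urlparse

# Hypothetical allow/block policy illustrating how browser navigation
# can be constrained to local dev targets. The real plugin's schema
# and matching rules may differ.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}
BLOCKED_HOSTS = {"tracking.example.com"}

def may_navigate(url: str) -> bool:
    """Return True if the agent may open this URL."""
    parsed = urlparse(url)
    if parsed.scheme == "file":
        return True  # file-backed pages are in scope for browser use
    host = parsed.hostname or ""
    if host in BLOCKED_HOSTS:
        return False  # explicit blocks always win
    return host in ALLOWED_HOSTS  # default-deny everything else

print(may_navigate("http://localhost:3000/app"))  # True
print(may_navigate("https://example.com"))        # False
```

Default-deny with an explicit block list, as sketched here, keeps the agent scoped to the dev server even if a page it renders links elsewhere.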
Automatic Approval Reviews
Where configured, Codex now routes eligible approval prompts through an automatic reviewer agent before they reach you. The reviewer evaluates the proposed action and surfaces its assessment alongside the approval, so high-confidence operations can clear faster while risky ones still escalate.
Codex CLI: 0.124.0 and 0.125.0
The CLI moved fast in late April:
- 0.125.0: `codex exec --json` now reports reasoning-token usage, which makes it easier to budget runs and tune reasoning effort in scripts and CI.
- 0.124.0: adds quick reasoning controls for changing reasoning effort mid-session; model upgrades reset reasoning to the model default so you don't carry a stale setting across versions; and eligible ChatGPT plans default to the Fast service tier unless you opt out.
If you script Codex from CI, the `--json` reasoning-token field is the upgrade you actually want: it makes cost and latency observable per run.
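As a sketch of what that CI-side accounting could look like: the event shape and field names below (`usage`, `reasoning_tokens`) are illustrative assumptions, not the documented `codex exec --json` schema, so inspect the actual events your CLI version emits before relying on them:

```python
import json

def total_reasoning_tokens(jsonl: str) -> int:
    """Sum reasoning-token counts across a JSON Lines event stream.

    Field names here are assumptions for illustration; check the real
    output of `codex exec --json` on your CLI version.
    """
    total = 0
    for line in jsonl.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines between events
        event = json.loads(line)
        usage = event.get("usage") or {}
        total += usage.get("reasoning_tokens", 0)
    return total

# Example stream with hypothetical usage events:
stream = """
{"type": "turn.completed", "usage": {"input_tokens": 900, "output_tokens": 120, "reasoning_tokens": 512}}
{"type": "turn.completed", "usage": {"output_tokens": 80, "reasoning_tokens": 256}}
"""
print(total_reasoning_tokens(stream))  # 768
```

Piping `codex exec --json` output through a small reducer like this lets a CI job fail fast when a run's reasoning budget blows past a threshold.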
How This Compares to Other Agents
| Tool | Newest model (Apr 2026) | Local browser use | Where it runs |
|---|---|---|---|
| OpenAI Codex | GPT-5.5 | Yes (Codex app) | OpenAI sandboxes + local |
| Claude Code | Claude Opus 4.6 / Sonnet 4.6 | Via tools/MCP | Terminal + cloud |
| Cursor | Multi-model (BYO) | Inside IDE | Local + APIs |
For deeper background see our OpenAI Codex desktop app guide, the AI coding agents explainer, and the OpenAI Codex tool page.
Sources
- OpenAI Codex changelog (official): developers.openai.com/codex/changelog
- OpenAI Codex product page: developers.openai.com/codex
- OpenAI Codex CLI: github.com/openai/codex
For broader model context, see our Claude Opus 4.6 vs GPT-5.1 Codex Max vs Gemini 3 Pro comparison and the beginner's guide to AI models.
Tools Mentioned in This Article
- Claude Code (Subscription): Anthropic's terminal-based AI coding agent with /ultraplan, Monitor tool, /autofix-pr, Agent Teams, and 80.9% SWE-bench
- Claude Opus 4.6 (Pay-per-use): Anthropic's frontier reasoning model with an 80.9% SWE-bench record, 1M token beta context, and adaptive thinking
- Cursor (Freemium): The AI-native code editor with $1B+ ARR, 25+ models, and background agents on dedicated VMs
- GPT-5 (Pay-per-use): OpenAI's first unified reasoning model with 70.1% SWE-bench, 400K context, and $1.25/$10 per MTok
- OpenAI Codex (Freemium): Cloud coding agent with GPT-5.5 frontier model, 1M+ developers, Desktop App, in-app browser use, and parallel sandboxed environments
Frequently Asked Questions
Is GPT-5.5 available in Codex?
Yes. GPT-5.5 is rolling out across the Codex CLI, IDE extension, and Codex app. If it is not yet in your model picker, update your CLI and extension.
When should I use GPT-5.5 in Codex?
OpenAI recommends it for most tasks: implementation, refactors, debugging, testing, validation, and knowledge-work artifacts like specs and design docs.
What is browser use in the Codex app?
An in-app browser the agent can drive against local dev servers and file-backed pages to reproduce visual bugs and verify fixes, governed by the bundled Browser plugin's allowed and blocked website lists.
What did Codex CLI 0.124.0 and 0.125.0 add?
0.124.0 added quick reasoning controls, reasoning resets on model upgrade, and a Fast service tier default for eligible ChatGPT plans; 0.125.0 added reasoning-token usage reporting in `codex exec --json`.
Related Articles
- Claude Code Week 15 (April 6–10, 2026): /ultraplan, Monitor Tool, /autofix-pr, /team-onboarding. Anthropic's Claude Code Week 15 update added /ultraplan cloud planning, the Monitor tool with self-pacing /loop, /autofix-pr from the terminal, and /team-onboarding.
- Claude Opus 4.5 Released: What Shipped and When to Use It (Dec 2025). A concise record of Anthropic's Claude Opus 4.5 release: pricing, model ID, availability, capabilities, and how it compared to Sonnet 4.5 at launch.
- Windsurf vs Cursor: Which AI IDE in 2026? A practical comparison of Windsurf and Cursor in 2026: pricing, Cascade vs Composer workflows, credit systems, and when to choose each AI IDE.