OpenAI Codex April 2026 Update: GPT-5.5, Browser Use, and CLI 0.124–0.125

What shipped in OpenAI Codex during April 2026: GPT-5.5 as the new frontier coding model, in-app browser use, automatic approval reviews, and Codex CLI 0.124.0/0.125.0.

By AI Coding Tools Directory · 2026-04-27 · 5 min read
Last reviewed: 2026-04-27

OpenAI shipped a wave of Codex updates in April 2026, headlined by GPT-5.5, the newest frontier model, landing in Codex as the recommended default for most coding and knowledge-work tasks. The Codex app added in-app browser use for local development servers and automatic approval reviews routed through a reviewer agent. The Codex CLI moved through 0.124.0 and 0.125.0, adding quick reasoning controls and reasoning-token reporting in codex exec --json.

TL;DR

  • GPT-5.5 is OpenAI's newest frontier model and is now selectable in Codex for complex coding, computer use, knowledge work, and research workflows.
  • OpenAI recommends GPT-5.5 for most Codex tasks when visible in the model picker — especially implementation, refactors, debugging, testing, validation, and knowledge-work artifacts.
  • Switch with codex --model gpt-5.5 or /model in the CLI; use the model selector in the IDE extension and Codex app.
  • Browser use in the Codex app drives local dev servers and file-backed pages for visual bug reproduction and fix verification, managed via a bundled Browser plugin with allowed/blocked website lists.
  • Automatic approval reviews route eligible approvals through a reviewer agent before execution.
  • Codex CLI 0.125.0: codex exec --json reports reasoning-token usage.
  • Codex CLI 0.124.0: quick reasoning controls; model upgrades reset reasoning to the model default; eligible ChatGPT plans default to the Fast service tier unless opted out.

Quick Answer

If you use OpenAI Codex, open the model picker and switch to GPT-5.5 for your next implementation, refactor, or debugging task. From the CLI, run codex --model gpt-5.5 or use /model mid-session. If you work against a local dev server, enable browser use in the Codex app to let the agent reproduce visual bugs and verify fixes against localhost.


What Shipped in April 2026

| Update | Where it lives | Notes |
| --- | --- | --- |
| GPT-5.5 in Codex | CLI, IDE extension, Codex app | Newest frontier model; recommended default |
| Browser use | Codex app | Local dev servers and file-backed pages; bundled Browser plugin |
| Automatic approval reviews | Codex app | Reviewer agent gates eligible approvals |
| Codex CLI 0.125.0 | CLI | Reasoning-token usage in codex exec --json |
| Codex CLI 0.124.0 | CLI | Quick reasoning controls; reasoning resets on model upgrade; Fast tier default for eligible ChatGPT plans |

GPT-5.5: When and How to Use It

GPT-5.5 is OpenAI's new frontier model. In Codex, OpenAI calls it out as the right choice for the bulk of day-to-day coding agent work and longer-horizon knowledge tasks:

  • Implementation of new features across multiple files
  • Refactors that need to hold a larger plan in mind
  • Debugging — including the kind that requires reading test failures and reasoning across modules
  • Testing and validation workflows
  • Knowledge-work artifacts like specs, design docs, and migration plans

Switching is the same wherever you run Codex:

  • CLI: codex --model gpt-5.5 to launch a session on the model, or /model inside a running session to switch
  • IDE extension: model selector
  • Codex app: model picker

If GPT-5.5 is not yet visible in your picker, it is rolling out — keep your CLI and extension current.
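In a terminal, the two switching paths from the notes above look like this (note that /model is a slash command typed inside a running Codex session, not in your shell):

```shell
# Start a fresh Codex session pinned to GPT-5.5
codex --model gpt-5.5

# Inside an already-running session, switch models with the slash command:
# /model
```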

Browser Use in the Codex App

The Codex app gained the ability to drive an in-app browser against your local development server and file-backed pages. Practical uses:

  • Visual bug reproduction — describe a UI bug and let Codex open the page and see what you see.
  • Verifying local fixes — after Codex edits a component, it can reload the dev server and confirm the change.
  • Smoke-testing flows that depend on a real DOM or rendered output, not just unit tests.

Browser use is managed through the bundled Browser plugin with allowed and blocked websites so you control where the agent can navigate. For agent workflows that already touch parallel sandboxes, this closes the loop between code edits and what the user actually sees.

Automatic Approval Reviews

Where configured, Codex now routes eligible approval prompts through an automatic reviewer agent before they reach you. The reviewer evaluates the proposed action and surfaces its assessment alongside the approval, so high-confidence operations can clear faster while risky ones still escalate.

Codex CLI: 0.124.0 and 0.125.0

The CLI moved fast in late April:

  • 0.125.0 — codex exec --json now reports reasoning-token usage, which makes it easier to budget runs and tune reasoning effort in scripts and CI.
  • 0.124.0 — adds quick reasoning controls for changing reasoning effort mid-session; model upgrades reset reasoning to the model default so you don't carry a stale setting across versions; and eligible ChatGPT plans default to the Fast service tier unless you opt out.

If you script Codex from CI, the --json reasoning-token field is the upgrade you actually want — it makes cost and latency observable per run.
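As a minimal sketch of what that observability enables, the shell snippet below totals reasoning tokens across JSON event lines. The event shape and the reasoning_tokens field name are assumptions for illustration; inspect the actual codex exec --json output from your CLI version before relying on any field:

```shell
# Hypothetical sample of JSON event lines; in CI you would pipe real
# `codex exec --json` output here instead. The "reasoning_tokens" field
# name is an assumption, not a documented schema.
sample='{"usage":{"reasoning_tokens":1200}}
{"usage":{"reasoning_tokens":800}}'

# Pure-shell extraction (no jq dependency): strip everything up to the
# field name, then strip everything from the first closing brace on.
total=0
while IFS= read -r line; do
  tokens=${line##*\"reasoning_tokens\":}
  tokens=${tokens%%\}*}
  total=$((total + tokens))
done <<EOF
$sample
EOF
echo "reasoning tokens: $total"
```

With real output you would fail the CI job or log a warning when the total crosses a budget you choose.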

How This Compares to Other Agents

| Tool | Newest model (Apr 2026) | Local browser use | Where it runs |
| --- | --- | --- | --- |
| OpenAI Codex | GPT-5.5 | Yes (Codex app) | OpenAI sandboxes + local |
| Claude Code | Claude Opus 4.6 / Sonnet 4.6 | Via tools/MCP | Terminal + cloud |
| Cursor | Multi-model (BYO) | Inside IDE | Local + APIs |

For deeper background see our OpenAI Codex desktop app guide, the AI coding agents explainer, and the OpenAI Codex tool page.


For broader model context, see our Claude Opus 4.6 vs GPT-5.1 Codex Max vs Gemini 3 Pro comparison and the beginner's guide to AI models.


Frequently Asked Questions

Is GPT-5.5 available in Codex?
Yes. GPT-5.5 is OpenAI's newest frontier model and is available in Codex via the CLI, IDE extension, and Codex app model picker. Use `codex --model gpt-5.5` or `/model` in the CLI.
When should I use GPT-5.5 in Codex?
OpenAI recommends GPT-5.5 for most Codex tasks when it appears in the model picker, especially complex coding, refactors, debugging, testing, validation, and knowledge-work artifacts.
What is browser use in the Codex app?
The Codex app can drive an in-app browser against local development servers and file-backed pages, useful for visual bug reproduction and verifying local fixes. It is managed through a bundled Browser plugin with allowed and blocked websites.
What did Codex CLI 0.124.0 and 0.125.0 add?
0.124.0 added quick reasoning controls, made model upgrades reset reasoning to the model default, and defaulted eligible ChatGPT plans to the Fast service tier. 0.125.0 added reasoning-token usage reporting in `codex exec --json`.