GPT-5.2 is rolling out right now!

Introducing GPT-5.2

The most advanced frontier model for professional work and long-running agents.

GPT-5.2 brings stronger performance on complex, multi-step tasks. It is better at building spreadsheets and presentations, writing code, interpreting images, and working with long contexts.

Agentic coding takes a major step forward, making GPT-5.2 the leading model in its price range and the new default for tools like Windsurf.

The Thinking variant is more reliable, with about 30 percent fewer factual errors. It hallucinates less, which makes it more dependable for research and analysis.

Long-context reasoning reaches a new high. GPT-5.2 nearly solves the 4-needle MRCR benchmark and clearly outperforms GPT-5.1 when analyzing very long documents.

Edit: benchmark results:

20 Likes

Missing -mini price on https://platform.openai.com/docs/pricing
The price is listed only for 5.2; there is no entry for 5.2-mini.

1 Like

5.2-mini is not yet available but will be soon!

1 Like

A few more interesting details:

2 Likes

Just added the benchmark results to the top post.

Impressive.

1 Like

Cherry on top:

Little Shipmas confirmed!

5 Likes

We are looking forward to using GPT-5.3.

Can GPT-5.2 output images???

1 Like

Maybe their latest flagship model (gpt-5.2) gets released to the general public (free account users) on Dec 25th?

Edit: never mind, for free users gpt-5.2 arrives tomorrow:

2 Likes


okay… has been fixed

1 Like

API models are out, including gpt-5.2-pro-2025-12-11 and gpt-5.2-chat-latest.

40% price increase:

reasoning.effort supported values:
`none`, `low`, `medium`, `high`, and `xhigh`
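Those effort values can be exercised by assembling a Responses API request body by hand. A minimal sketch, assuming the public Responses payload shape; the model name and effort list are taken from this thread:

```python
# Minimal sketch: validate a reasoning effort before building a gpt-5.2
# Responses API request body. The payload shape assumes the public
# Responses API; the effort list is the one reported above.

SUPPORTED_EFFORTS = ("none", "low", "medium", "high", "xhigh")

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Return a request body dict with a validated reasoning effort."""
    if effort not in SUPPORTED_EFFORTS:
        raise ValueError(f"unsupported reasoning.effort: {effort!r}")
    return {
        "model": "gpt-5.2",
        "input": prompt,
        "reasoning": {"effort": effort},
    }

body = build_request("Summarize this thread.", effort="xhigh")
```

Validating locally avoids burning a (now pricier) API call on a request the server would reject anyway.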

There is no API facility for telling the model what output you want to receive, nor any output events other than image tool use. So it cannot output images.

While one AMA response indicated that the gpt-5.1 model can generate images … so can gpt-4o. That modality is not exposed except through a specially trained model, gpt-image-1, completely wrapped in tools, image endpoints, and safety.

GPT-5.2 - More vision token billing issues.

  • detail:low not working on any endpoint
  • different billing between Chat Completions and Responses
| model (detail: low)  | Chat Completions | Responses |
|----------------------|------------------|-----------|
| gpt-5-2025-08-07     | 70               | 353       |
| gpt-5.1-2025-11-13   | 70               | 353       |
| gpt-5.2-2025-12-11   | 273              | 327       |

(prior model overbilling on Responses still happening above)

| model (detail: high) | Chat Completions | Responses |
|----------------------|------------------|-----------|
| gpt-5-2025-08-07     | 350              | 350       |
| gpt-5.1-2025-11-13   | 350              | 350       |
| gpt-5.2-2025-12-11   | 273              | 327       |

Likely cause

  • Using a “patches” algorithm, not the “tiles” algorithm of every non-mini model before
  • Using a 1.2x cost multiplier on Responses
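Both hypotheses can be sketched numerically. The 32-pixel patch size and 1536-patch cap below are assumptions carried over from the documented mini-model accounting; whether gpt-5.2 uses exactly these constants is this thread's guess, not anything official:

```python
import math

# Hypothetical "patches" image-token accounting (32x32-pixel patches,
# capped at 1536 patches), as documented for the mini models. Applying
# it to gpt-5.2 is an assumption, not a confirmed spec.
PATCH = 32
MAX_PATCHES = 1536

def patch_tokens(width: int, height: int) -> int:
    """Token count for an image under the assumed patch scheme."""
    patches = math.ceil(width / PATCH) * math.ceil(height / PATCH)
    if patches <= MAX_PATCHES:
        return patches
    # Oversized images are scaled down to fit the cap (simplified here).
    scale = math.sqrt(MAX_PATCHES * PATCH * PATCH / (width * height))
    w = math.ceil(width * scale / PATCH)
    h = math.ceil(height * scale / PATCH)
    return min(w * h, MAX_PATCHES)

def responses_tokens(chat_tokens: int) -> int:
    """Hypothesized 1.2x Responses multiplier, truncated to whole tokens."""
    return int(chat_tokens * 1.2)
```

Notably, applying the hypothesized multiplier to the observed 273-token Chat Completions figure gives 327, which is consistent with the Responses column measured above.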

Demonstration of “patches” symptom

  • gpt-5.2 cannot see a patches-aligned checkerboard image

gpt-5.1

It’s a simple black-and-white checkerboard pattern: a grid of equally sized squares alternating between black and white in both rows and columns.

gpt-5.2

The image appears to be completely black—no visible objects, text, or details.
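The symptom is easy to reproduce synthetically. The sketch below builds a checkerboard whose squares align exactly with (assumed) 32-pixel patches; every patch is then one solid color, so any model feature that pools per patch sees zero intra-patch contrast:

```python
PATCH = 32  # assumed patch size, per the mini-model documentation

def checkerboard(cells: int) -> list[list[int]]:
    """A cells x cells checkerboard with one PATCH-sized solid square per cell."""
    size = cells * PATCH
    return [
        [255 if ((x // PATCH) + (y // PATCH)) % 2 == 0 else 0
         for x in range(size)]
        for y in range(size)
    ]

def patch_values(img, px, py):
    """Set of distinct pixel values inside patch (px, py)."""
    return {img[py * PATCH + j][px * PATCH + i]
            for j in range(PATCH) for i in range(PATCH)}

img = checkerboard(8)
# Every patch is a single solid color: no contrast survives per-patch pooling.
uniform = all(len(patch_values(img, px, py)) == 1
              for px in range(8) for py in range(8))
```

If the model only retains a coarse per-patch summary, this image is indistinguishable from a flat field, which would explain the “completely black” answer above.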

Documentation

FALSE ADVERTISING

has been fixed

40% price increase

Ugh… I know - credit card just got hit for testing gpt-5.2 for a couple of hours today.

1 Like

So… it really was garlic :garlic:!!

1 Like

I have a question about GPT-5.2 Pro on the official website. The UI only shows two reasoning/thinking options (e.g., “Thinking time: Standard” and “Extended”). Which API reasoning.effort levels do these correspond to: medium + high, or high + xhigh?

At the moment I can choose between all four thinking modes for GPT-5.2 Pro on the website.


1 Like

Turn off adblock and any custom blocks that refuse a bunch of feature gates and tracking, then hard-refresh. That results in the clear “Pro” designation seen in the ChatGPT message input area, as reported.

You also might get a terrible non-working modal popup for thinking when the model starts creating a response, blocking the entire UI.

API reasoning effort values supported for gpt-5.2-pro, with their internal map, are ‘medium:64’, ‘high:256’, and ‘xhigh:768’. These values are not affected by pro vs non-pro.
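That reported mapping can be captured as a small guard so an unsupported effort fails fast. The numeric values are the poster's observed internal map, not official documentation:

```python
# Reported reasoning-effort map for gpt-5.2-pro (observed, unofficial).
# Note: 'none' and 'low' are not in the supported set for the pro model.
PRO_EFFORT_MAP = {"medium": 64, "high": 256, "xhigh": 768}

def pro_effort_value(effort: str) -> int:
    """Return the observed internal value, or raise for unsupported efforts."""
    if effort not in PRO_EFFORT_MAP:
        raise ValueError(
            f"gpt-5.2-pro supports {sorted(PRO_EFFORT_MAP)}, got {effort!r}"
        )
    return PRO_EFFORT_MAP[effort]
```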

service_tier: “flex” is not supported.

What these correspond to in ChatGPT is not worth asking (“cough” 512/768), as OpenAI can make ChatGPT, test-time compute, and the model itself dynamic in order to respond to computation needs.

Thanks a lot!
Is there any difference between “extended” and “heavy”?

1 Like

I cannot say for certain, as the available options differ across my devices and browsers. This could be due to A/B testing, the rollout itself, or a browser-related issue, as mentioned above.

If I were to test this, I would likely send a few requests via the API using different settings and compare the thinking time with ChatGPT. That said, I cannot make a definitive statement at this point.
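That comparison could be scripted roughly as follows. `call_model` here is a hypothetical stand-in for a real Responses API request; no actual endpoint is invoked:

```python
import time

def call_model(effort: str) -> None:
    # Hypothetical placeholder: substitute a real API request per effort level.
    time.sleep(0.01)

def time_efforts(efforts):
    """Wall-clock each effort level for comparison with ChatGPT's shown thinking time."""
    timings = {}
    for effort in efforts:
        start = time.perf_counter()
        call_model(effort)
        timings[effort] = time.perf_counter() - start
    return timings

timings = time_efforts(["medium", "high", "xhigh"])
```

A single run per effort is noisy; averaging several requests per setting before comparing against the ChatGPT UI would give a more defensible answer.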

1 Like