[Fixed] chatgpt-4o-latest broken with responses API

[Edit: This seems fixed now.]

The following breaks with an absolutely unfathomable error message when the model is chatgpt-4o-latest, but works with other models, e.g. gpt-4o:

import json

import requests

api_key = "REDACTED"
url = "https://api.openai.com/v1/responses"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
data = {
    "model": "chatgpt-4o-latest",
    "input": "Hello! This is just a test of the new API.",
}
response = requests.post(url, headers=headers, json=data)
print(json.dumps(response.json(), indent=4))

Oh, I checked with the official SDK; same issue.

If you switch the playground on the API site from Chat Completions to Responses, you'll notice that chatgpt-4o-latest disappears from the model list.

I'd link to the model's page on the new model features site in the documentation, where it would presumably show Chat Completions support only, but the model is missing there entirely. That also means you can't see the very small usage limit it's given for experimenting.

Until they block unsupported model names (as they do for audio models without modality support) or rewrite things so that internal context inputs are sent properly, this makes a good test of your “error” and “response.failed” event handling…
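For the streaming case, that handling boils down to dispatching on the event's type field. A minimal sketch, assuming server-sent events already decoded into dicts with a "type" key ("error" and "response.failed" are the two event types mentioned above; everything else is treated as normal output):

```python
def classify_event(event: dict) -> str:
    """Bucket a decoded Responses API streaming event for error handling."""
    etype = event.get("type", "")
    if etype == "error":
        return "error"   # transport/API-level error event
    if etype == "response.failed":
        return "failed"  # the response object itself reports failure
    return "ok"          # any other event: pass through to normal handling
```

A consumer loop can then branch on the returned bucket and surface failures instead of silently swallowing them.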

Yeah, it seems like they are not really supporting the chatgpt-4o-latest model as much. I’ve consistently noticed that, even with the Chat Completions API, it’s not given the same capabilities as their ChatGPT frontend.

  • It’s nowhere to be seen on their new models page.
  • In pricing, it is not listed with gpt-4o; it is buried at the very bottom of the page under “Other Models” alongside gpt-4-turbo, which came out in November 2023 and is similarly less supported than gpt-4o.
  • It randomly doesn’t support API features, with no documentation saying so. For example, it cannot use tool calling in the Chat Completions API. Why are they not allowing the “latest gpt-4o used in ChatGPT”, which can clearly use tools on their frontend, to use tools via the API?
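To illustrate the tool-calling point: the payload below is the standard Chat Completions tools request shape that works with gpt-4o but, per the above, is rejected for chatgpt-4o-latest. A hedged sketch (the weather tool is a made-up example; only the payload is built here, no request is sent):

```python
def build_tool_request(model: str) -> dict:
    """Build a Chat Completions payload with one function tool attached."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }
```

POSTing this to /v1/chat/completions with model="gpt-4o" succeeds, while swapping in "chatgpt-4o-latest" returns an error, which is exactly the undocumented gap being described.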

It’s a shame, because it is a newer version of their gpt-4o model that they even announced on their releases page as having a later knowledge cutoff (November 2023 → June 2024), better vision performance, and better performance on STEM questions. As of today, this specific snapshot is ranked 3rd on the Chatbot Arena (formerly LMSYS), while the current gpt-4o snapshot, gpt-4o-2024-08-06, is ranked 31st.

I wish they’d just release a new gpt-4o snapshot to match the latest release.

This seems to have been fixed, I can get chatgpt-4o-latest via the responses API now.