Announcing GPT-4o in the API!

I’m curious, the gpt-4-turbo model has a training cutoff of Dec '23, but the new model is Oct '23 - why would this be?

That being said, the new model seems to handle queries about its cutoff better, though it still occasionally tells me Apr '21 is the cutoff (for some reason, providing the current date (month and year) helps, resulting in fewer erroneous outputs).


There might be a lag between model development and deployment. Oct '23 model likely refers to training completion, while Dec '23 reflects actual release with updated knowledge.


Yes, they have to compile the knowledge base into the language model itself. The knowledge base is not something external the model accesses; it is the model. So if it takes time to finish and test a language model (for example, making sure it is censored enough that users can't circumvent the safeguards and get the company in trouble), then there will be a gap between when the training data was compiled and when the model can be released. 4o just seems to have taken longer because it has new functionality, as opposed to being a retrained version of something already tested with more data added.

I find the recent model sufficiently up to date. As long as the cutoff is in 2023, it's useful enough. The 2021 cutoff felt too far back, because the world has changed since then.

But since every new model becomes more censored and has reduced “copyrighted” training data, in a sense the newer models have more impoverished knowledge bases.


Well, in terms of pricing… that’s an impressive move.

For now we are on hold…

There is still no way to increase speech speed when ChatGPT reads aloud text. Is this ever going to be released? It’s relatively slow.

Isn’t ChatGPT super slow right now?


I’ve visited this site many times but it failed. Is it truly accessible?

This link works fine for me?


Using the ChatGPT GPT-4o model, as this topic would indicate, I generated the max of 2047 tokens (before “continue” appears, a bump up from the prior 1536) in 45 and then 40 seconds: about 45–50 tokens/s. Seems pretty good.
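The arithmetic checks out; a quick sketch using the token count and wall times quoted above:

```python
# Throughput from the figures above: 2047 tokens in 45 s, then in 40 s.
tokens = 2047
for seconds in (45, 40):
    print(f"{tokens / seconds:.1f} tokens/s")
# ~45.5 and ~51.2 tokens/s, in line with the quoted 45-50 tokens/s
```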

Curious behavior: the AI didn’t need a “continue” when it started looping the same character halfway through; perhaps the content moderator that emits big chunks was supervising. It auto-restarted the task within the same response. There was also a premature ending, as one might have expected from past models.


This GPT-4o model seems to generate better responses with fewer computational resources, possibly due to the efficiency of its tokenizer.
I haven’t tested its responses in non-English languages enough yet.

At the very least, it’s clear that the faster response generation doesn’t mean the generated text is inferior in quality.


How is the model performing for all of you? Has anyone tried accessing it through the API? I tried a simple storytelling prompt through their playground and gpt-4o was amazing! But when I try it from Google Colab, here are the times for my story (to be fair, I am asking for a 700-word story):

  1. gpt-3.5-turbo: 15–16 s (over multiple runs)
  2. gpt-4-turbo: 55–65 s
  3. gpt-4o: 66–75 s
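For what it’s worth, here is a minimal sketch of how one might collect such timings from a script. The `time_call` helper is generic; the commented usage assumes the official `openai` Python SDK (v1+) with the model names from the list above.

```python
import time

def time_call(fn, *args, **kwargs):
    """Time a single call; returns (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage against the Chat Completions API:
# from openai import OpenAI
# client = OpenAI()
# for model in ("gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o"):
#     _, secs = time_call(
#         client.chat.completions.create,
#         model=model,
#         messages=[{"role": "user", "content": "Tell me a 700-word story."}],
#     )
#     print(model, f"{secs:.1f}s")
```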

This sounds promising.

"Developers can also now access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo.

We plan to launch support for GPT-4o’s new audio and video capabilities to a small group of trusted partners in the API in the coming weeks."
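The text-and-vision part quoted above can already be exercised through Chat Completions. A minimal sketch, assuming the `openai` Python SDK (v1+); the helper just builds the documented mixed text/image message shape, and the URL in the usage comment is a placeholder.

```python
def vision_messages(prompt: str, image_url: str) -> list:
    """Build a Chat Completions message list mixing text and an image URL."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]

# Hypothetical usage:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=vision_messages("What is in this image?", "https://example.com/photo.jpg"),
# )
# print(resp.choices[0].message.content)
```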

Thank you very much for the API. It’s much faster: response times of 6 to 8 seconds without streaming, versus 20 to 30 s on average for GPT-4 Turbo.
UTF-8 encoding is better. I haven’t tested the JSON format yet.
Vision is faster, assistants too. All is good for me at this time… :innocent:
I’m waiting for audio, and will drop the Android speech in my apps if it’s better.
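Since the post quotes response times without streaming: streaming usually improves perceived latency, because text arrives as it is generated. A sketch of collecting the streamed text, assuming the `openai` SDK’s chunk shape (`chunk.choices[0].delta.content`):

```python
def collect_stream(chunks) -> str:
    """Concatenate the text deltas from a streamed Chat Completions response."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta content is None
            parts.append(delta)
    return "".join(parts)

# Hypothetical usage:
# stream = client.chat.completions.create(model="gpt-4o", messages=msgs, stream=True)
# print(collect_stream(stream))
```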

Hi, for me, with an assistant, a 20 MB file, and the API, it’s better time-wise: GPT-4 takes 50 seconds
and gpt-4o 10 or 12, without recompiling my app, with the same code to manage it.
I use many threads to synchronize timing between the user experience and GPT. I prefer it now.
To be continued…

Does anyone know how the new gpt-4o model works with image inputs on the Assistants API? Introduction to gpt-4o | OpenAI Cookbook only deals with the Chat Completions API. How do images and image uploading work with assistants?

It doesn’t seem like gpt-4o can take in assistant-generated images in the conversation history. When I move from gpt-4-turbo to gpt-4o I get: “Image URLs are only allowed for messages with role ‘user’, but this message with role ‘assistant’ contains an image URL.” (‘type’: ‘invalid_request_error’)

Moving back to gpt-4-turbo eliminates these errors. Has anyone else run into / solved this?
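One workaround (a sketch, not an official fix) is to downgrade the history before sending it to gpt-4o, dropping the `image_url` parts from assistant-role messages so the request passes validation:

```python
def strip_assistant_images(messages: list) -> list:
    """Drop image_url parts from assistant messages so gpt-4o accepts the history."""
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant" and isinstance(msg.get("content"), list):
            parts = [p for p in msg["content"] if p.get("type") != "image_url"]
            cleaned.append({**msg, "content": parts})
        else:
            cleaned.append(msg)
    return cleaned
```

The text parts of assistant turns survive, so the model still sees what it said; only the images it produced are omitted.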

According to the gpt-4o vision documentation: “Images can be passed in the user, system and assistant messages”


Very impressed with the presentation yesterday. However, in my experience, both via ChatGPT and the new API, this 4o model is hallucinating way more and is much less precise than the older GPT-4 model(s) were. All in all, the scripts we use at our company keep working fine with the previous model, but are useless with the 4o model.


It is absolutely astonishing and magical. Unfortunately there is too much traffic to try the interactive voice feature. But thank you for releasing this into the wild. I will continue to pay for the upgraded version, though I’m not sure I really need the extra query capacity.


I have a GPT that I’ve been using for a couple of months to help with coding stuff on MAME. Will it use the new version with the new features, or do I need to make a new one?
