More info on gpt-4o in the API?

As the title suggests, just looking for more info on when that'll drop, and whether it'll be added to Azure as well?

It is available in the playground (it is about 10x the speed of GPT-4-Turbo):
And pricing is available here (it is half the price of GPT-4-Turbo):


Meaning we can use it in the API, or only in the playground?

Also, is this releasing on Azure as well?


You can use it right now via the API.

I am not an Azure user, so I cannot confirm.


That will be interesting, but based on experience it will likely take at least a few weeks. I hope they roll it out fast.

I typically check this page for updates: What's new in Azure OpenAI Service? - Azure AI services | Microsoft Learn


Can confirm it's a lot faster via the API…

ETA: Faster than GPT-3.5-instruct, it seems…


Here’s the forum announcement! Announcing GPT-4o in the API!


Will OpenAI's Be My Eyes feature work with the API?

As Azure users, we can sit and wait for today's new updates. Azure hasn't even released v2 of the Assistants API yet, nor the file_search tool, so…


Did gpt-4o fix the problem of the large number of context tokens when using the Assistants API?

When using a GPT-4 Turbo assistant before, I was billed for a large number of context tokens, often ten times the number of generated tokens. Does this also happen with GPT-4o? The large context billing makes it difficult for me to use the Assistants API.


“Be My Eyes” is an independent early partner of OpenAI, with access to GPT-4 computer vision for many months before it was available via the API.

That said, vision models have been available to API users since November (in “turbo” form).

Switching the model an assistant uses from any of the other options to this one doesn't fundamentally change how Assistants operates. The excessive input context that Assistants can consume, with multiple internal calls and context loading outside your control, is mitigated by the lower cost, provided the new model can still perform the task.

Unusually, though, gpt-4o is available on Azure OpenAI. It's just that, in typical fashion, they haven't specified in which two US regions. And believe me, there is no way to find this out other than deploying to each and every US region until you find gpt-4o.

Introducing GPT-4o


Did it work for you, i.e. could you actually access it? I’ve tried just now but to no avail.

No luck so far :smiling_face_with_tear: I had the services deployed in swedencentral, assuming it gets newer versions first; however, that's not the case this time. Hopefully we'll be able to find the correct regions soon.


We now have some clarity here: GPT-4o early access playground


Hi, function calling is still very slow.

When the model has to send a lot of text in the function arguments, it can seem slow. Do you think that might be the case for you?


Thank you for the response. I don't think that's the case for me.
The input is relatively simple, for example:
"I need to create a presentation about AI, due date is 14.5. 8 pm, set reminder a day before "

It then uses a function call to create a task.

Thanks for researching. From experience, do you know how long Azure takes to incorporate new models?

Yeahhhh, I'm probably gonna catch flak for saying this, but I switched back from Azure to OpenAI because I was just sick of their ecosystem. I also get better limits with OpenAI. But they're heavily limiting how much you can use 4o right now. It's easy enough to switch back to OpenAI.

No idea, mate. It might take a couple of weeks; however, it seems rate limits will still be a bit restrictive.