It is available in the playground (it is about 10x the speed of GPT-4-Turbo):
https://platform.openai.com/playground/chat?models=gpt-4o
And pricing is available here (it is half the price of GPT-4-Turbo):
openai.com/pricing
You can use it right now using the API.
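For anyone wanting to try it, here is a minimal sketch (assuming the official `openai` Python package and an `OPENAI_API_KEY` environment variable); the only change from gpt-4-turbo is the model name:

```python
def build_chat_request(prompt: str) -> dict:
    """Build a Chat Completions request body for gpt-4o.

    The only change needed from gpt-4-turbo is the model name.
    """
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }

# With the official `openai` package installed and OPENAI_API_KEY set:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_chat_request("Hello!"))
#   print(resp.choices[0].message.content)
```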
I am not an Azure user, so I cannot confirm.
That will be interesting, but based on experience it will likely take at least a few weeks. I hope they roll it out fast.
I typically check this page for updates: What's new in Azure OpenAI Service? - Azure AI services | Microsoft Learn
Here’s the forum announcement! Announcing GPT-4o in the API!
Will the OpenAI Be My Eyes feature work with the API?
As an Azure user, we can sit and wait for today’s new updates. Azure hasn’t even released the v2 of the assistant API yet. Nor the file_search tool, so…
Did gpt-4o fix the problem of the large number of context tokens when using the Assistants API?
When using a GPT-4 Turbo assistant before, I was billed for a large number of context tokens, often ten times the number of generated tokens. Does this also occur with GPT-4o? The large context billing makes it difficult for me to use the Assistants API.
“Be My Eyes” is an independent early partner of OpenAI, having had access to GPT-4 computer vision for many months before it was available via the API.
Which means our current point in time is well after vision AI models were released to API users in November (in “turbo” form).
Switching an AI model employed by Assistants from any of the others you can use to this one doesn’t fundamentally change how Assistants operates. The excessive input context that Assistants can use with multiple internal calls and context loading outside your control is mitigated by a lower cost, if the new model can still perform the task.
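As one hedged mitigation sketch (assuming Assistants API v2, where runs accept `truncation_strategy` and `max_prompt_tokens` options), you can bound how much thread context gets loaded and billed per run:

```python
def build_run_params(assistant_id: str) -> dict:
    """Run parameters that bound billed input context per Assistants run.

    Sketch only; assumes Assistants API v2 run options. The values
    (5 messages, 4000 tokens) are illustrative, not recommendations.
    """
    return {
        "assistant_id": assistant_id,
        "model": "gpt-4o",
        # Only load the last few thread messages into context...
        "truncation_strategy": {"type": "last_messages", "last_messages": 5},
        # ...and cap total prompt tokens across the run's internal calls.
        "max_prompt_tokens": 4000,
    }

# With the official `openai` package (thread_id and "asst_..." are placeholders):
#   run = client.beta.threads.runs.create(
#       thread_id=thread.id, **build_run_params("asst_...")
#   )
```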
Unusually though, gpt-4o is available on Azure OpenAI. It's just that, in typical fashion, they haven't specified in which two US regions. And believe me, there is no other way to find this out than deploying to each and every US region until you find gpt-4o.
Did it work for you, i.e. could you actually access it? I’ve tried just now but to no avail.
No luck so far. I had the services deployed in swedencentral, assuming it gets newer versions first; however, that's not the case this time. Hopefully we'll be able to find the correct regions soon.
Update:
We now have some clarity here GPT-4o early access playground
Hi, function calling is still very slow.
When the model has to send a lot of text to the function arguments it might seem slow. Do you think that might be the case for you?
Hi,
thank you for the response. I do not think that is the case for me.
The input is relatively easy for example:
"I need to create a presentation about AI, due date is 14.5. 8 pm, set reminder a day before "
Then it uses a function call to create the task.
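For reference, a minimal sketch of what such a tool definition might look like (the `create_task` name and fields are hypothetical). Since generation latency grows with the number of argument tokens the model has to produce, keeping the arguments to a few short scalar fields helps:

```python
def create_task_tool() -> dict:
    """Hypothetical tool schema for the task-creation example.

    Short scalar arguments keep the number of generated tokens,
    and therefore latency, low.
    """
    return {
        "type": "function",
        "function": {
            "name": "create_task",
            "description": "Create a task with a due date and an optional reminder.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "due": {
                        "type": "string",
                        "description": "ISO 8601 datetime, e.g. 2024-05-14T20:00",
                    },
                    "reminder": {
                        "type": "string",
                        "description": "ISO 8601 datetime for the reminder",
                    },
                },
                "required": ["title", "due"],
            },
        },
    }

# Passed as tools=[create_task_tool()] in a Chat Completions request.
```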
Thanks for researching. From experience, do you know how long it takes Azure to incorporate the new models?
No idea, mate. It might take a couple of weeks; however, it seems the rate limits will still be a bit restrictive.
I have an Azure instance deployed in East US and I don't see gpt-4o available; it's only in the Early Access Playground, and only for Chat Completions, which isn't useful… As I said, we can sit and wait…
The API will be available on Azure starting May 21.