According to the new prompt caching documentation, it supports:
- gpt-4o
- gpt-4o-mini
- o1-preview
- o1-mini
And I was wondering if chatgpt-4o-latest falls under the gpt-4o family for caching purposes.
Yes, and as a matter of fact:
On Wednesday, October 2nd, the default version of GPT-4o will be updated to the latest GPT-4o model, gpt-4o-2024-08-06.
Hi @HyperBlaze
Welcome to the community!
Based on the email that OpenAI sent to members:
On Wednesday, October 2nd
the default version of GPT-4o will be updated to the latest GPT-4o model:
I wish it also reduced the input token limit and overall rate-limit impact.
If I send a 75k-token prompt, then send it again followed by 75k of new text, the request fails against the 128k input token limit.
Not saying that's the real use case, but it's how I tested it. We're prompt-chaining a bunch of requests together, so we have to send a prompt that's roughly 99% identical over and over.
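For what it's worth, here's a minimal sketch of how we structure those chained requests so the unchanging prefix stays cache-eligible, with an early size check so we fail locally instead of at the API. The 4-chars-per-token estimate, the helper names, and the 128k figure are rough assumptions for illustration, not exact tokenizer math or official API behavior:

```python
CONTEXT_LIMIT_TOKENS = 128_000  # assumed input limit, per the docs

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token); not a real tokenizer."""
    return max(1, len(text) // 4)

def build_messages(static_prefix: str, new_text: str) -> list[dict]:
    """Put the unchanging prefix first so prompt caching can reuse it;
    raise early rather than letting the API reject an oversized request."""
    total = estimate_tokens(static_prefix) + estimate_tokens(new_text)
    if total > CONTEXT_LIMIT_TOKENS:
        raise ValueError(
            f"~{total} tokens exceeds the {CONTEXT_LIMIT_TOKENS} token limit"
        )
    return [
        {"role": "system", "content": static_prefix},  # identical every call
        {"role": "user", "content": new_text},         # varies per call
    ]

# A 75k-token prefix plus 75k of new text blows past 128k, matching the
# failure described above:
prefix = "x" * (75_000 * 4)
try:
    build_messages(prefix, "y" * (75_000 * 4))
except ValueError as e:
    print("rejected locally:", e)
```

The point of keeping the static portion first is that caching (as documented) matches on the prompt prefix, so any variation at the start invalidates the cached portion for the whole request.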