Does prompt caching work on chatgpt-4o-latest?

According to the new prompt caching documentation, the following models are supported:

  • gpt-4o
  • gpt-4o-mini
  • o1-preview
  • o1-mini

And I was wondering whether chatgpt-4o-latest falls under gpt-4o.


Yes, and as a matter of fact:

On Wednesday, October 2nd, the default version of GPT-4o will be updated to the latest GPT-4o model, gpt-4o-2024-08-06.


Hi @HyperBlaze :wave:

Welcome :people_hugging: to the community!

Based on the email that OpenAI sent to members:
On Wednesday, October 2nd
the default version of GPT-4o will be updated to the latest GPT-4o model:

gpt-4o-2024-08-06



I wish it reduced the input token limit and the overall rate-limit impact.

If I send a 75k-token prompt, then send it again followed by 75k tokens of new text, the request fails because of the 128k input-token limit.

That's not necessarily the real use case, but it's how I tested it. We are prompt-chaining a bunch of stuff together, so we have to send what is essentially 99% the same prompt over and over.
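For what it's worth, caching discounts cost and latency for the repeated prefix, but it doesn't expand the context window, so a 75k + 75k request still hits the 128k cap. Here is a rough sketch of how much of a prompt can be served from cache, based on the documented behavior (1024-token minimum, then 128-token increments); the exact numbers are worth re-checking against the current docs:

```python
def estimate_cached_tokens(shared_prefix_tokens: int) -> int:
    """Estimate how many tokens of a prompt can be served from cache.

    Per OpenAI's prompt-caching docs, caching starts at a 1024-token
    minimum prefix and then grows in 128-token increments. Treat these
    constants as illustrative, not guaranteed.
    """
    MIN_CACHEABLE = 1024
    INCREMENT = 128
    if shared_prefix_tokens < MIN_CACHEABLE:
        return 0
    extra = (shared_prefix_tokens - MIN_CACHEABLE) // INCREMENT
    return MIN_CACHEABLE + extra * INCREMENT

# With a ~75k-token shared prefix, nearly all of it is cache-eligible:
print(estimate_cached_tokens(75_000))
```

So for a 99%-identical prompt chain, keeping the shared portion at the very start of the prompt is what maximizes the discount.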

Thanks for responding! However, I think my original post might have been misleading.
What I was asking about is this model:


The last one on the list (chatgpt-4o-latest).
Can I assume it also works with prompt caching?
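One way to settle this empirically: send the same long (1024+ token) prompt to chatgpt-4o-latest twice and look at the usage block of the second response. The Chat Completions usage object reports cache hits under prompt_tokens_details.cached_tokens, so a non-zero value there means caching applied. A minimal helper for reading that field from the usage payload (the sample dict below is illustrative):

```python
def cached_token_count(usage: dict) -> int:
    """Read the cached-token count from a chat completion 'usage' payload.

    Caching-enabled models report cache hits under
    usage.prompt_tokens_details.cached_tokens; 0 means no cache hit.
    """
    return usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)

# Example usage payload, shaped like a second (cache-hitting) response:
usage = {
    "prompt_tokens": 2048,
    "completion_tokens": 120,
    "prompt_tokens_details": {"cached_tokens": 1920},
}
print(cached_token_count(usage))
```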