Can a developer prompt also be a cached token?

I think I could test it myself, but I’ve heard that GPT might say it’s not possible.

I’m just asking to be sure.

Can you elaborate? I don’t understand what you mean by that.

It will be part of the input. It should still appear front-loaded in the input, followed by the chat history.

The reasoning output, which is a new generation, should only be appended after that, although the model might reanalyze the message.

You can only “cache” something that was sent before, recently, and that is at least 1024 tokens long.
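
For illustration, here’s a rough sketch of front-loading the developer message so the static prefix can be reused, with the chat history appended after it. The model name and instruction text are placeholders, not something from the docs:

```python
from openai import OpenAI

client = OpenAI()

# Static developer message, front-loaded so it forms the reusable prefix.
# (Placeholder text; for caching, the static prefix as a whole needs to
# reach at least 1024 tokens.)
developer_message = {
    "role": "developer",
    "content": "You are a support assistant for ExampleCo. <long, unchanging instructions>",
}

# The growing chat history goes *after* the static prefix,
# so the earlier tokens stay identical between requests.
chat_history = [
    {"role": "user", "content": "How do I reset my password?"},
]

response = client.chat.completions.create(
    model="o3-mini",  # placeholder model name
    messages=[developer_message, *chat_history],
)
print(response.choices[0].message.content)
```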

Yes, I believe this will hit the cache, assuming it meets all the other criteria (e.g. at least the first 1024 tokens of the prompt are static). There is a chain of command in OpenAI models when it comes to prompts, and developer messages are at the root.
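
If you want to check rather than take our word for it, the usage block in the response reports how much of the prompt was read from the cache. A quick sketch, assuming the prompt_tokens_details field is returned for the model you’re using (model name and messages are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# A follow-up request that reuses the same static developer prefix.
response = client.chat.completions.create(
    model="o3-mini",  # placeholder model name
    messages=[
        {"role": "developer", "content": "<same long static instructions as before>"},
        {"role": "user", "content": "And how do I change my email address?"},
    ],
)

# prompt_tokens_details.cached_tokens reports how much of the prompt
# was served from the cache; 0 means the prefix was not reused.
details = response.usage.prompt_tokens_details
print(f"{details.cached_tokens} of {response.usage.prompt_tokens} prompt tokens came from the cache")
```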

Oh and welcome to the community @dev87 :wave:!

Ohh… look at that, the “system” role was renamed to “developer”…

Confusing, ’cause now there are two developers: me and the prompt.

Yeah, exactly…

That confused me.
