Understanding prompt caching

I am trying to understand how prompt caching is being applied in my case. To test it, I prompted the GPT-4o model with 4 different prompts and inspected the resulting chat completion objects. The 4 completion objects report the following values:
Prompt tokens: 13418, Cached tokens: 0
Prompt tokens: 12183, Cached tokens: 8192
Prompt tokens: 14614, Cached tokens: 8192
Prompt tokens: 15326, Cached tokens: 8192

Here “Prompt tokens” is completion.usage.prompt_tokens and “Cached tokens” is completion.usage.prompt_tokens_details.cached_tokens.
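
For reference, this is roughly how I am pulling those values out of each completion object (a minimal sketch with the OpenAI Python SDK; the prompt text is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

long_prompt = "..."  # placeholder for one of my 4 long prompts

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": long_prompt}],
)

# Read the token accounting from the usage block of the completion object
usage = completion.usage
prompt_tokens = usage.prompt_tokens
cached_tokens = usage.prompt_tokens_details.cached_tokens

print(f"Prompt tokens: {prompt_tokens}, Cached tokens: {cached_tokens}")
```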

From the above values, I expect the total uncached tokens to be the sum of the differences between Prompt tokens and Cached tokens across the 4 prompts (as in the arithmetic sketch below). Doing that gives 30965 uncached tokens.
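
In other words, I am computing it like this (just the arithmetic on the numbers reported above):

```python
prompt_tokens = [13418, 12183, 14614, 15326]
cached_tokens = [0, 8192, 8192, 8192]

# Uncached tokens per request = prompt tokens minus cached tokens, then summed
uncached = sum(p - c for p, c in zip(prompt_tokens, cached_tokens))
print(uncached)  # 30965
```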

However, when I check my OpenAI dashboard, I see the following values:
Uncached: 47349, Cached: 8192. I fail to understand why the values differ when I retrieve them programmatically from the chat completion objects versus from the OpenAI dashboard. Any help would be greatly appreciated!