Prompt cache not always activated for requests slightly over 1024 tokens

Hi, I’ve been testing the new prompt caching feature and it doesn’t behave as documented.
When I send a request whose prompt is only slightly above the 1024-token minimum given in the OpenAI documentation, the tokens are not cached.
Once I increase the prompt length further, caching activates correctly.
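For reference, this is roughly how I’m counting tokens (a minimal sketch using tiktoken; I’m assuming gpt-4o-mini uses the o200k_base encoding, like the rest of the gpt-4o family):

import tiktoken

# Assumption: gpt-4o-mini uses the o200k_base encoding (gpt-4o family)
enc = tiktoken.get_encoding('o200k_base')
text = 'Testing cache ' * 550
print(len(enc.encode(text)))  # prints a count just above the 1024-token minimum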

Documentation reference:
https://platform.openai.com/docs/guides/prompt-caching

Code:

import openai

client = openai.OpenAI(api_key='your_api_key')

# Prompt just above the documented 1024-token caching minimum
text = 'Testing cache ' * 550
r = client.chat.completions.create(
    model='gpt-4o-mini',
    messages=[{"role": "user", "content": text}]
)
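
And this is how I’m checking for cache hits (a sketch; if I read the docs right, the first request only writes the cache, and hits are reported on subsequent requests with the same prefix via usage.prompt_tokens_details.cached_tokens):

# Send the identical request again; the first call only writes the cache,
# so cached_tokens should be nonzero from the second call onward.
r2 = client.chat.completions.create(
    model='gpt-4o-mini',
    messages=[{"role": "user", "content": text}]
)

# In my tests this stays 0 for prompts just over 1024 tokens,
# but reports a cached prefix once the prompt is longer.
print(r2.usage.prompt_tokens_details.cached_tokens)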