Topic | Replies | Views | Activity
Caching rate drop after switching to Responses API | 3 | 151 | January 11, 2026
Caching is borked for GPT 5 models | 19 | 2360 | January 8, 2026
Realtime caching between sessions? | 0 | 54 | November 18, 2025
Can I cache large chunks on gpt-5-nano?, Does each cache-read request reset cache inactive time?, Does large caches affect cache overflow limits? | 1 | 113 | October 25, 2025
Input cache not registering with fine-tuned gpt-4o | 0 | 69 | October 15, 2025
How to use cached_tokens field to calculate cost estimation | 1 | 727 | July 31, 2025
Cached Input Tokens in Chat Completions | 1 | 2654 | April 30, 2025
Prompt Caching in Batching API | 2 | 954 | April 6, 2025
Stopped to use Cached Tokens | 1 | 156 | January 2, 2025
Caching Strategy for Different Projects | 4 | 269 | December 28, 2024