Yep, I can confirm that we append a knowledge cutoff sentence to the system message for gpt-4o mini.
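To make that concrete, here's a rough sketch of the effect in Python. The function name, the cutoff wording, and the date are placeholders for illustration only, not our actual implementation or the exact sentence we append:

    # Illustrative only: roughly what happens server-side for gpt-4o mini.
    # The cutoff wording and date below are placeholders, not the real text.
    def effective_system_message(developer_system_message: str) -> str:
        cutoff_sentence = "Knowledge cutoff: 2023-10."  # hypothetical wording
        return developer_system_message + " " + cutoff_sentence

    print(effective_system_message("You are a helpful assistant."))
    # -> "You are a helpful assistant. Knowledge cutoff: 2023-10."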
Without this sentence, the model doesn’t know the limits of its own knowledge and is more likely to get things wrong about events from the past year.
We normally want to give developers 100% control over what the model sees, but at the same time we want things to be convenient and ‘just work’. So we had two options: (a) insert the sentence automatically, or (b) document that additional prompting is required for better recent-event performance and hope that every developer reads the documentation and does the prompting. Because gpt-4o mini tokens are cheap and we wanted things to just work, we went with option (a) in this case. I acknowledge it’s annoying to have the prompt modified, but we hope it helps more often than it hurts. Sorry that this is one of the cases where it had a negative effect.
Definitely a miss on our part not to document this, though - I’ll tell the team it would be great to have a page that lists any prompt manipulations we do (they’re rare) so that no one is caught by surprise.
Our general philosophy for the API (unlike ChatGPT etc.) is to give you more power and control, even if that means the power to make mistakes. We’d rather elevate the ceiling on what developers can build than try to raise the floor with ham-fisted attempts at helpful prompt manipulation. Still, it’s a balance, and in this case we didn’t think this small tweak was costly enough to outweigh patching a shortcoming of gpt-4o mini.