The reason is to protect you from yourself.
The prompt is not a place for commands or behavior instructions.
It comes from the Whisper series of models, where it is meant to be lead-up text: the transcript that immediately precedes the audio. It is not reproduced in the output; it just supplies context that improves the text the model produces as it continues from that point.
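To make that concrete, here is a minimal sketch of the intended use, assuming the current OpenAI Python SDK; the file name and prompt text are placeholders I made up for illustration:

```python
from openai import OpenAI

client = OpenAI()

with open("meeting_part2.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        # Not an instruction: the prompt is treated as the transcript that
        # came immediately before this audio, giving the model context
        # (names, spelling, style) for what it writes next.
        prompt="...and that concludes the budget review. Next, Priya will walk us through the hiring plan.",
    )

print(transcript.text)
```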
I expect that when the prompt is passed to gpt-4o, it gets wrapped in a similar container with some statement of its purpose, and so it doesn't work reliably; it's like telling the model "continue this": roughly a 50/50 chance it doesn't actually continue.