Your definition of expected behaviour and OpenAI’s definition of expected behaviour seem to be different. Perhaps the recent NYT lawsuit about this exact behaviour has something to do with it?
What makes you think the model is doing this and not some ancillary system?
Maybe, just maybe, they're taking an aggressively cautious approach and refusing any request for verbatim output? (A rough sketch of what such a filter could look like is below.)
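To be concrete about the "ancillary system" idea: a post-generation check that compares the candidate output against known texts and swaps in a refusal when the overlap is too high would produce exactly this behaviour. This is a purely hypothetical sketch with invented names and thresholds, not anything OpenAI has documented:

```python
# Hypothetical illustration only: one way an ancillary filter (separate from
# the model itself) could block verbatim reproduction. All names, thresholds,
# and the refusal message are made up for the sake of the example.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Split text into word-level n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_verbatim(candidate: str, protected_corpus: list[str],
                   threshold: float = 0.3) -> bool:
    """Flag output whose n-grams overlap heavily with any protected text."""
    cand = ngrams(candidate)
    if not cand:
        return False
    for doc in protected_corpus:
        overlap = len(cand & ngrams(doc)) / len(cand)
        if overlap >= threshold:
            return True
    return False

def filtered_reply(model_output: str, protected_corpus: list[str]) -> str:
    """Return the model's output, or a refusal if it looks like verbatim copying."""
    if looks_verbatim(model_output, protected_corpus):
        return "Sorry, I can't reproduce that text verbatim."
    return model_output
```

If something like that sits between the model and the user, the refusals tell you nothing about what the model itself would generate.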
Again, there is no error. There is no bug. This is expected behaviour. Stop trying to get the model to recite copyrighted material from its training data and you'll be fine.