Inconsistency Between GPT's Safety System and DALL-E's

I am trying to generate illustrations for an article automatically: I ask GPT-4 to suggest image prompts and then pass them to DALL-E 3 to draw. But sometimes DALL-E 3 rejects these requests for safety reasons, even though GPT-4 generated the prompts itself. Is there any consistency between GPT's safety system and DALL-E's? And is there a way to avoid these rejections?