Unusual lag on output_text.done when using stream mode

Greetings,

Today, with no apparent reason, we're experiencing more than 5 s of lag before receiving `response.output_text.done`. The `output_text.delta` events arrive normally with the usual response time, but the completion event is lagging badly. This is my call:

        
stream = CLIENT.responses.create(
    model="gpt-5-mini",
    input=inputs,
    tools=self.tools,
    service_tier="priority",
    stream=True,
    text={
        "verbosity": self.verbosity
    },
    reasoning={
        "effort": self.reasoning
    },
)

Yesterday the event appeared immediately after the last delta.

I have also seen that the problem appears only with gpt-5 models; everything works with the expected latency on 4.1.
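For anyone who wants to reproduce the measurement: I timestamp each event as it arrives and compute the gap between the last `response.output_text.delta` and `response.output_text.done`. A minimal sketch (the helper function and the sample timings are mine, for illustration only; in the real loop the timestamps come from `time.monotonic()` while iterating the stream):

```python
import time

def measure_done_lag(events):
    """Given (event_type, timestamp) pairs collected from the stream,
    return the gap in seconds between the last output_text.delta
    and output_text.done, or None if either event is missing."""
    last_delta = None
    for etype, ts in events:
        if etype == "response.output_text.delta":
            last_delta = ts
        elif etype == "response.output_text.done" and last_delta is not None:
            return ts - last_delta
    return None

# Real usage (not run here): collect timestamps while consuming the stream.
#
# events = []
# for event in stream:
#     events.append((event.type, time.monotonic()))
# print(measure_done_lag(events))

# Made-up timings illustrating the ~5 s gap I'm seeing:
sample = [
    ("response.output_text.delta", 0.00),
    ("response.output_text.delta", 0.10),
    ("response.output_text.done", 5.30),
]
print(measure_done_lag(sample))
```

With timings like yesterday's, the gap should be near zero; today it's consistently above 5 s on gpt-5 models.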