GPT-5 sometimes returns intermediate reasoning steps in final `output_text` blocks

I am using the OpenAI Responses API to generate text responses with GPT-5, minimal reasoning effort, and low verbosity. When generating a response, the `output_text` will rarely include what seems like an intermediate reasoning step, something like:

> Ok, now I will check the account. Done. Checked the account. The account has a certain account type. How do I handle this account type? Ok, I must follow the instructions for that account type. Ok, done.
> Based on your account type, please […final response]

It does return the final output text that should be sent, but it's preceded by some reasoning.

How I'm parsing:

First, `messages` is defined by taking every element of `response['output']` where the element has `type == 'message'` and `role == 'assistant'`.

Then, for each element of `messages`, I take each content part's `text` value where `type == 'output_text'`, and join them all together. I'm guessing it has something to do with the joining, but I have no idea how I would then differentiate between the 'final' output text and the intermediate output text.
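For reference, here is a minimal sketch of that parsing logic, operating on a plain dict shaped like a Responses API payload (no SDK dependency; the payload below is illustrative, not a captured response):

```python
def extract_output_text(response: dict) -> str:
    """Join the text of every output_text part from assistant message items."""
    parts = []
    for item in response.get("output", []):
        # Keep only assistant message items, as described above.
        if item.get("type") == "message" and item.get("role") == "assistant":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content.get("text", ""))
    return "".join(parts)


# Illustrative payload: one reasoning item (no text parts) and one message item.
response = {
    "output": [
        {"type": "reasoning", "summary": []},
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {"type": "output_text", "text": "Based on your account type, ..."}
            ],
        },
    ]
}

print(extract_output_text(response))  # -> "Based on your account type, ..."
```

Note that with this approach, if the model ever emits more than one assistant message item (or more than one `output_text` part), everything gets concatenated with no separator, so there is no way to tell a "final" part from anything that came before it.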

I am on ZDR (Zero Data Retention), and I'm finding this issue impossible to reproduce. Is this a prompt issue, or is there an issue with my parsing? It works correctly almost always, but rarely it returns what seems like a non-final response, as above. I haven't been able to capture a request that causes this, which is driving me nuts. I'll add more logging on my end to see if I can capture one, but in the meantime, I'm wondering if anyone else has run into this issue.