Once AI inference starts, does it have to run to completion, or can it be stopped early?

When I call the OpenAI API with streaming enabled and then interrupt the request mid-stream, is the model's inference also interrupted on the server, or does it keep running and simply stop sending me tokens?
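For context, here is a minimal sketch of the client-side interruption I mean. It uses a plain Python generator as a stand-in for the server's token stream (the `token_stream` function is hypothetical, not part of the OpenAI SDK); with the real SDK the pattern is the same: iterate over the stream, then `break` and close it. What I can't tell from the client side is whether closing the connection actually cancels generation on OpenAI's servers or just stops delivery.

```python
events = []

def token_stream():
    """Hypothetical stand-in for a server-side token generator.

    In the real API this would be the SSE response from a streaming
    chat-completions call; here it is just a Python generator so the
    interruption mechanics are visible.
    """
    try:
        for i in range(100):
            yield f"token-{i}"
    except GeneratorExit:
        # Closing the stream interrupts production here. Whether the
        # real server analogously cancels inference when the HTTP
        # connection drops is exactly what I'm asking about.
        events.append("stream closed early")
        raise

received = []
stream = token_stream()
for tok in stream:
    received.append(tok)
    if len(received) == 5:
        stream.close()  # client-side interruption after 5 tokens
        break

# Only the tokens read before the interruption are received.
```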