When I call the OpenAI API with streaming enabled and then interrupt the request partway through, is the model's inference also interrupted, or does generation keep running on the server and simply stop sending me tokens?
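To make the question concrete, here is roughly what I mean by "interrupting" the request. This is just a minimal sketch using the official `openai` Python SDK (v1.x); the model name and prompt are placeholders, and the `stream.close()` call reflects my understanding of how to explicitly drop the HTTP connection mid-stream.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start a streamed chat completion (placeholder model and prompt).
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a long essay about streaming."}],
    stream=True,
)

collected = []
for i, chunk in enumerate(stream):
    delta = chunk.choices[0].delta.content or ""
    collected.append(delta)
    if i > 10:
        # "Interrupt" the request after a handful of chunks:
        # close the stream (drops the underlying HTTP connection) and stop reading.
        stream.close()
        break

print("".join(collected))
```

So the question is: once `close()` runs (or the client connection is otherwise dropped), does the server stop generating the remaining tokens, or does it finish the full completion and just discard the output?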