Fine-tuned gpt-3.5-turbo latency

Jailbreak the model so that it produces obscene text, then observe the new response finish reason that is returned for a content violation.
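
A minimal sketch of that check, assuming the openai>=1.0 Python client; the fine-tune ID and the jailbreak prompt are placeholders, and the point is only to look at the finish reason on the response:

```python
# Sketch: send one request to a fine-tuned model and inspect finish_reason.
# Assumes OPENAI_API_KEY is set; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # placeholder fine-tune ID
    messages=[{"role": "user", "content": "<jailbreak prompt here>"}],
)

choice = resp.choices[0]
print("finish_reason:", choice.finish_reason)  # e.g. "stop", "length", or "content_filter"
print("content:", choice.message.content)
```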

Infer that the response tokens are being held back while the generation is scanned for bad content, an undocumented change (or one only partially documented by the “how we used GPT-4 to make a moderator” blog post).
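
A rough latency probe for that inference, again assuming the openai>=1.0 client and a placeholder fine-tune ID. It streams a completion and timestamps each chunk; an unusually long wait before the first token, or a stall before the final chunk that carries the finish reason, relative to the steady inter-chunk gaps, is the buffering being inferred here, not anything the API documents:

```python
# Sketch: time a streamed completion to look for token hold-back.
import time
from openai import OpenAI

client = OpenAI()

start = time.monotonic()
stream = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # placeholder fine-tune ID
    messages=[{"role": "user", "content": "<prompt here>"}],
    stream=True,
)

arrivals = []  # (seconds since request, text delta, finish_reason)
for chunk in stream:
    if not chunk.choices:  # skip any chunk without choices (e.g. usage-only)
        continue
    choice = chunk.choices[0]
    arrivals.append((time.monotonic() - start,
                     choice.delta.content or "",
                     choice.finish_reason))

print(f"time to first chunk: {arrivals[0][0]:.2f}s")
gaps = [b[0] - a[0] for a, b in zip(arrivals, arrivals[1:])]
if gaps:
    print(f"median inter-chunk gap: {sorted(gaps)[len(gaps) // 2]:.3f}s")
    print(f"largest inter-chunk gap: {max(gaps):.3f}s")
print("final finish_reason:", arrivals[-1][2])
```

Comparing these numbers against the same prompt on the base gpt-3.5-turbo model would be the natural control.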