Feature request: Add input streaming to gpt-4o-mini-tts

Hi OpenAI team,

Great launches today with the various new audio models. One big feature request: can you support input streaming for gpt-4o-mini-tts? In other words, the ability to stream tokens from GPT directly into gpt-4o-mini-tts as they are generated, for a lower-latency audio stream?

I am aware that the Realtime API is the intended “solution”, but for developers who want a more modular approach (and the cost savings from today’s releases), the lack of input streaming puts us back at the old, very high-latency voice-chat pipeline. That is unfortunate, because these new models offer genuinely different capabilities.
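In the meantime, the usual workaround is to chunk the LLM's token stream into complete sentences and fire off one TTS request per sentence, so audio playback can begin before the full response is generated. Below is a minimal sketch of that chunking step; the `chunk_sentences` helper and the sentence-boundary regex are my own illustration, not an official API.

```python
import re

# Split points: whitespace that follows sentence-ending punctuation.
SENTENCE_END = re.compile(r"(?<=[.!?])\s+")

def chunk_sentences(tokens):
    """Accumulate streamed LLM tokens and yield each complete sentence
    as soon as its terminator arrives, so TTS can start immediately
    instead of waiting for the whole response."""
    buf = ""
    for tok in tokens:
        buf += tok
        # Everything before the last split point is a finished sentence.
        *done, buf = SENTENCE_END.split(buf)
        yield from done
    if buf.strip():
        yield buf.strip()  # flush whatever remains at end of stream

# Hypothetical wiring with the OpenAI SDK (sketch, not tested):
#
#   stream = client.chat.completions.create(model=..., messages=..., stream=True)
#   tokens = (c.choices[0].delta.content or "" for c in stream)
#   for sentence in chunk_sentences(tokens):
#       client.audio.speech.create(model="gpt-4o-mini-tts",
#                                  voice="alloy", input=sentence)
```

This still pays one HTTP round trip per sentence and loses prosody across sentence boundaries, which is exactly why true input streaming in the TTS endpoint would be better.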

If possible, please add input streaming. Thanks!

Try LiveKit for building real-time capabilities using the gpt-4o-mini STT, LLM, and TTS models.