Async support with new TTS API?

Loving the new TTS features. However, instead of sending one chunk at a time to process, I was wondering whether it's possible to send all chunks asynchronously so they're processed in parallel.

It's a bit too slow to process 4,000 characters at a time, especially when there are 200,000+ in the queue :sweat_smile:
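For context, the chunking step mentioned above might look something like this. This is just an illustrative sketch; the ~4k-character input cap and the whitespace-aware splitting strategy are assumptions, not requirements of the API:

```python
def chunk_text(text: str, limit: int = 4096) -> list[str]:
    """Split text into chunks of at most `limit` characters,
    preferring to break on whitespace so words stay intact."""
    chunks = []
    while text:
        if len(text) <= limit:
            chunks.append(text)
            break
        # Break at the last space before the limit, if any.
        cut = text.rfind(" ", 0, limit)
        if cut <= 0:
            cut = limit  # no space found: hard-split at the limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    return chunks
```

Each chunk can then be submitted as a separate TTS request.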

It is certainly possible. The only limitation is your code.

Oh, and the per-minute rate limit, which is relatively easy to conform to: send exactly that many parallel requests each minute, then wait.

Yeah! Got it working with httpx/asyncio. Curious whether there's any way to increase the RPM for tts-1-hd from 7 to 100-200.

Also curious whether those RPM limits are per API key or shared across all keys combined? I'm building an app that could be handling thousands of requests per minute.

Rates are per organization, and per model class.

I think they’ve purposely limited the HD voice model so that it isn’t a production model. It only just increased from 3 to 7 even at tier 3, and I don’t know that it would be different at any other payment tier.


Hi, were you able to get this working? I have tried with asyncio myself, with no success… If you have some working code you could share, it would be much appreciated :slightly_smiling_face: