TTS model has a "hidden" 4096-character limit

In the API documentation of the text-to-speech (TTS) model there is no mention that the input limit is 4096 characters; that information I instead receive from the API as a 400 error.
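Since the limit only surfaces as a 400 error at request time, one workaround is to check the input length client-side before calling the endpoint. A minimal sketch, assuming the 4096-character figure reported by the API's error message (the constant and function name here are my own, not part of any SDK):

```python
# Guard against the TTS endpoint's 4096-character input limit before
# making the request, instead of waiting for a 400 error. The constant
# is the value reported by the API's error response, not a documented one.

MAX_TTS_CHARS = 4096

def validate_tts_input(text: str) -> str:
    """Return the text unchanged if it fits, otherwise raise early."""
    if len(text) > MAX_TTS_CHARS:
        raise ValueError(
            f"TTS input is {len(text)} characters; "
            f"the API rejects anything over {MAX_TTS_CHARS}."
        )
    return text
```

Failing fast like this turns a vague API-side 400 into a clear local error you can handle (or log) in your own code.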

GPT is the heart of OpenAI, but even these “secondary” APIs should be implemented more carefully.

It’s not a hidden limit.

They also provide a helpful unhidden error message with the information you need to discover the reason for API request denial…


Thanks, so I don’t get why it is not mentioned here:

My point is that if I want to learn about an API, the right place to look for me is in the guides, not the help section on another subdomain.
My 2 cents.

True, but my feedback here is about having a comprehensive guide covering all the key points of this API.
Knowing that limit at first glance, you can design your application to comply from the beginning.
With the current approach, you just have one more problem to debug, and you will probably never be aware of it until your customers report that the app fails (will they?).

I can see the usefulness in documentation - there are still many gaps that make many people turn to the forum for answers.

I can also see why OpenAI might not want to scatter specifications that may change throughout the documentation. The API Reference page is automatically generated, so a single point of code being changed means those changes propagate out to the documentation.

First, you are absolutely correct: all of the information you need should be in one well-organized place.

I can only speculate here, but in my experience writing documentation is among the least glamorous and most frustrating things developers do.

Sometimes things are left out of the documentation because they’re not known at the time it is written. Other times the documentation is annoyingly out of date because of recent changes on the product side. Still other times, a change in one part of a product affects other parts, rendering the documentation for those other elements incomplete or wrong.

One thing you can always do is send a message to OpenAI as it might just be something they’re unaware of.


I ran into this as well. No mention in the documentation. Took me a little while to track down the issue in the response error. At first I thought it was because of the number of concurrent requests. Something else that is enforced but not documented.

They “hid” it in the main API reference.

Ya, I wish that limit were made clear. It seems like a pretty harsh limit, especially considering we are willing to pay for API usage. At least extend it to accommodate a decent-size blog post, please!

You mean it should be somewhere other than in the OFFICIAL API reference?

What is a “decent size” blog post?

Why not simply send the text in batches? Seems like a simpler solution than magically extending the capabilities of the model.
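The batching idea above can be sketched with a small helper that splits long text into chunks under the limit, preferring to break at sentence boundaries so each audio segment sounds natural. This is a sketch under the assumption that the limit is the 4096 characters reported by the error; the splitting heuristic is my own, not anything from the API:

```python
# Client-side batching for the TTS endpoint's 4096-character input limit.
# Each chunk can then be sent as a separate speech request and the audio
# segments concatenated afterwards.

MAX_TTS_CHARS = 4096  # limit reported by the API's 400 error

def chunk_text(text: str, limit: int = MAX_TTS_CHARS) -> list[str]:
    """Split text into chunks of at most `limit` characters,
    preferring to break after a sentence (". "), then at a space,
    and only as a last resort mid-word."""
    chunks = []
    while len(text) > limit:
        # Look for the last sentence end within the limit.
        cut = text.rfind(". ", 0, limit)
        if cut == -1:
            # Fall back to the last space, then to a hard cut.
            cut = text.rfind(" ", 0, limit)
            if cut == -1:
                cut = limit
        else:
            cut += 1  # keep the period with its chunk
        chunks.append(text[:cut].strip())
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks
```

A blog post of, say, 12,000 characters would come back as three or four chunks, each safely under the limit; you would then request speech for each chunk in turn and join the resulting audio files.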
