Strategies for handling long texts within ChatGPT API token limits

Hi everyone,

I’m the founder of Lumatra, a platform designed to help aspiring authors connect with literary agents. We use AI to analyse manuscripts and generate a synopsis, pitch, and email. In the near future, we also aim to offer editorial suggestions.

However, we’re facing a challenge with token limits in the ChatGPT API. For longer books, the manuscript analysis alone uses up almost all of the 8,000-token context window, leaving little room to generate a coherent synopsis, pitch, and email.

I noticed there were some announcements about new API updates last week. Could these changes help with our issue? Does anyone have advice on optimising token usage, splitting tasks, compressing input, or any other strategies for handling long texts effectively within the current API limits? I also need to keep the cost of the calls as low as possible.
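For reference, here is a rough sketch of the chunk-and-summarise approach I’ve been considering: compress each chunk with a cheaper model, then generate the synopsis, pitch, and email from the combined summaries. The model names, chunk size, and prompts are just placeholders, and it assumes the official `openai` Python client (v1 interface):

```python
# Sketch: split the manuscript into chunks, summarise each chunk,
# then build the synopsis/pitch/email from the combined chunk summaries.
# Model names, chunk size, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk_text(text, chunk_chars=12000):
    """Split the manuscript into roughly chunk_chars-sized pieces."""
    return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]


def summarise_chunk(chunk):
    """Compress one chunk into a short plot/character summary."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder: cheaper model for the compression pass
        messages=[
            {"role": "system", "content": "Summarise the plot, characters and tone of this manuscript excerpt in under 200 words."},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content


def analyse_manuscript(manuscript):
    """Final pass: generate the synopsis, pitch and query email from the compressed summaries."""
    summaries = [summarise_chunk(c) for c in chunk_text(manuscript)]
    combined = "\n\n".join(summaries)
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: stronger model for the final generation pass
        messages=[
            {"role": "system", "content": "You help authors pitch their manuscripts to literary agents."},
            {"role": "user", "content": f"Chunk summaries of the manuscript:\n{combined}\n\nWrite a synopsis, a pitch, and a query email."},
        ],
    )
    return response.choices[0].message.content
```

Is this roughly the right direction, or is there a better pattern?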

Any guidance would be greatly appreciated!


The newer models have a 128k context length…

https://platform.openai.com/docs/models/model-endpoint-compatibility
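With one of the long-context models you can often skip chunking entirely and pass the whole manuscript in a single request. A minimal sketch, assuming the `openai` Python client; the model name is a placeholder for whichever 128k-context model is listed on that page:

```python
# Minimal sketch: one request with the full manuscript, relying on a 128k-context model.
# The model name below is a placeholder; check the model list linked above.
from openai import OpenAI

client = OpenAI()


def pitch_package(manuscript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # placeholder for a 128k-context model
        messages=[
            {"role": "system", "content": "You help authors pitch manuscripts to literary agents."},
            {"role": "user", "content": f"{manuscript}\n\nWrite a synopsis, a pitch, and a query email for this manuscript."},
        ],
        max_tokens=1500,  # reserve room for the generated synopsis, pitch, and email
    )
    return response.choices[0].message.content
```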


Ahh, thank you for that. Much appreciated.

Which comes with a different price tag, obviously…

I’m looking at the various models I could switch to for my needs. How can I work out which one is best for me at the best price? Is there anything to help with that?

The best way to get answers in a developer forum is to try things out and, when you get stuck, describe what you have tried and ask for help with what you can’t figure out.

Why don’t you try them all?
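If it helps, a rough way to compare cost before committing is to count the manuscript’s tokens with `tiktoken` and multiply by each model’s per-token price from the pricing page. A sketch; the prices below are placeholders, so check the official pricing page for current numbers:

```python
# Rough cost comparison: count input tokens with tiktoken and multiply by
# per-million-token input prices. Prices below are placeholders only.
import tiktoken

PRICES_PER_1M_INPUT = {  # USD per 1M input tokens (placeholder values)
    "gpt-3.5-turbo": 0.50,
    "gpt-4": 30.00,
    "gpt-4-turbo": 10.00,
}


def estimate_input_cost(text: str) -> dict:
    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by these chat models
    n_tokens = len(enc.encode(text))
    return {model: n_tokens / 1_000_000 * price
            for model, price in PRICES_PER_1M_INPUT.items()}


# Example: estimate the input cost of analysing one manuscript with each model
costs = estimate_input_cost(open("manuscript.txt").read())
for model, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{model}: ~${cost:.2f} per analysis (input tokens only)")
```

Then run a real manuscript through the two or three cheapest candidates and judge the output quality yourself.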
