How to define a user message size limit for a particular prompt?

My case is content summarization: I provide text and ask ChatGPT to summarize it. How can I define a user message size limit for this particular prompt, ideally considering the language of the user message (as I assume language can also affect it)? I mean, I want to identify the limit before sending the request, so I can reduce the content (user message) in advance if needed.

That's a good question, and the solution is multifaceted depending on the exact scenario.

First, I assume you are talking about API programming and not ChatGPT, OpenAI's web chatbot.

Is it an end user? Then it is likely the user interface they interact with that should give them feedback about how large their input is and when it has exceeded your limits or the capabilities of the model.

Is it a batch or automated process? Then it is your own parsing and calculation over the input documents that would need to manage what goes to the models and your budget.

Points:

  • Everything in AI is measured in tokens, which implies a token encoder should be in active use to calculate true counts. While English natural language averages around 4 characters per token, that can degrade to under 0.5 characters per token on certain Unicode inputs such as traditional Chinese. A character-count estimate is therefore not a good user input limit; crafted input can exceed your budget and capability.

  • Summarization tasks that exceed a model's input context length can be processed in multiple steps: request summaries of many parts, then have all those partial summaries themselves summarized.
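To make the first point concrete, here is a sketch of counting tokens before sending a request. It assumes the `tiktoken` library (OpenAI's tokenizer package) is installed; as a labeled fallback it drops to the rough 4-characters-per-token estimate mentioned above, which is an approximation only:

```python
def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Return the (approximate) token count of `text`."""
    try:
        import tiktoken  # OpenAI's tokenizer library, if installed
        encoding = tiktoken.get_encoding(encoding_name)
        return len(encoding.encode(text))
    except ImportError:
        # Rough fallback: ~4 characters per token holds for English prose,
        # but badly underestimates tokens for many Unicode scripts.
        return max(1, len(text) // 4)
```

Run this over the user message before the API call; if the count exceeds your budget, trim or split the input rather than sending it as-is.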

That should give you an idea of not just how to “identify” the limit, but also how to “limit” the input.
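The multi-step approach from the second point can be sketched as follows. `summarize` here is a hypothetical stand-in for your actual API call, and the chunker uses the rough 4-characters-per-token assumption for brevity (swap in a real token encoder in practice):

```python
def split_into_chunks(text: str, max_tokens: int) -> list[str]:
    """Split text on paragraph boundaries so each chunk stays under budget.

    A single paragraph larger than the budget is kept whole here;
    a production version would split it further.
    """
    def est(s: str) -> int:
        return max(1, len(s) // 4)  # rough token estimate (assumption)

    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para).strip()
        if est(candidate) > max_tokens and current:
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def summarize_large(text: str, summarize, max_tokens: int = 3000) -> str:
    """Summarize each chunk, then summarize the joined partial summaries."""
    chunks = split_into_chunks(text, max_tokens)
    if len(chunks) == 1:
        return summarize(chunks[0])
    partials = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(partials))
```

The same pattern recurses naturally: if the joined partial summaries still exceed the budget, feed them back through `summarize_large`.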

Then the other part of “identify” is knowing how much you can send to AI models while still leaving room for a response.
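As a rough sketch of that budgeting: the input allowance is the model's total context window minus whatever you reserve for the response and for your own prompt framing. The numbers below are assumptions for illustration; check your model's actual context window:

```python
CONTEXT_WINDOW = 16_385   # total tokens the model handles (example value)
MAX_RESPONSE = 1_000      # tokens reserved for the summary (your max_tokens)
PROMPT_OVERHEAD = 200     # system message, instructions, message framing

input_budget = CONTEXT_WINDOW - MAX_RESPONSE - PROMPT_OVERHEAD
print(input_budget)  # tokens left for the user's document
```

Anything over `input_budget` must be trimmed or split before the request is sent.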


Yep, sorry for the wrong term.

Yep, this is my case.

However, as far as I understand, if I preprocess the text beforehand (to filter out non-English symbols), I can expect to somehow calculate the approximate token size of my request?

Another question: is the token size the same for different requests with the same physical number of symbols? For example, I can ask to summarize a text, which is (I guess) a more complicated operation than, say, extracting the first sentence of each paragraph.

Can you clarify this idea of “room for a response” a little bit?