How to calculate max_tokens in the Completions API for text summarization?

Hi guys,

We have a requirement to summarize text (extracted from PDFs) using the Completions API (text-davinci-003 model).
A typical case is a PDF with a huge amount of text.
Could you please help me understand how to manage the token calculation, since we need to get the summary in a single call?

Check this out :wink:
https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb