I call the GPT API and keep the token count of each individual prompt and completion below the maximum limit. However, within a single conversation ID, when I submit the same prompt for the third time, I get a maximum-context-length error:
Model: gpt-4o

"This model's maximum context length is 128000 tokens. However, your messages resulted in 168833 tokens. Please reduce the length of the messages."
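The likely cause is that the 128,000-token limit applies to the *entire* message history sent with each request, not to a single prompt. If every previous prompt and completion is appended to the conversation and resent on each call, the total grows until it exceeds the context window, even though each individual message is small. A common fix is to trim the oldest turns before each call. Below is a minimal sketch; the function names (`estimate_tokens`, `trim_history`) and the 4-characters-per-token heuristic are illustrative assumptions, not part of the OpenAI SDK (use `tiktoken` for exact counts in production):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. This is an assumption;
    # a real implementation should use tiktoken for exact counts.
    return max(1, len(text) // 4)

def trim_history(messages, max_tokens=128_000, reserve=4_000):
    """Drop the oldest non-system messages until the history fits.

    `reserve` leaves headroom for the model's completion.
    `messages` is a list of {"role": ..., "content": ...} dicts,
    the same shape the Chat Completions API expects.
    """
    budget = max_tokens - reserve
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while rest and total(system + rest) > budget:
        rest.pop(0)  # discard the oldest turn first
    return system + rest
```

Before each API call, pass `trim_history(conversation)` instead of the raw accumulated list; the system message is preserved and only the oldest user/assistant turns are dropped.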