Encountered maximum token exceeded exception via API call

Hi,
I am using a prompt in my Java code to fetch some data using text-davinci-003, and I get the following error:

"message": "This model's maximum context length is 4097 tokens, however you requested 4116 tokens (116 in your prompt; 4000 for the completion). Please reduce your prompt; or completion length."

However, when I use the same prompt in the OpenAI Playground with the same request parameters, it returns a response of approximately 550 completion tokens. I verified this with a token estimator as well.

Here is a snippet of the request body:

{
  "model": "text-davinci-003",
  "prompt": "some prompt message",
  "temperature": 0.7,
  "max_tokens": 4000,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}

Has anyone encountered a similar issue via API access?

Note the warning in the Playground and the wording… "Up to 4,000 tokens…" The Playground is likely subtracting your prompt tokens from the 4k and capping the completion at whatever is left. The API makes no such adjustment: prompt tokens and max_tokens are counted together against the 4,097-token context window, so 116 + 4,000 = 4,116 is over the limit. You just need to make sure the sum stays under it.
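If you'd rather compute the budget than hard-code it, something like the sketch below works. This is a minimal illustration, not production code: the ~4-characters-per-token estimate is a crude assumption (a real tokenizer for the model's encoding would be more accurate), and TokenBudget and SAFETY_MARGIN are names made up for the example.

public final class TokenBudget {
    // text-davinci-003's total context window (prompt + completion), per the error message.
    private static final int CONTEXT_LIMIT = 4097;
    // Headroom in case the rough estimate undercounts the prompt.
    private static final int SAFETY_MARGIN = 50;

    // Crude heuristic: English text averages roughly 4 characters per token.
    // Swap in a real tokenizer for the model's encoding if accuracy matters.
    static int estimatePromptTokens(String prompt) {
        return (int) Math.ceil(prompt.length() / 4.0);
    }

    // Largest max_tokens value that keeps prompt + completion inside the window.
    static int maxCompletionTokens(String prompt) {
        return Math.max(CONTEXT_LIMIT - estimatePromptTokens(prompt) - SAFETY_MARGIN, 0);
    }

    public static void main(String[] args) {
        String prompt = "some prompt message";
        // Use this value for "max_tokens" in the request body.
        System.out.println("max_tokens = " + maxCompletionTokens(prompt));
    }
}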

Is there a reason you're setting max_tokens to 4,000? What are you trying to accomplish?

Welcome to the forum…

For reference, here's what the Playground shows for a similar prompt:

144 tokens in prompt
Up to 4,000 tokens in response

"This model can only process a maximum of 4,001 tokens in a single request, please reduce your prompt or response length."


Hi all, thank you for the suggestions. I was able to resolve this. Since I know the approximate token usage for each completion, I reduced max_tokens to 3000 and it worked. The response is usually within 2,000 tokens.
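For anyone else who hits this, the request body is unchanged apart from max_tokens:

{
  "model": "text-davinci-003",
  "prompt": "some prompt message",
  "temperature": 0.7,
  "max_tokens": 3000,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}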
