max_tokens not respected

I set max_tokens to 300 in my API request, but at most 20 tokens are used. Why, and how do I fix it?

If I make the same request in the OpenAI Playground, all 300 tokens are used.

Are you using the same model?

GPT-3 is notoriously bad at understanding word counts with a zero-shot prompt.

When you use the API, the prompt tokens and max_tokens have to fit in the model's context window together, so a long prompt leaves less room for the completion. Your prompt may be using most of the budget and not leaving enough for the completion.

I don't know about the Playground. Maybe there max_tokens only applies to the completion, and the prompt tokens are added on in the background. Just a thought; it might be the case.

How long is your prompt?
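If you want an exact number, here is a small sketch for counting prompt tokens locally (assuming the tiktoken package, which isn't mentioned in this thread):

```python
import tiktoken

# text-davinci-003 uses the p50k_base encoding
enc = tiktoken.encoding_for_model("text-davinci-003")

prompt = "3 paragraph fitness post"
prompt_tokens = len(enc.encode(prompt))

# text-davinci-003 has a 4,097-token context window; the prompt and the
# completion have to fit in it together.
context_window = 4097
print(f"prompt tokens: {prompt_tokens}")
print(f"room left for the completion: {context_window - prompt_tokens}")
```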


prompt = "3 paragraph fitness post"
model = text-davinci-003
max_tokens = 300
temperature = 0.7

I'm using the "text-davinci-003" model.
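For reference, a minimal sketch of that request against the legacy /v1/completions endpoint (the requests library, the environment-variable key, and the finish_reason check are my additions, not something shared in the thread):

```python
import os
import requests

# Reproduce the settings quoted above
resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-003",
        "prompt": "3 paragraph fitness post",
        "max_tokens": 300,
        "temperature": 0.7,
    },
)
data = resp.json()
print(data["choices"][0]["text"])
print(data["choices"][0]["finish_reason"])  # "length" = cut off by max_tokens, "stop" = model ended on its own
print(data["usage"])                        # prompt_tokens / completion_tokens / total_tokens
```

If finish_reason comes back as "stop" rather than "length", the model simply decided the post was finished on its own; max_tokens is only an upper limit, not a target length.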

I would recommend using one or two examples in your prompt… and using a stop sequence.
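Something along these lines, for example; the sample posts and the "###" separator are made up for illustration:

```python
# Hypothetical few-shot prompt: each example post ends with "###", so the
# model learns to finish its own post with the same marker.
few_shot_prompt = (
    "Write a 3 paragraph fitness post.\n\n"
    "Post:\nConsistency beats intensity, so train a little every day.\n###\n\n"
    "Write a 3 paragraph fitness post.\n\n"
    "Post:\nYou don't need a gym to start building strength at home.\n###\n\n"
    "Write a 3 paragraph fitness post.\n\n"
    "Post:\n"
)

payload = {
    "model": "text-davinci-003",
    "prompt": few_shot_prompt,
    "max_tokens": 300,
    "temperature": 0.7,
    "stop": ["###"],  # generation ends cleanly at the marker
}
```

The payload is sent exactly like the request shown earlier in the thread.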

- In Playground (without stop sequence)

- API result (without stop sequence)

- API settings (without stop sequence)

- API result (with stop sequence)


Here it says that there were no requests, and support is not responding.

I thought you said you made the request and had a problem with the completion length?

No. As it turns out, the problem is that the request is being made but an incomplete response is returned, and in the settings it shows that the secret keys were not used, even though I include them in the request.

But your screenshots upthread show that you did get a result.

Sounds like you need to reach out to support. Good luck!

https://help.openai.com/en/

I've already written three times, and I haven't even gotten a reply (((