Queries about generating multiple requests at a time on the davinci model and increasing the token limit

Hi friends, hope you are well. I am making an AI writing tool using the GPT-3 davinci model…

I have some questions about generating multiple requests and increasing the token limit. I would be very grateful if you could clarify these points.

  1. Since my SaaS application will be used by many users, many requests will be generated simultaneously. Can the OpenAI davinci model handle multiple requests at the same time and return output properly?

  2. I noticed that the default usage limit is a maximum of $120. Can I increase this limit for my application?


Hi NHaider634,

I will try to explain some of what we are doing on our side with our prototype: creating long texts, summarizing, tokenizing, and classifying, and also, in separate scenarios, generating parts of texts or inserting specific passages into previously created content, analyzing the context and connecting the ideas.

1 + 2 - If I understand correctly, you are referring to the request rate limits. If so, I suggest you take a look at:

Is API usage subject to any rate limits? | OpenAI Help Center
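In practice, the way I handle multiple simultaneous users is to send the requests concurrently and back off when the API returns a rate-limit error. Below is a minimal sketch using the Python openai library (the pre-v1 Completions interface); the engine name, prompts, worker count, and retry settings are my own illustrative assumptions, not official guidance:

```python
# Sketch: several concurrent completion requests with retry on rate limits.
# API key, engine name, prompts, and retry parameters are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def complete_with_retry(prompt, max_retries=5):
    """Call the Completions endpoint, retrying with exponential backoff
    when the API reports too many requests."""
    delay = 1.0
    for _ in range(max_retries):
        try:
            response = openai.Completion.create(
                engine="davinci",   # assumed engine name
                prompt=prompt,
                max_tokens=64,
            )
            return response.choices[0].text
        except openai.error.RateLimitError:
            # Hit the per-minute limit: wait, then try again with a longer delay.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Gave up after repeated rate-limit errors")

# Simulate several users hitting the backend at the same time.
prompts = [f"Write a short product tagline #{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(complete_with_retry, prompts))

for text in results:
    print(text.strip())
```

So the model itself has no problem with concurrent calls; what you have to manage on your side is staying under your account's requests-per-minute and tokens-per-minute limits.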

There are also limits associated with quotas (for your account and also for the organization the account is associated with). If your account is new, you can request a quota increase from OpenAI; I suggest you take a look at the specific quota-increase form at:

Form

Note: I also suggest you take a look at the documentation and the usage policies; they have helped me a lot. https://beta.openai.com/docs/usage-policies