Paid GPT API and Usage tiers

I’m using a paid GPT API, but it’s very slow.
I heard that usage tiers are assigned based on how much you have been billed. Is there any way to find out what tier I am currently in?

Hi @LEE.J.H,

There are different pricing tiers, like you said. The difference between tiers comes mainly from the prompt_tokens and completion_tokens used by the API.

Pricing: https://openai.com/pricing

Detailed explanation:

  1. All billing is done per 1,000 tokens used.

  2. For GPT-4 there are two variants: 8k and 32k. The 8k variant can process up to 8,192 tokens and the 32k variant up to 32,768 tokens (approximately 24,000 words). These token limits include both prompt_tokens and completion_tokens.

  3. You can set a limit on the number of tokens generated for each API call using the “max_tokens” parameter to avoid errors arising from token limits (see the sketch after this list).

  4. Similar to GPT-4, OpenAI provides other language models, such as GPT-3.5 Turbo, which has 4k and 16k variants. You can also fine-tune your own models using these models as a base (a fine-tuning sketch follows the resources below). Refer to resource 1) below for more information.
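For points 2) and 3), here is a minimal sketch of counting prompt tokens and capping the completion so that prompt plus completion stays inside the 8k context window. It assumes the openai (v1.x) and tiktoken Python packages and an OPENAI_API_KEY in your environment; the prompt text and the 512-token budget are purely illustrative.

```python
import tiktoken
from openai import OpenAI

MODEL = "gpt-4"          # 8k variant: 8,192-token limit shared by prompt and completion
CONTEXT_LIMIT = 8192

client = OpenAI()  # reads OPENAI_API_KEY from the environment
encoding = tiktoken.encoding_for_model(MODEL)

prompt = "Explain how OpenAI usage tiers relate to rate limits."
# Rough prompt-token count; chat formatting adds a few extra tokens per message
prompt_tokens = len(encoding.encode(prompt))

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": prompt}],
    # leave headroom so prompt_tokens + completion_tokens stays under CONTEXT_LIMIT
    max_tokens=min(512, CONTEXT_LIMIT - prompt_tokens),
)

print(response.usage.prompt_tokens, response.usage.completion_tokens)
print(response.choices[0].message.content)
```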

Resources:

  1. Pricing: https://openai.com/pricing
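On the fine-tuning mentioned in point 4), a minimal sketch using gpt-3.5-turbo as the base model is below. It assumes the openai (v1.x) package, and the training_examples.jsonl file name is just a placeholder for your own chat-formatted training data.

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job with gpt-3.5-turbo as the base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)
```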

Yes, what you would do is go to your account’s rate-limit page and look at the tokens-per-minute (TPM) rate.

If you have paid $50 or more into a prepaid account, that should have bumped you up to the 80,000 TPM level (after up to a week).
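You can also read your current limits programmatically: the API reports them in x-ratelimit-* response headers. Here is a minimal sketch, assuming the openai Python package (v1.x) and that your account has GPT-4 access; note it makes one (tiny) real request.

```python
from openai import OpenAI

client = OpenAI()

# with_raw_response exposes the HTTP response so the rate-limit headers can be inspected
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

print("TPM limit:    ", raw.headers.get("x-ratelimit-limit-tokens"))
print("TPM remaining:", raw.headers.get("x-ratelimit-remaining-tokens"))
print("RPM limit:    ", raw.headers.get("x-ratelimit-limit-requests"))

completion = raw.parse()  # the usual ChatCompletion object
print(completion.choices[0].message.content)
```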