Can someone explain the pricing model to me?

Hi there!

We are trying to build an app on top of the text-davinci API. I was working on the pricing strategy but got quite confused while testing it in the Playground.

So I asked it a couple of things, like a user would, and in the model usage it is split up like this:

Local time: 3. Jan. 2023, 14:10
text-davinci, 1 request
17 prompt + 70 completion = 87 tokens

Local time: 3. Jan. 2023, 14:10
text-moderation, 4 requests
369 prompt + 0 completion = 369 tokens

Local time: 3. Jan. 2023, 14:25
text-davinci, 2 requests
103 prompt + 35 completion = 138 tokens

Local time: 3. Jan. 2023, 14:25
text-moderation, 10 requests
1,268 prompt + 0 completion = 1,268 tokens

What are those “text-moderation” requests? Why am I being charged for them although I didn't select anything like that? Is there any way I can get rid of those charges?

Another question I have is about ongoing conversations:

If a user wants to keep an ongoing conversation, for example like this:

"Do you like dogs?

Yes, I absolutely love dogs! They’re such loyal and loving companions.

Why are they like that?

Dogs are loyal and loving because they form strong bonds with their owners and families. They are incredibly social animals and thrive on companionship."

For the second question, do I get billed for:

A : “Why are they like that?”

or

B: "Do you like dogs?

Yes, I absolutely love dogs! They’re such loyal and loving companions.

Why are they like that?"

It would be great if someone could help me out with these questions!

Thanks,
Basti

There is more detailed info on the Pricing page FAQ.

Specifically, the way you’re charged includes both the prompt tokens (how much text you use to ask for a result) and the completion tokens (the amount of text that is returned):

How is pricing calculated for Completions?

Completions requests are billed based on the number of tokens sent in your prompt plus the number of tokens in the completion(s) returned by the API.

I'm not sure exactly what the text moderation is, but you can see your token usage in your dashboard: https://beta.openai.com/account/usage
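To make that concrete, here is a rough sketch of how you could estimate the cost of a single request yourself. It assumes you count tokens with the tiktoken library and that one flat per-1K-token rate applies to prompt and completion alike; the $0.02 figure below is just a placeholder assumption, so check the Pricing page for the model you actually use.

```python
# Rough per-request cost estimate (sketch; the rate is an assumption,
# check the official Pricing page for the model you actually use).
import tiktoken

PRICE_PER_1K_TOKENS = 0.02  # assumed USD rate for a davinci-class model


def estimate_cost(prompt: str, completion: str, model: str = "text-davinci-003") -> float:
    enc = tiktoken.encoding_for_model(model)
    prompt_tokens = len(enc.encode(prompt))
    completion_tokens = len(enc.encode(completion))
    # You pay for prompt tokens plus completion tokens.
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS


print(estimate_cost("Do you like dogs?", "Yes, I absolutely love dogs!"))
```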


Moderation is a process where the input and the output are checked by the API for disallowed content like violence, hate, etc.
It’s not charged.
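If you want to run that check yourself in your own app (instead of relying on the Playground doing it for you), the call looks roughly like this. This is just a sketch using the openai Python package in the 0.x style that was current when this thread was written, and the API key is a placeholder:

```python
# Sketch: calling the (free) moderation endpoint yourself with the
# openai Python package, 0.x style.
import openai

openai.api_key = "sk-..."  # placeholder, use your own key


def is_flagged(text: str) -> bool:
    response = openai.Moderation.create(input=text)
    result = response["results"][0]
    # True if the text was flagged in any moderation category.
    return result["flagged"]


print(is_flagged("Do you like dogs?"))  # expected: False
```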


Can someone answer those questions, please?

A or B?

Unfortunately, I think the answer is B, so long conversations can become more and more expensive.


Every time you send a request to the GPT-3 API, you get charged for the prompt + completion. So, if you send the previous questions along (which is suggested, because that's how the bot gets a “memory” of sorts), you will be charged for them.

Hope that makes sense.
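To illustrate why the answer is B: a typical chatbot loop keeps appending to one transcript and resends the whole thing as the prompt on every turn, so the billed prompt tokens grow as the conversation goes on. A minimal sketch, again with the 0.x-style openai Python package; the model name, max_tokens and prompt format are placeholder assumptions, not something prescribed by the API:

```python
# Sketch of option B: the whole transcript is resent as the prompt on every
# turn, so prompt_tokens (and therefore cost) grow as the conversation goes on.
import openai

openai.api_key = "sk-..."  # placeholder

transcript = ""  # accumulated conversation "memory"


def ask(question: str) -> str:
    global transcript
    transcript += f"User: {question}\nAI:"
    response = openai.Completion.create(
        model="text-davinci-003",  # placeholder model name
        prompt=transcript,         # the full conversation so far
        max_tokens=256,
    )
    answer = response["choices"][0]["text"].strip()
    transcript += f" {answer}\n"
    usage = response["usage"]
    # Billed on every turn: prompt_tokens + completion_tokens.
    print(usage["prompt_tokens"], usage["completion_tokens"])
    return answer


ask("Do you like dogs?")
ask("Why are they like that?")  # prompt now includes the first Q&A, so prompt_tokens is larger
```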


Is the Playground also charging my account for prompt and response? Because it seems the AI has memory within the Playground.

Yes, of course, you get charged in the Playground too. Using ChatGPT itself is totally free, but sometimes you get no response because of capacity.
You can also subscribe to ChatGPT Premium for fast and unlimited use for $20.