What is the Prompt and Completion price in GPT-4 api?

On the openai page the gpt-4 api cost says

| Model | Prompt | Completion |
|---|---|---|
| 8K context | $0.03 / 1K tokens | $0.06 / 1K tokens |
| 32K context | $0.06 / 1K tokens | $0.12 / 1K tokens |

What is the difference, and how are they used differently? I'm confused about prompt and completion pricing.


Prompt would be what you send to the chat completion model, and completion would be the output that is sent back.

So think of it as “Prompt=input/question” “Completion=output/answer”
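To make the input/output split concrete, here is a minimal sketch of how the two rates combine for one request to the 8K-context model, using the prices quoted in the table above (the function name and the example token counts are just illustrative):

```python
# Prices quoted above for the gpt-4 8K-context model.
PROMPT_PRICE_PER_1K = 0.03      # dollars per 1K prompt (input) tokens
COMPLETION_PRICE_PER_1K = 0.06  # dollars per 1K completion (output) tokens

def gpt4_8k_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request/response pair: each side billed at its own rate."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K

# e.g. a 500-token question that gets a 700-token answer:
print(round(gpt4_8k_cost(500, 700), 4))  # → 0.057
```

So the same request can cost more or less depending on how long the model's answer turns out to be, since completions are billed at double the prompt rate.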


Yes. But what does “32k context” and “8k context” mean?

that’s the max token size for the conversation (input + output)
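In other words, the prompt and the completion share one token budget. A small sketch of the arithmetic, assuming the "8K" window is 8,192 tokens (the exact figure isn't stated in this thread):

```python
CONTEXT_LIMIT_8K = 8192  # assumption: the "8K" model's window in tokens

def max_completion_tokens(prompt_tokens: int, limit: int = CONTEXT_LIMIT_8K) -> int:
    """Room left for the model's answer after the prompt is counted."""
    return max(limit - prompt_tokens, 0)

# A 6,000-token prompt leaves only 2,192 tokens for the completion:
print(max_completion_tokens(6000))  # → 2192
```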

For what it’s worth, ChatGPT has access to all this information if you want to ask it.

Keep the 2021 training cutoff in mind, though — some of that info won't necessarily be current.


Not really.

I quizzed it today about pricing and it just made something up. I scraped the text off the pricing page, fed that to it, and asked again; it produced slightly less loopy stuff — related to the facts and well presented, but still made up and wrong.

It is a language model, not an expert system.

So, for gpt-4: if the sum of the prompt tokens and completion tokens is less than 8,000, the price is $0.03 per thousand prompt tokens and $0.06 per thousand completion tokens.

Between 8,000 and 32,000, the higher price applies.

32,000 is the upper limit.

Is that correct?

Who can help me? I have no idea how to get the GPT-4 API!

No. For gpt-4, the 8K context is one model and the 32K context is another model. If you use the 8K model, you get one set of prices for prompts and completions; if you use the 32K context model, the pricing is different. While size is obviously involved, it's the model you call that sets the rate — not how many tokens a particular request happens to use.
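That per-model rule can be sketched as a simple lookup table keyed by model name. The prices come from the table at the top of the thread; the `gpt-4-32k` identifier is my assumption for the 32K-context model's name:

```python
# Per-model pricing: the rate depends on which model you call,
# not on how many tokens a given request uses.
PRICES = {  # dollars per 1K tokens: (prompt rate, completion rate)
    "gpt-4":     (0.03, 0.06),  # 8K-context model
    "gpt-4-32k": (0.06, 0.12),  # 32K-context model (assumed model name)
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one request under the quoted per-model rates."""
    prompt_rate, completion_rate = PRICES[model]
    return (prompt_tokens / 1000) * prompt_rate \
         + (completion_tokens / 1000) * completion_rate

# The same 1,000 + 1,000 tokens cost twice as much on the 32K model:
print(round(request_cost("gpt-4", 1000, 1000), 4))      # → 0.09
print(round(request_cost("gpt-4-32k", 1000, 1000), 4))  # → 0.18
```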

I do not understand — I don't have that choice.

I can specify gpt-4 or (currently) gpt-4-0314

The model specified in the return header is gpt-4-0314 either way.

You do not have a choice if you do not have access to the gpt-4 32K context model.
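For anyone wanting to verify which snapshot actually served a request, the chat completion response reports both the resolved model and the token usage that pricing is based on. A sketch of reading those fields from a sample payload (the values here are illustrative, not a real API reply, but the `model`/`usage` shape matches the documented Chat Completions response):

```python
# Illustrative response payload — not a real API reply.
sample_response = {
    "model": "gpt-4-0314",  # the snapshot that actually handled the call
    "usage": {
        "prompt_tokens": 42,
        "completion_tokens": 58,
        "total_tokens": 100,
    },
}

usage = sample_response["usage"]
print(sample_response["model"])                                # which model you were billed for
print(usage["prompt_tokens"], usage["completion_tokens"])      # the two billable counts
```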