Does the current pricing apply to older models?

On the pricing page, there is a pricing distinction for the new GPT-4 model, but no similar details are provided for GPT-3.5. Does the price change also apply to older GPT-3.5 models? If not, where can I find the specific pricing for these older models? Additionally, does the change in context length for GPT-3.5 also affect the older models?


From the OpenAI API pricing page

GPT-3.5 Turbo

GPT-3.5 Turbo models are capable and cost-effective.

gpt-3.5-turbo is the flagship model of this family, supports a 16K context window and is optimized for dialog.

gpt-3.5-turbo-instruct is an Instruct model and only supports a 4K context window.

Learn about GPT-3.5 Turbo

| Model | Input | Output |
| --- | --- | --- |
| gpt-3.5-turbo-1106 | $0.0010 / 1K tokens | $0.0020 / 1K tokens |
| gpt-3.5-turbo-instruct | $0.0015 / 1K tokens | $0.0020 / 1K tokens |
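For anyone working out what a call actually costs at these rates, here's a minimal sketch. The rates come from the pricing table above; the function name and example token counts are just illustrative.

```python
# Rates per 1K tokens, taken from the pricing table quoted above.
RATES = {
    "gpt-3.5-turbo-1106":     {"input": 0.0010, "output": 0.0020},
    "gpt-3.5-turbo-instruct": {"input": 0.0015, "output": 0.0020},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call, billed per 1K tokens."""
    r = RATES[model]
    return (input_tokens / 1000) * r["input"] + (output_tokens / 1000) * r["output"]

# e.g. a 1,500-token prompt with a 500-token reply on gpt-3.5-turbo-1106:
print(f"${call_cost('gpt-3.5-turbo-1106', 1500, 500):.4f}")  # -> $0.0025
```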

Welcome to the forum. Hope you stick around!

Thank you for the reply and the information, but it didn’t answer my question. I need to know whether the new pricing applies to older models. Based on your answer, I assume not, but I would still like to know the pricing for older models if it’s available somewhere on the official page. Also, if possible, the maximum context length of each.

Sorry! You asked for GPT-3.5 prices, and I supplied them.

What older models are you referring to?

To these models. We are currently on 1106; in the announcement they said the GPT-4 pricing was only for the new model, but it wasn’t clear whether the same was true for GPT-3.5.
gpt-3.5-turbo-0301
gpt-3.5-turbo-0613
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
gpt-3.5-turbo-16k-0613

gpt-4-0314
gpt-4-0613
gpt-4-1106
gpt-4-vision-preview


Seconding this: the new pricing page is confusing.

Only two versions of 3.5 are mentioned, while many more are available. What is the pricing for all of those other models that aren’t listed on the pricing page?

Ah, I see what you mean now. As they’re older versions of 3.5, I believe it’s assumed they’re billed at the same rates. Clarification in the docs might be needed, though.

Thanks for pointing it out and bearing with me!


It’s clear that the whole new usage page is designed to damage your ability to discover or dispute charges.

All “gpt-3.5” charges are grouped into one billing section by day, not separated by model, and amounts are not shown with precision below $0.01.

The pricing page also likely intentionally removes non-preview models so that you could be billed undisplayed prices for them.

Then, OpenAI had the opportunity to put more fields of metadata in the models endpoint, but instead they removed fields.


Take all your calls from the usage “activities” page, day by day since 11-07, for all 3.5 models; extract the input and output token counts shown for each; and put them into a spreadsheet against the amount billed for that day. From that, you may be able to work out whether these models were billed at the old prices or the new ones.
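A minimal sketch of that reconciliation, assuming you've copied the daily token totals out by hand. The per-million rate schedules below are assumptions based on figures quoted in this thread, not an official list.

```python
# Candidate rate schedules: ($/1M input tokens, $/1M output tokens).
# Assumed from figures discussed in this thread.
OLD_16K = (3.00, 4.00)   # pre-devday gpt-3.5-turbo-16k
OLD_4K  = (1.50, 2.00)   # pre-devday gpt-3.5-turbo 4k
NEW     = (1.00, 2.00)   # gpt-3.5-turbo-1106

def predicted_cost(tokens_in: int, tokens_out: int, rates: tuple) -> float:
    rate_in, rate_out = rates
    return tokens_in / 1e6 * rate_in + tokens_out / 1e6 * rate_out

def best_fit(tokens_in: int, tokens_out: int, billed: float) -> str:
    """Return the schedule whose predicted cost is closest to the billed amount."""
    schedules = {"old 16k": OLD_16K, "old 4k": OLD_4K, "new": NEW}
    return min(
        schedules,
        key=lambda name: abs(predicted_cost(tokens_in, tokens_out, schedules[name]) - billed),
    )

# e.g. a day with 2,000,000 input + 500,000 output tokens billed at $3.00:
print(best_fit(2_000_000, 500_000, 3.00))  # -> new
```

With a row of this per day, you'd see quickly whether the billing matches old or new rates.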


Okay, I’ve found this: the pricing applies only to the latest models. However, they mention only the immediately previous model, so for gpt-3.5-turbo-0301 it’s likely that the original pricing, approximately 8x higher than the current rate, still applies.

  • GPT-4 Turbo input tokens are 3x cheaper than GPT-4 at $0.01 and output tokens are 2x cheaper at $0.03.
  • GPT-3.5 Turbo input tokens are 3x cheaper than the previous 16K model at $0.001 and output tokens are 2x cheaper at $0.002. Developers previously using GPT-3.5 Turbo 4K benefit from a 33% reduction on input tokens at $0.001. Those lower prices only apply to the new GPT-3.5 Turbo introduced today.
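Those multipliers can be sanity-checked with a few lines of arithmetic. The pre-devday rates below (in $/1M tokens) are my understanding from this thread and the announcement, not an authoritative list.

```python
# Rates in $ per 1M tokens; pre-devday figures are assumed from this thread.
gpt4          = {"in": 30.00, "out": 60.00}
gpt4_turbo    = {"in": 10.00, "out": 30.00}
turbo_16k_old = {"in":  3.00, "out":  4.00}
turbo_4k_old  = {"in":  1.50, "out":  2.00}
turbo_new     = {"in":  1.00, "out":  2.00}

print(gpt4["in"]  / gpt4_turbo["in"])            # 3.0  -> "3x cheaper" input
print(gpt4["out"] / gpt4_turbo["out"])           # 2.0  -> "2x cheaper" output
print(turbo_16k_old["in"]  / turbo_new["in"])    # 3.0  -> "3x cheaper" vs 16K
print(turbo_16k_old["out"] / turbo_new["out"])   # 2.0  -> "2x cheaper" vs 16K
print(1 - turbo_new["in"] / turbo_4k_old["in"])  # ~0.33 -> "33% reduction" vs 4K
```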

The only significant gpt-3.5 discount is in comparison to the prior 16k model, which the gpt-3.5-turbo-1106 preview doesn’t completely replicate in capabilities.

In comparison to the 4k model pricing before devday (of which only -instruct is still displayed on the pricing page), the input price of the new 3.5 model is reduced from $1.50 to $1.00 per million tokens, but the output price remains the same.

Regarding GPT-4, the turbo preview performs dramatically differently from the March model relative to its price, so a price difference was warranted based on the reduced GPT-4 computation we now receive per paid token, even without it being made “turbo”.