What model is ChatGPT Plus using?

By using the API, I can access all the GPT-4 models. Which one is being used in ChatGPT Plus? And what is its context length?
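For reference, you can check which GPT-4-family models your own API key can reach. A minimal sketch with the openai Python SDK (v1+); the prefix filter is just an illustration:

```python
# A minimal sketch (not official guidance): list the GPT-4-family models your
# API key can access, using the openai Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

gpt4_models = sorted(m.id for m in client.models.list() if m.id.startswith("gpt-4"))
for model_id in gpt4_models:
    print(model_id)
```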

ChatGPT Plus is using GPT-4 Turbo and has a context length of 32,000 tokens.

The language model most likely being used in ChatGPT is gpt-4-turbo-preview (plus the vision model).

For ChatGPT Plus, it is reasonable to assume that the 128K context of gpt-4-turbo-preview is being truncated to a 32K context.
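If so, the effect would be roughly like the sketch below: drop the oldest messages until the conversation fits a 32K-token budget. This is purely illustrative (the function name and the 32,000 budget are assumptions from this thread), counting only content tokens with tiktoken's cl100k_base encoding and ignoring per-message overhead:

```python
# Illustrative only: trim older messages so a conversation fits a 32K-token
# budget, approximating how a 128K model could be served behind a 32K window.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4-era tokenizer

def trim_to_budget(messages, budget=32_000):
    """Keep the most recent messages whose combined token count fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):
        n = len(enc.encode(msg["content"]))
        if total + n > budget:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))
```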

Is this still true? Btw, do you have a link that says which version is currently used for ChatGPT+ users?

Are there any usage limits applied on my account?

As of May 13, 2024, Plus users can send up to 80 messages every 3 hours on GPT-4o and up to 40 messages every 3 hours on GPT-4.
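Those limits are enforced on OpenAI's side; purely as an illustration of how a rolling 3-hour cap works, a hypothetical client-side tracker (not an OpenAI feature) could look like this:

```python
# Hypothetical sketch: track the rolling 3-hour message caps mentioned above
# (80 for GPT-4o, 40 for GPT-4) on the client side. Not an OpenAI feature.
from collections import deque
from time import time

WINDOW_SECONDS = 3 * 60 * 60
CAPS = {"gpt-4o": 80, "gpt-4": 40}
sent = {model: deque() for model in CAPS}

def can_send(model, now=None):
    now = time() if now is None else now
    q = sent[model]
    while q and now - q[0] > WINDOW_SECONDS:  # forget messages older than 3 hours
        q.popleft()
    return len(q) < CAPS[model]

def record_send(model, now=None):
    sent[model].append(time() if now is None else now)
```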

 


Latest models

| Model  | Input             | Output             |
|--------|-------------------|--------------------|
| gpt-4o | $5.00 / 1M tokens | $15.00 / 1M tokens |

Older Models

| Model       | Input              | Output              |
|-------------|--------------------|---------------------|
| gpt-4-turbo | $10.00 / 1M tokens | $30.00 / 1M tokens  |
| gpt-4       | $30.00 / 1M tokens | $60.00 / 1M tokens  |
| gpt-4-32k   | $60.00 / 1M tokens | $120.00 / 1M tokens |
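To make those prices concrete, a request costs input_tokens × (input price / 1M) + output_tokens × (output price / 1M). A small sketch using the figures above (the PRICES table and helper function are mine, just for illustration):

```python
# Per-request cost from the per-1M-token prices listed above (USD).
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "gpt-4o":      (5.00, 15.00),
    "gpt-4-turbo": (10.00, 30.00),
    "gpt-4":       (30.00, 60.00),
    "gpt-4-32k":   (60.00, 120.00),
}

def request_cost(model, input_tokens, output_tokens):
    inp, out = PRICES[model]
    return input_tokens * inp / 1e6 + output_tokens * out / 1e6

# Example: a 2,000-token prompt with an 800-token reply
print(f"{request_cost('gpt-4-turbo', 2_000, 800):.4f}")  # 0.0440 USD
print(f"{request_cost('gpt-4o', 2_000, 800):.4f}")       # 0.0220 USD
```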

 

In this context, it should be understood that GPT-4 refers to gpt-4-turbo.

 


Context window

| Free | Plus | Team | Enterprise |
|------|------|------|------------|
| 8K   | 32K  | 32K  | 128K       |

 

GPT-4 Turbo and GPT-4

| Model       | Context window |
|-------------|----------------|
| gpt-4-turbo | 128,000 tokens |
| gpt-4       | 8,192 tokens   |
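Putting the two tables together supports the reasoning below. A quick check (model windows from the table above, plus gpt-4-32k's 32,768-token window) of which models could even back ChatGPT Plus's 32K window:

```python
# Which API models could back the 32K ChatGPT Plus context window?
CONTEXT_WINDOW = {"gpt-4-turbo": 128_000, "gpt-4": 8_192, "gpt-4-32k": 32_768}
PLUS_WINDOW = 32_000

for model, window in CONTEXT_WINDOW.items():
    verdict = "can" if window >= PLUS_WINDOW else "cannot"
    print(f"{model}: {window:,} tokens -> {verdict} serve a 32K window")
```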

 

The context length of the original GPT-4 (the model that is neither gpt-4-turbo nor gpt-4o) is at most 8K, and gpt-4-32k costs six times as much as gpt-4-turbo for input and four times as much for output. It is therefore unreasonable to assume that either of those models is what is being offered with only half the message limit of gpt-4o.

Additionally, a 32K-context model would be inconsistent with the 128K context window offered under the Enterprise plan.

 

While I cannot be certain why this is never stated directly, piecing the available information together makes it clear that GPT-4 in ChatGPT refers to gpt-4-turbo.

 
