What model is ChatGPT Plus using?

By using the API, I can access all the GPT-4 models. Which one is being used in ChatGPT Plus, and what is its context length?


ChatGPT Plus is using GPT-4-Turbo and has a context length of 32,000 tokens.

The language model most likely being used in ChatGPT is gpt-4-turbo-preview (plus the vision model).

In ChatGPT Plus, it would be reasonable to assume that the 128K context of gpt-4-turbo-preview is being truncated to a 32K context.
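What such truncation could look like, as a minimal sketch: keep only the most recent messages that fit the smaller window, dropping the oldest first. OpenAI has not published how ChatGPT actually manages its context, so the function and token counts below are purely illustrative.

```python
# Hypothetical sketch of context-window truncation: walk the conversation
# from newest to oldest and keep the longest suffix that fits the window.
# Not OpenAI's actual implementation; an illustration of the idea only.

def truncate_to_window(messages, token_counts, window=32_000):
    """Return the longest suffix of `messages` whose tokens fit `window`."""
    total = 0
    kept = []
    # Newest messages first; stop once the window would overflow.
    for msg, n in zip(reversed(messages), reversed(token_counts)):
        if total + n > window:
            break
        total += n
        kept.append(msg)
    return list(reversed(kept))
```

With a 32K window, a conversation whose oldest message alone holds 30K tokens loses that message as soon as the newer messages fill the rest of the window.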

Is this still true? Btw, do you have a link that says which version is currently used for ChatGPT+ users?

Are there any usage limits applied on my account?

As of May 13th, 2024, Plus users can send 80 messages every 3 hours on GPT-4o and 40 messages every 3 hours on GPT-4.

 


Latest models

Model    Input               Output
gpt-4o   $5.00 / 1M tokens   $15.00 / 1M tokens

Older Models

Model        Input               Output
gpt-4-turbo  $10.00 / 1M tokens  $30.00 / 1M tokens
gpt-4        $30.00 / 1M tokens  $60.00 / 1M tokens
gpt-4-32k    $60.00 / 1M tokens  $120.00 / 1M tokens
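For reference, the API cost of a single request follows directly from these per-million-token prices; a minimal sketch (prices copied from the tables above, helper name my own):

```python
# Per-million-token prices (USD) from the pricing tables above.
PRICES = {
    "gpt-4o":      {"input": 5.00,  "output": 15.00},
    "gpt-4-turbo": {"input": 10.00, "output": 30.00},
    "gpt-4":       {"input": 30.00, "output": 60.00},
    "gpt-4-32k":   {"input": 60.00, "output": 120.00},
}

def request_cost(model, input_tokens, output_tokens):
    """Cost in USD of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

For example, a gpt-4-turbo request with 100K input tokens and 50K output tokens comes to $1.00 + $1.50 = $2.50.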

 

In this context, "GPT-4" should be understood to refer to gpt-4-turbo.

 


Context window

Free  Plus  Team  Enterprise
8K    32K   32K   128K

 

GPT-4 Turbo and GPT-4

MODEL        CONTEXT WINDOW
gpt-4-turbo  128,000 tokens
gpt-4        8,192 tokens

 

The context length of GPT-4 proper (which is neither gpt-4-turbo nor gpt-4o) is only up to 8K, and gpt-4-32k costs six times as much for input and four times as much for output as gpt-4-turbo. It is therefore unreasonable to assume that either of those models is what is being offered with only half the message limit of gpt-4o.
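The price ratios in that comparison follow directly from the pricing tables quoted earlier; as a quick arithmetic check:

```python
# Price ratios of gpt-4-32k relative to gpt-4-turbo, taken from the
# per-1M-token prices in the pricing tables above.
input_ratio = 60.00 / 10.00    # gpt-4-32k input vs gpt-4-turbo input
output_ratio = 120.00 / 30.00  # gpt-4-32k output vs gpt-4-turbo output
print(input_ratio, output_ratio)  # 6.0 4.0
```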

It would also be inconsistent with the 128K context offered under the Enterprise plan.

 

While I cannot be certain why it is not mentioned directly, piecing together the bits of information makes it clear that "GPT-4" in ChatGPT refers to gpt-4-turbo.

 


The ChatGPT documentation (not the API docs) about the Plus tier, after discussing o1 and 4o, uses the vague term "GPT-4":

In certain cases for Plus users, we may dynamically adjust the message limit based on available capacity in order to prioritize making GPT-4 accessible to the widest number of people.

But the most recent table entry on the models page, about "GPT-4o" (the columns are Model, Description, Context window, Max output tokens, Training data):

gpt-4o-2024-08-06
Latest snapshot that supports Structured Outputs. gpt-4o currently points to this version.
Context window: 128,000 tokens
Max output tokens: 16,384 tokens
Training data: Up to Oct 2023
Now, as an experiment, start a new conversation after selecting GPT-4o in the selector. In the first prompt, ask the instance to tell you its model name or specification, or whatever is the magic word… or words (and if anyone knows it, I would like to be informed).

Most of my recent experience is that they spit out GPT-4-turbo, if not when asked about the model, then when asked about their various memory limits…

They might actually think they are the GPT-4-turbo from the models page.
Models - OpenAI API

I have been trying to figure out how to make plans for some tasks that need to know those things to be efficient, and it is a lot of work, and long conversations don't end well most of the time. So I find this thread's question, the way it was formulated, the summit of restrained exasperation. Kudos! I second that question. It is still very relevant.

Also, the Plus-tier quote uses "message" limits. But I wonder how much of the documentation falls under the vague maximum-probability principle that emanates from this ChatGPT experience of mine.