Upgrade Policy & Timeline

I’m sure this is one of those obvious-answer FAQs, but I just upgraded to GPT-4 and surprisingly can’t find the info anywhere.

I understand that certain developers, etc., are now given access to the upgraded versions of GPT-4. I also saw the line about how GPT-4 Turbo was available to those who “pass the preview,” but I am unaware of how this works.

So, regarding the versions with the 128,000-token context window:

  • Is there public info on when they’re estimated to be released to the public?
  • Is there a way to find out more about how to qualify for them early, if applicable?

Thanks for your help. I would just ask GPT4 himself, but he still has amnesia for anything after April '23.

Hi @willfulton86 - welcome to the Forum. May I ask, are you referring to the ChatGPT interface or the APIs? As for APIs, the latest models are generally available to everyone. Here you can find the overview: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo

Yes, exactly…that’s the link; thanks. I’ve clicked the GPT-4 Turbo link there, and I cannot see anything about it being available. There’s certainly no link or option to purchase it. The body of the page talks about features and previews, but that’s it. How do non-specialists purchase it?

But it is available via the API :slight_smile: Just make an API call using, for example, the model gpt-4-0125-preview, and just like that you have access to it. The only GPT-4 model that’s not widely rolled out is gpt-4-32k; all other GPT-4 models are generally available to developers using the API.
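
To make that concrete, here’s a minimal sketch of such a call, assuming the official openai Python package (v1+) and an OPENAI_API_KEY set in your environment:

```python
from openai import OpenAI

# The client picks up OPENAI_API_KEY from the environment by default.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-0125-preview",  # 128k-context GPT-4 Turbo preview model
    messages=[
        {"role": "user", "content": "Say hello in one sentence."}
    ],
)

print(response.choices[0].message.content)
```

If that call succeeds, you already have access to the model; there’s nothing separate to purchase.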


To be even clearer: if you want to use the API, you need a payment method on file or prepaid credits. You don’t “purchase a model”; you pay for API usage.

If you just want to use GPT-4 via the ChatGPT interface, then you just have to be subscribed to ChatGPT Plus for USD 20/month. But ChatGPT and the API are two different products.


I get it now, thanks. I’m just so new to this that I didn’t know what a “call” was in the context of using the API. But GPT-3.5 filled me in on everything.

It’s unfortunate that the immense token limit can only be used on single statements or questions, though; I was picturing some real possibilities for books.

You can totally have it write a book (incrementally) by saying “here’s the previous story, complete the next paragraph in the style of Edgar Allan Poe” or whatever.
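
A rough sketch of that incremental loop, again assuming the official openai Python package with an OPENAI_API_KEY in your environment (the seed text, model choice, and token cap here are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

story = "It was a dark and stormy night."  # seed text (placeholder)

# Generate a few paragraphs incrementally, feeding the story so far back in each time.
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4-0125-preview",  # the 128k context leaves plenty of room for the story so far
        messages=[
            {"role": "system", "content": "You write prose in the style of Edgar Allan Poe."},
            {"role": "user", "content": f"Here is the story so far:\n\n{story}\n\nWrite the next paragraph."},
        ],
        max_tokens=400,  # cap each continuation; adjust to taste
    )
    story += "\n\n" + response.choices[0].message.content

print(story)
```

The large context window is what lets you keep passing the whole story back in as it grows, rather than summarizing or truncating it.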

That being said, the model isn’t that great at paying attention to all the tokens – there just aren’t enough attention heads. This massive chase for “very large contexts” seems a bit beside the point to me. The effort is probably better spent on figuring out how to pre-process information so that every token put into the model actually has value.