Request for Update on GPT-4-32k (or 16k?) Release

Hello OpenAI Community,

I hope this message finds everyone well. I’m reaching out to discuss something that has been intriguing me for a while: where is gpt-4-32k? From what I know, it was introduced about four months ago, but only to a select few.

The current GPT-4 model supports only up to 8k tokens, which, while impressive, is half of what the 16k version of GPT-3.5 can handle. I am curious why GPT-4-32k, or at the very least a GPT-4-16k version, has not been made generally available. I believe that transparency is key in such situations.

Moreover, if demand is a concern, a simple solution could be to price the model accordingly. Many of us recognize the value of such a powerful tool and would be willing to pay a premium for access, even while it is still in beta.

I would greatly appreciate any updates or insights regarding the release of GPT-4-32k.

Thank you for your time and consideration.

6 Likes

Welcome to the community @Starwave

To my knowledge, the gpt-4-32k rollout has started, but it is not generally available.

AFAIK there’s no 16k version of gpt-4. Did you mean gpt-3.5-turbo-16k?

Note: General availability of the gpt-4 model is different from that of gpt-4-32k.
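
To illustrate the distinction, here’s a minimal sketch using the openai Python package (the v0.x interface current as of this thread). gpt-4 and gpt-4-32k are separate model IDs that you pass explicitly, and a key that hasn’t been granted 32k access will simply get an invalid-request error for that model. The key placeholder and toy prompt are just illustrative.

```python
import openai

openai.api_key = "sk-..."  # your API key

# The base model and the 32k-context variant are distinct model IDs.
# Requesting "gpt-4-32k" fails with a model-not-found style error
# unless your account has been granted access to it.
for model in ("gpt-4", "gpt-4-32k"):
    try:
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": "Hello!"}],
            max_tokens=16,
        )
        print(model, "->", response.choices[0].message.content)
    except openai.error.InvalidRequestError as e:
        print(model, "-> not available on this account:", e)
```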

Thanks for the welcome! @sps

GPT-4-32k had an initial rollout back around the March to May time frame, but it went to very few people and then it stopped. The recent article here, which was updated this week, states that “We are not currently granting access to GPT-4-32K API at this time, but it will be made available at a later date.” Can’t we get more details than that, like why or when?

Regarding the 16k version, my point was more of a comparison between the two models. If GPT-3.5 can handle 16k tokens, I’m curious why GPT-4, a newer and more advanced model, is still at 8k tokens for its general version. It would seem logical for it to at least match its predecessor: upgrade the general version to 16k while keeping the 32k as a limited release. Why hasn’t that been considered?

Thanks again for engaging in this discussion.

1 Like

Hi @Starwave, I hope this advice finds you well. I am writing to address the matter of GPT-4 API access, which you inquired about recently. As you may be aware, the GPT-4 API has been the subject of considerable interest and anticipation due to its potential to revolutionize natural language processing and AI applications.

I understand your enthusiasm and eagerness to explore the possibilities offered by GPT-4. However, I regret to inform you that at this time, OpenAI has not yet made the decision to roll out GPT-4 API access to a wider user base. The current availability of the GPT-4 API remains limited to select partners and developers.

One possible reason behind this selective rollout could be related to computational resources. The development and deployment of sophisticated AI models like GPT-4 demand substantial computational power and resources. OpenAI may be carefully managing the availability to ensure a stable and reliable experience for their current users.

As the technology landscape continues to evolve, OpenAI may periodically assess the scalability and stability of GPT-4, considering factors such as computational infrastructure, user feedback, and fine-tuning of the model. This approach is common in the AI industry, where controlled rollouts enable providers to gather valuable insights and make necessary adjustments before expanding access to a broader audience.

While it is unfortunate that GPT-4 API access is not yet available to you, I want to assure you that OpenAI remains committed to further enhancing and refining their offerings. As trusted partners in this rapidly advancing field, we can expect more opportunities to access cutting-edge AI technologies in the future.

In the meantime, I encourage you to stay updated with OpenAI’s announcements and developments. Their commitment to responsible AI innovation is commendable, and I am optimistic that they will continue to make strides in democratizing access to advanced AI capabilities.

Please feel free to reach out if you have any further questions or if there is anything else I can assist you with. I value our collaborative efforts and remain eager to explore the potential of AI advancements together.

Thank you for your understanding, and I look forward to our continued partnership.

(Written by ChatGPT as a response to ChatGPT, not any kind of official statement besides what OpenAI has already said, and I forgot to include “32k”)

The rollout should have already happened if you have spent more than $1 with the OpenAI API. I think it’s supposed to be available to everyone by 8/1. I hope this is true.

Yes, but only the 8k version. I’ve had access to that since May. My enquiry is specifically about the 32k version (or a possible 16k version).

3 Likes

There has been no announcement regarding the expected general availability of gpt-4-32k; it’s currently invite-only.

Yes, because of that token limit I’m still using gpt-3.5-turbo-16k. For some things you just need the extra tokens.
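
If it helps, here’s roughly how I decide when a prompt won’t fit: count the tokens with tiktoken’s cl100k_base encoding (the one gpt-3.5-turbo and gpt-4 both use) and compare against the context window. The reply budget below is just an illustrative assumption, and chat-format overhead is ignored for simplicity.

```python
import tiktoken

# cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4.
enc = tiktoken.get_encoding("cl100k_base")

def pick_model(prompt: str, reply_budget: int = 1500) -> str:
    """Choose a model whose context window fits the prompt plus the
    room we want to leave for the reply. Context sizes are the ones
    discussed in this thread; per-message chat overhead is ignored."""
    needed = len(enc.encode(prompt)) + reply_budget
    if needed <= 8192:
        return "gpt-4"              # fits in the 8k window
    if needed <= 16384:
        return "gpt-3.5-turbo-16k"  # too big for 8k, fits in 16k
    raise ValueError(f"Needs ~{needed} tokens; even 16k is too small")

print(pick_model("Summarize the following transcript: ..."))
```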

I think they’re primarily rolling it out to businesses via Azure. I received GPT-4-32k for my startup on Azure a while ago, but I still have just the 8k model on the main OpenAI API.

Maybe I should move my app to Azure instead of Google Cloud?

I wonder if the latency or response time would be better when querying OpenAI on Azure vs Google Cloud. I’m also using OpenAI Whisper, so I’ve already invested a lot in OpenAI.

I guess the only problem is that I’d need to wait in the queue again just to get access to GPT-4 8k on Azure, and I can’t survive with just GPT-3.5.
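
In case anyone else is weighing the same move, this is roughly what the switch looks like in code with the openai Python package’s Azure mode. The resource name, deployment name, key, and API version string below are placeholders, and you still need to be approved for Azure OpenAI (and for GPT-4 there) separately.

```python
import openai

# Azure OpenAI uses the same client library but a different endpoint,
# its own keys, and a per-deployment name instead of a raw model ID.
openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"            # example API version from 2023
openai.api_key = "<your-azure-openai-key>"   # from the Azure portal

response = openai.ChatCompletion.create(
    engine="gpt-4-32k-deployment",  # the name of *your* deployment, not the model ID
    messages=[{"role": "user", "content": "Hello from Azure!"}],
)
print(response.choices[0].message.content)
```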

Any progress on this request? I’m at that point now with a project I’m working on that really needs 32k access. I’m curious, did you decide to move to Azure to get access to the 32k model? How does that work exactly?

2 Likes

Waiting for gpt-4-32k too :slight_smile:

Do you guys know when this will be available to everyone with access to gpt-4?

1 Like

Going to throw my hat in the ring here. I would like access to GPT-4-32k.

Haha… I am still waiting for this too. :smiley:

1 Like

We need access to GPT-4-32k; we are also waiting :pray:

How can you guys afford 32k? It’s so expensive!! Haha

It’s expensive in absolute terms versus something like GPT-3.5-turbo, but relative to its performance it’s quite reasonable. It’s also still in alpha, so there is very little in the way of commercial use; what does exist is usually on projects that are well funded or have customers with specific needs who are willing to pay for it.

What surprises me is that GPT-3.5’s context length is 16k, yet the more capable GPT-4 is still at 8k. Why not at least bump it from 8k to 16k?

1 Like

What surprises me is that GPT-4’s context length is 8k, yet the more affordable GPT-3.5’s context length is still 4k. It’s not like GPT-4 will write over 1000 tokens without a jailbreak now anyway. Why not at least bump it down from 8k to 4k?