GPT-4 32k Model Availability

Hi OpenAI team and fellow developers,

I am reaching out on behalf of a company called Phyre (organization ID: org-kRFvAzzyBYPynnIybhF2gn0e). We are automating banking and payment processing by removing the human element from underwriting. However, for around 60% of our cases the prompt exceeds the 8,192-token limit; so far we have not encountered a use case above the 32k limit.

Could you grant us access to the 32k model, or clarify when it will be released to developers? OpenAI support has not replied to our request, and there is no information on your website about the release of, or access to, the 32k model. Thank you very much for your time.
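
For context, here is a rough sketch of how a prompt can be checked against those limits with tiktoken; the model name, limits, and document text below are placeholders rather than our actual pipeline.

```python
import tiktoken

# Count the tokens in a prompt before sending it, to see whether the case
# fits the 8,192-token context or would need the 32k model.
MODEL = "gpt-4"
LIMIT_8K = 8192
LIMIT_32K = 32768

encoding = tiktoken.encoding_for_model(MODEL)

underwriting_prompt = "..."  # full prompt text for one underwriting case
n_tokens = len(encoding.encode(underwriting_prompt))

if n_tokens <= LIMIT_8K:
    print(f"{n_tokens} tokens: fits the 8k context")
elif n_tokens <= LIMIT_32K:
    print(f"{n_tokens} tokens: needs the 32k model")
else:
    print(f"{n_tokens} tokens: exceeds even the 32k context")
```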
Best regards,

Oliver and the team at Phyre

Welcome to the forum!

You might find the replies from Codie regarding Azure useful:

https://community.openai.com/search?q=azure%20%40codie

Welcome to the developer forum.

Unfortunately, 32K model access is currently in a very limited, invite-only alpha. I have no problem leaving your post here in case it is seen by a member of OpenAI staff, but it is unlikely to have an impact.

One of the best ways to gain access is to create an evaluation set for the 32k model via the Evals framework. If you have a good set of example inputs and expected outputs for evaluating model performance, submitting an eval is your best route.
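
For reference, here is a minimal sketch of what a samples file for such an eval could look like; the "input"/"ideal" layout follows the public openai/evals match-style sample format, and the underwriting content is invented for illustration.

```python
import json

# Build a samples.jsonl file: each line holds a chat-style "input"
# and the "ideal" answer the model's output is matched against.
samples = [
    {
        "input": [
            {"role": "system", "content": "You are an underwriting assistant."},
            {"role": "user", "content": "Classify the risk tier for the attached merchant statement."},
        ],
        "ideal": "Tier 2",
    },
]

with open("samples.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```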

Getting an eval merged does not give you 32k access; it only gives access to the standard 8k model.

Source: I had one merged a few months ago.

Demand for the GPT-4 32K model at my company is increasing (not under this account). The 8K-token model falls short of delivering consistently smart responses. Are there any plans to launch a model with a larger context window?

I read a few other posts suggesting that reaching out to the OpenAI Sales team might be helpful (but it will require paying upfront for a large subscription).

Hi Brian! I may be able to help. Feel free to DM me.

@oliver_gueorguiev did you have any success here? We’re in the same situation.