It looks like GPT-4-32k is rolling out

I really need 32k as well, can anyone from OpenAI help, I would create a community in my country for contribution.

1 Like

I haven't even got standard GPT-4 API access yet.

2 Likes

Devs if youā€™re reading this, would really love access for the apps my 15-person team is building!

Wow you have a really large team! :open_mouth:
What type of app are you trying to make?

1 Like

Curt, pleasure to read your updates. Curious if you made any special request for 32k access? I'm a Python developer and stock hobbyist with a recent project that requires the 32k model. Technically I'm using the 16k 3.5 model when my stock data comparison is over 6,500 tokens; otherwise I use the GPT-4 model. Interestingly, GPT-4 consistently picks the same results after analyzing all my stock input data, whereas 3.5 seems to give a slightly more random response with the analysis.

I'm curious if you would be willing to run a couple of tests for me using the 32k model, and I'd be more than willing to pay you for the cost via Venmo or Cash App. I've been coding this program for three months now in anticipation that I'll eventually get 32k access, but I need to know whether all my hard work is going to pay off. And if it works and you are interested in the results and data, I'll share them with you. We should talk!

Also, I might not see your reply here, so if you want, hit me up on my Gmail: mrbirmingham (AT) Gmail

Use a ChatGPT clone with your API key:

  1. BetterChatGPT
  2. ChatGPT-web
  3. Anse
  4. YakGPT
  5. Chatpad AI

(In my subjective order of best to worst.)

Edit: Oh I can paste my whole table here:

| Name | System | Profiles | User MD | Tokens | Cost | Long Title | ``` | 📋 | -3.5-turbo | -0301 | -0613 | -16k | gpt-4 | -0314 | -0613 | -32k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BetterChatGPT | Y | N | Y | Y | Y | Y | Y | Y | Y | N | N | Y | Y | N | N | Y |
| ChatGPT-web | Y | Y | Y | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N |
| Anse | P | N | Y | N | N | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y |
| YakGPT | N | N | P | N | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | N |
| Chatpad AI | N | N | P | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
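All of these clones wrap the same HTTP endpoint, so if you'd rather skip the UI entirely, here's a minimal sketch of the call they make. The key is a placeholder, and `gpt-4-32k` will only work once your account actually has access to it:

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder; substitute your own key


def chat(messages, model="gpt-4-32k"):
    """Minimal chat-completion call against the OpenAI API.

    messages is the usual list of {"role": ..., "content": ...} dicts.
    """
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Every clone in the table is essentially this call plus a front end for profiles, token counting, and cost tracking.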

How can I access GPT-4? | OpenAI Help Center was updated yesterday to say:

> We are not currently granting access to GPT-4-32K API at this time, but it will be made available at a later date.

C'monnnnn, I want it to process entire source code files… :persevere:

2 Likes

If you can't process entire source code files with gpt-3.5-turbo, the code most probably isn't worth it.

gpt-3.5-turbo-16k isn't smart enough to be useful; it can't follow directions or even analyze code. For example, I defined

RSS_TIMEOUT = 0x13
…
RSS_FAILED = 0x23

and in its response it said

… and if it is equal to 0x23 (RSS_TIMEOUT), it sets the cResult variable to 1.

So it can't even remember simple relationships.

Show me a guy who produces bug-free code after giving him nothing but the same prompt you are sending to gpt-3.5, and then we're talking.

2 Likes

Unfortunately, when working with a language that's not frequently found on the internet, you'll need a lot of context just to describe the language and libraries. And then you run into the problem that there aren't enough attention heads, and it doesn't pay attention to all the rules of the language. gpt-3 can't do it at all (even with fine-tuning), gpt-3.5 is bad at it, gpt-4 is less bad. But, honestly, a fine-tuned mpt-7b has done the best for me in this use case so far. I would expect a fine-tuned gpt-4 to trounce that model, though.

1 Like

Fine-tuning of gpt-4 is on the map for this year, according to the latest announcement :smiley:

1 Like

Yeah, when you start analysing esoteric languages like Brainfuck.
Especially deep complexity doesn't work; BASIC code with 2 million GOTOs might become difficult.
There are devs who can read such code, though. I know, because I am one of them.

That doesn't make the model unusable at all.
It's just that you can't rely on the model alone.

Use a linter, debugger, compiler, etc. as well, and teach the model to use them.

My evaluations are quite expensive, though. Even on gpt-3.5 you hit hundreds of dollars on a small codebase very quickly.
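The "use a compiler as well" idea above can be sketched as a small feedback loop: ask the model for code, compile-check the result, and feed the diagnostics back until it passes. `ask_model` here is a placeholder for whatever chat call you use (everything else is stdlib):

```python
import subprocess
import sys
import tempfile


def lint(source: str) -> str:
    """Compile-check Python source; return diagnostics, or '' if clean."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        [sys.executable, "-m", "py_compile", path],
        capture_output=True, text=True,
    )
    return result.stderr


def refine(ask_model, task: str, max_rounds: int = 3) -> str:
    """Ask the model for code and loop compiler errors back until it passes."""
    code = ask_model(task)
    for _ in range(max_rounds):
        errors = lint(code)
        if not errors:
            break
        code = ask_model(
            f"{task}\nYour last attempt failed to compile:\n{errors}\nFix it."
        )
    return code
```

This is also where the cost bites: every round of the loop is another full-context call, which is how a small codebase burns through money quickly.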

Thu, 16 Mar, 12:12 (Mountain time): woah, you got it a minute before me!

1 Like

Any human coder would be able to produce bug-free code given nothing but the same prompt I am sending to gpt-3.5. It's just uselessly dumb.

Show me one and I'll write the code.
Pretty sure I can mess it up.

2 Likes

Summary created by AI.

The GPT-4-32K model has begun to roll out and is reportedly available in the Playground. The model can accept a payload, such as a postmodern fiction piece, and generate meaningful outcomes with increased context length. Curt.kennedy claimed to have seen the model pop up in the Playground and shared a screenshot of his account to substantiate his claim. Other users reported being unable to access the model but revealed that the 8K model was available to them. Several users speculated that the roll-out might be proceeding based on the order in which people joined the waitlist.

The extended context window of the new model could be useful for applications such as Q/A chatbots, large data summarization, and understanding. The extended context could also potentially allow for more advanced reasoning in factual-rich domains. However, some members noted that the response time could be slow, and the cost is higher, with input priced at $0.06 / 1K tokens and output at $0.12 / 1K tokens.
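Using the prices quoted above, a back-of-envelope cost for a single call is easy to sketch (the 30K/2K token split is just an illustrative near-maxed-out 32k request):

```python
# USD per 1K tokens for gpt-4-32k, as quoted in this thread.
PRICE_IN, PRICE_OUT = 0.06, 0.12


def call_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one call at the quoted 32k rates."""
    return prompt_tokens / 1000 * PRICE_IN + completion_tokens / 1000 * PRICE_OUT


# A 30K-token prompt with a 2K-token answer:
# 30 * 0.06 + 2 * 0.12 = $2.04 per call
print(call_cost(30_000, 2_000))
```

At roughly two dollars per full-context call, "slow and expensive" is a fair summary for heavy use.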

There were also discussions around the actual context window of the model, with some claiming that it doesn't seem like an actual 8K context. Some users noted limitations in ChatGPT that make it conversational and prevent the use of all tokens in one go. Other users maintained that the new model surpasses an API containing 8K tokens, with some reporting a significant increase in speed with the GPT-4-32K model.

1 Like

OMG! Thatā€™s perfect!

We need summarization, and we need it now!

The real question, though: did this use GPT-4-32k? :face_with_monocle:

That would be a @sam.saffron question.

Rephrased, in case Sam sees this post:

For the creation of a summary using the Discourse AI plugin on the OpenAI forum, which LLM/model was used?

1 Like

@EricGT It would be so "meta" if it was the 32k model that was used.

Speaking of meta, this post just hit 200 replies :crazy_face:

Thank you to all who posted!

Fingers crossed that GPT-4 32k will become generally available here soon, just like GPT-4 8k. :crossed_fingers: