I really need 32k as well. Can anyone from OpenAI help? I would create a community in my country for contribution.
I haven't even got standard GPT-4 API access yet.
Devs, if you're reading this, I would really love access for the apps my 15-person team is building!
Wow you have a really large team!
What type of app are you trying to make?
Curt, pleasure to read your updates. Curious if you made any special request for 32k access? I'm a Python developer and stock hobbyist with a recent project that requires the 32k model. Well, technically I'm using the 16k 3.5 model when my stock data comparison is over 6,500 tokens; otherwise I use the GPT-4 model. Interestingly enough, GPT-4 consistently picks the same results after analyzing all my stock input data, whereas 3.5 seems to give a slightly more random response with the analysis. I'm curious if you would be willing to run a couple of tests for me using the 32k model, and I'd be more than willing to pay you for the cost via Venmo or Cash App. I've been coding this program for three months now in anticipation that I will eventually get 32k access, but I need to know if all my hard work is going to pay off. And yeah, if it works and you are interested in the results and data, I'll share with you. We should talk!
Also, I might not see your reply here, so if you want, hit me up on my Gmail: mrbirmingham (AT) Gmail
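For what it's worth, the token-threshold routing described above (16k 3.5 when the prompt exceeds ~6,500 tokens, GPT-4 otherwise) can be sketched like this. The function names are mine, and the ~4-characters-per-token estimate is only a rough heuristic; a real tokenizer such as tiktoken would give exact counts:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Swap in a real tokenizer (e.g. tiktoken) for exact counts.
    return max(1, len(text) // 4)

def pick_model(prompt: str, threshold: int = 6500) -> str:
    # Route to the 16k-context model only when the prompt is too
    # large for gpt-4's 8k window to handle comfortably.
    if estimate_tokens(prompt) > threshold:
        return "gpt-3.5-turbo-16k"
    return "gpt-4"
```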
Use a ChatGPT clone with your API key:
(In my subjective order of best to worst.)
Edit: Oh I can paste my whole table here:
| Name | System | Profiles | User MD | Tokens | Cost | Long | Title | ``` (code) | -3.5-turbo | -0301 | -0613 | -16k | gpt-4 | -0314 | -0613 | -32k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BetterChatGPT | Y | N | Y | Y | Y | Y | Y | Y | Y | N | N | Y | Y | N | N | Y |
| ChatGPT-web | Y | Y | Y | Y | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N |
| Anse | P | N | Y | N | N | Y | Y | N | Y | Y | Y | Y | Y | Y | Y | Y |
| YakGPT | N | N | P | N | N | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | N |
| Chatpad AI | N | N | P | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y |
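Whichever client you pick, they all ultimately send the same kind of request to the chat completions endpoint with your API key. A minimal sketch of that payload (the endpoint URL and field names are the public API shape; the helper function name is mine):

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(model: str, user_message: str, max_tokens: int = 512) -> dict:
    # Payload shape for the OpenAI chat completions endpoint;
    # any client in the table above sends something like this.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

payload = build_request("gpt-4-32k", "Summarize this thread.")
body = json.dumps(payload)
# POST `body` to API_URL with any HTTP client, adding the header
# Authorization: Bearer <your OPENAI_API_KEY>.
```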
The "How can I access GPT-4? | OpenAI Help Center" page was updated yesterday to say:
> We are not currently granting access to GPT-4-32K API at this time, but it will be made available at a later date.
C'monnnnn, I want it to process entire source code files…
If you can't process entire source code files with gpt-3.5-turbo, the code most probably isn't worth it.
gpt-3.5-turbo-16k isn't smart enough to be useful; it can't follow directions or even analyze code. For example, I defined

```
RSS_TIMEOUT = 0x13
…
RSS_FAILED = 0x23
```

and in its response it said

> … and if it is equal to 0x23 (RSS_TIMEOUT), it sets the `cResult` variable to 1.

So it can't even remember simple relationships.
Show me a guy who produces bug-free code after giving him nothing but the same prompt you are sending to gpt-3.5, and then we are talking.
Unfortunately, when working with a language that's not frequently found on the internet, you'll need a lot of context just to describe the language and libraries. And then you run into the problem that there aren't enough attention heads, and it doesn't pay attention to all the rules of the language. gpt-3 can't do it at all (even with fine-tuning), gpt-3.5 is bad at it, gpt-4 is less bad. But, honestly, a fine-tuned mpt-7b has done the best for me in this use case so far. I would expect a fine-tuned gpt-4 to trounce that model, though.
Yeah, when you start analysing esoteric languages like Brainfuck. Especially deep complexity doesn't work; BASIC code with 2 million GOTOs might become difficult.
There are devs though who can read such code. I know, because I am one of them.
That doesnāt make the model unusable at all.
It's just that the model alone isn't enough.
Use a linter, debugger, compiler, etc. as well, and teach the model to use them.
My evaluations are quite expensive, though. Even on gpt-3.5 you hit hundreds of dollars on a small codebase very quickly.
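That "model plus tools" loop can be sketched roughly like this, using Python's built-in `compile()` as a stand-in for a linter/compiler and a `generate` callable standing in for whatever model call you use (both names and the loop structure are mine, not a specific library's API):

```python
from typing import Callable, Optional

def check_python_source(code: str) -> Optional[str]:
    # Compile-check the snippet; return the error text to feed back
    # to the model, or None if it parses cleanly.
    try:
        compile(code, "<generated>", "exec")
        return None
    except SyntaxError as exc:
        return f"{exc.msg} at line {exc.lineno}"

def repair_loop(generate: Callable[[str], str], code: str,
                max_rounds: int = 3) -> str:
    # Feed tool output back to the model until the code checks out
    # or we run out of rounds. Each round costs another API call,
    # which is where the evaluation bills add up.
    for _ in range(max_rounds):
        error = check_python_source(code)
        if error is None:
            return code
        code = generate(f"Fix this error: {error}\n\n{code}")
    return code
```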
Thu, 16 Mar, 12:12 (Mountain Time): wooah, you got it a minute before me!
Any human coder would be able to produce bug-free code after being given nothing but the same prompt I am sending to gpt-3.5. It's just uselessly dumb.
Show me one and I'll write the code.
Pretty sure I can mess it up.
Summary created by AI.
The GPT-4-32K model has begun to roll out and is reportedly available in the Playground. The model can accept a payload, such as a postmodern fiction piece, and generate meaningful outcomes with increased context length. Curt.kennedy claimed to have seen the model pop up in the Playground and shared a screenshot of his account to substantiate his claim. Other users reported being unable to access the model but revealed that the 8K model was available to them. Several users speculated that the roll-out might be proceeding based on the order in which people joined the waitlist.
The extended context window of the new model could be useful for applications such as Q&A chatbots and large-scale data summarization and understanding. The extended context could also potentially allow for more advanced reasoning in fact-rich domains. However, some members noted that the response time could be slow and the cost is higher, with input priced at $0.06 / 1K tokens and output at $0.12 / 1K tokens.
There were also discussions around the actual context window of the model, with some claiming that it doesn't seem like an actual 8K context. Some users noted the limitations in ChatGPT that make it conversational and prevent the use of all tokens in one go. Other users maintained that the new model surpasses an API containing 8K tokens, with some reporting a significant increase in speed with the GPT-4-32K model.
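At the quoted rates ($0.06 / 1K input, $0.12 / 1K output), the per-request arithmetic adds up quickly. A quick sketch (the helper name and the example token counts are mine):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_rate: float = 0.06,
                     output_rate: float = 0.12) -> float:
    # Defaults are the quoted gpt-4-32k prices per 1K tokens.
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# One large 32k-context call, e.g. 24k tokens in and 8k out:
cost = request_cost_usd(24_000, 8_000)  # 24 * 0.06 + 8 * 0.12 = $2.40
```

So a few hundred maxed-out calls over a codebase is already into the hundreds of dollars, which matches the cost complaints above.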
OMG! Thatās perfect!
We need summarization, and we need it now!
The real question, though: did this use GPT-4-32k?
That would be a @sam.saffron question.
Rephrased, in case Sam sees this post:
For the creation of a summary using the Discourse AI plugin on the OpenAI forum, which LLM/model was used?
@EricGT It would be so "meta" if it was the 32k model used.
Speaking of meta, this post just hit 200 replies
Thank you to all who posted!
Fingers crossed that GPT-4 32k will become generally available here soon, just like GPT-4 8k.