GPT-4 is OpenAI’s most advanced system (and it’s here...)

Where’s the option to upload images for it to look at? Why is it so slow? Also, it still has the character limit, yet in the video they said it can produce up to 15,000 characters???

From the Discord channel FAQs, @Manbot12


I can’t access the Discord; it says I’m banned, yet I was only in there for 5 minutes. What the hell?


Here is a screenshot of the GPT-4 FAQ from the Discord server:


Just got access to the API. I’m considering whether it’s worth it to sleep tonight or not :rofl:


Thank you for creating such interesting and helpful models. I’m not sure this is the right place to ask questions, but I have a particular one; please let me know if there’s somewhere more appropriate. I’ve been thinking about the agentic behavior of GPT-4 mentioned in your research document. It occurred to me that any agentic behavior would require a loop, and as far as I know, the only loop in a transformer is the one for next-token generation. Is it possible that GPT-4 has learned to maintain state by passing information through that loop? If so, it would be easy enough to observe what is being passed as input for the next round of token generation. I hope this is helpful.
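The loop being described can be sketched in a few lines: a toy autoregressive decoder whose only recurrence is appending its own output tokens to the context and feeding them back in. The `model` callable here is a stand-in for illustration, not any real API; the point is just that the growing token list is the only place "state" could live.

```python
# Toy sketch of the next-token generation loop (the 'model' here is a
# hypothetical stand-in, not a real transformer or API).
def generate(model, prompt_tokens, max_new_tokens):
    """The only recurrence in decoding: everything generated so far is
    fed back into the model on every step, so any 'state' an agent
    maintains must be encoded in this token sequence."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model(tokens)  # model sees the full context so far
        tokens.append(next_token)
    return tokens

# Dummy 'model' that just counts up, to make the loop observable.
count_up = lambda toks: toks[-1] + 1
trace = generate(count_up, [0], 3)  # trace == [0, 1, 2, 3]
```

Because the loop's entire state is visible in `tokens`, inspecting what gets passed between steps is exactly as easy as the post suggests.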

As an idiot who plays in the Playground, is GPT-4 coming there anytime soon?

@pugugly001 as soon as you get access to the API, it appears in the playground as well:


Mmmm, so it’s only available with the ChatGPT Plus option and with the chat mode. I was hoping to use it in the prompt/submit context, although to be honest, given how much I spent on davinci in December when it first came out, I’m not sure I can survive the new pricing on GPT-4 anyway. (Said the fox, looking at the grapes he couldn’t reach - <Snerk>)

‘Aw shucks’ and assorted other comments.

@pugugly001 You can get API access, but there is a wait for that. Join the waitlist linked in the OP above. @AgusPG has been approved for the API, so he has both API and Playground access.

You can submit evals to jump ahead in line, or just wait for it to roll out.


Ditto. Saw something from OpenAI in my email and was like… :eyes:… well, there go my plans for this evening. :rofl::rofl:

My word…


I’m sorry to be dense about this, but I’ve searched and even asked ChatGPT for clarification, without success. What does “100 messages per 4 hours” mean? What constitutes a message? A prompt from me…is that one message? Does it count the reply from ChatGPT? Is that one message, or is each separate paragraph a message? Can anyone answer this for me? I have no idea if 100 messages is a lot or a little. If anyone can point me to documentation on this, that would be cool, too. Thanks!

Really happy with the release of GPT-4. I’ve noticed the differences and am enjoying them. Looking forward to future progress.

Although it’s really disappointing how little information is actually being shared on its training.
Not a good look, or good trend to set.

Also, what’s going on with the Evals? I thought it was going to be a fun “open the flood gates” type scenario with lots of collaboration between developers. High energy, lots of fun. Lots of challenges.

Instead it’s a ghost town, “get your ticket” lottery. I mean, not even a simple CLI to test our evals against GPT-4? Why are we blindly focusing on evals that GPT-4 fails (<90%)? It’s pretty obvious that the majority of the entries will be multi-step arithmetic & letterplay. Is that what they wanted?

It’s shifted the focus from “creating useful evals to benchmark models” to “the best evals to break GPT”. Instead of testing how a car drives, we’re testing how it swims. Personally, I’d have found it much more fun to create an eval that gradually increases in difficulty, to benchmark which model suits our purposes best.

Unless this was the exact intention of OpenAI, in which case, again, transparency would have been nice.

I think the evals were a way for OpenAI to crowdsource improvements to the model. Other AI projects are doing this too (for example, OpenAssistant).

I’m probably in the minority here: I’m fine with the jsonl files they require for evals, but for some reason, I hate GitHub so much. If there were a way to just submit jsonl files like you submit a fine-tune, that would be better.
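For anyone who hasn’t looked at the format yet, the eval samples themselves are simple: one JSON object per line, with an `"input"` list of chat messages and an `"ideal"` expected answer. A minimal sketch of writing and reading such a file (field names as described in the evals repo; treat the details as assumptions and check the repo’s docs):

```python
import json

# One eval sample per line: chat-style "input" plus the "ideal" answer.
samples = [
    {"input": [{"role": "system", "content": "Answer with just the number."},
               {"role": "user", "content": "What is 17 + 25?"}],
     "ideal": "42"},
]

# Writing the jsonl file: one json.dumps() per line, no trailing commas.
with open("samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Reading it back: one parsed dict per non-empty line.
with open("samples.jsonl") as f:
    loaded = [json.loads(line) for line in f if line.strip()]
```

This is also why jsonl is pleasant to work with: you can append samples one line at a time without re-serializing the whole file.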

But the idea of crowdsourced feedback is OpenAI going in the right direction IMO.


I agree. I was looking at it as a way for us to benchmark the different models, similar to training tests. However, after reading your post and looking into it more, I can see it now. I don’t know how I feel, though, about us building their tools when they won’t even release any of their training data…

I also struggled with GitHub to get mine in and really don’t like the dependency on it as well. Hopefully that’ll change in the future.


I think sharing your intellectual property is a function of how much personal investment you have made and how much $$$ you could potentially earn by keeping it closed.

Right now, I see that OpenAI has a lot of bucks to make off these AI models, and they have invested a lot of $$$ in R&D (as have their investors). So it just doesn’t make sense for them; I can understand their position.

But there are other initiatives that are open from the get-go, and are totally community-driven and crowdsourced. With that model, you would expect things to be “free” and “transparent”, and they normally are.

But yeah, get rid of the GitHub dependency!


Very fair points.

I’m very much looking forward to the future with OpenAI. I am slightly disappointed with the lack of transparency, but you’re right, it’s understandable. I really do hope for more community-driven events here, though. It’s these kinds of efforts that have placed us, and GPT, where we are today.