GPT-4 is here! OpenAI's newest language model

Thank you for access to the API. Here is what we are building: ChatStart - Build a ChatGPT-powered business. Fast!

  1. Love that the API did not change between 3.5 and 4. We just had to swap out the model parameter to make our app work.
  2. Noticing the difference in certain use cases, like multilingual text and math reasoning.
  3. Cannot wait to access multimodal, as our use case is already mashing up DALL·E and ChatGPT in the same session and chatbot UX.
    You guys are awesome!! Keep shipping fast… :slight_smile:
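To illustrate point 1 above: a minimal sketch (the helper function name and values are hypothetical, not from the poster's app) of why an unchanged API meant the upgrade was a one-line change. Only the `model` string in the request payload differs between the two versions.

```python
# Sketch: the Chat Completions request payload is the same shape for both
# models, so migrating from GPT-3.5 to GPT-4 only changes the `model` field.

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON payload sent to the Chat Completions endpoint."""
    return {
        "model": model,  # "gpt-3.5-turbo" before, "gpt-4" after
        "messages": [{"role": "user", "content": user_message}],
    }

# Before and after differ only in the model string; messages are unchanged.
old = build_chat_request("gpt-3.5-turbo", "Hello!")
new = build_chat_request("gpt-4", "Hello!")
print(old["model"], "->", new["model"])
```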

I kindly advise everyone to read this GPT-4 research report by OpenAI, dated March 15, 2023 (hot off the press), some of which I have summarized in excerpts for those who don't like reading research papers (or don't have the time to).


Hello, the GPT-4 API registration keeps telling me, over and over again, that there is a problem with my application.

@logankilpatrick Yea, that’s nice. Would be even nicer if chat history came back, since I lost all my progress with GPT-4. It’s really hard to work on anything if the progress is just lost in one moment… :frowning:

I am really satisfied when it works, but for two days now nothing has worked: there is no history, I am forced to start over, and a few minutes later nothing is displayed and I have to start all over again… I have left several messages. How can I get a solution?

Hello, I recently purchased a subscription to ChatGPT Plus. I have been trying to use GPT-4, but often encounter a “Network Error: There was an error generating a response” message.
I paid for this service to have a more stable experience with ChatGPT. Since upgrading, GPT-3.5 has indeed become significantly faster, but the issue with GPT-4 persists.
Can you help me resolve this problem?

Where do I post my questions and doubts?

I don't understand English.


I’m currently building a fitness ecosystem but lack proper capabilities with GPT-3.5.
How can I speed up the waiting list for GPT-4? I’m hoping to launch the platform next week.
Obviously it’s a win-win, considering higher usage leads to more revenue for OpenAI.

I am a bit disappointed. I have been using this service since December last year, and I have a paid subscription, but for the last week or so I have noticed GPT-4 is performing very poorly. It is as if GPT-4 is performing the same as GPT-3.5. I think something went wrong when changes were made to lower the message cap. Is anyone else experiencing this?


Do you have any examples of this? From my own experience, GPT-4 and GPT-3.5 are equivalent on “lower end tasks”, but GPT-4 shines at more complicated tasks. GPT-4 also follows the “system” prompt better, whereas GPT-3.5 kinda treats it as a suggestion, though that is supposed to be improved in a future GPT-3.5 patch.
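For anyone unfamiliar with what "system prompt" means here, a small sketch (the function name and the prompt text are made-up examples, not from the post above): the system message is the first entry in the messages list and sets behavior the model is expected to follow, which GPT-4 reportedly adheres to more strictly than GPT-3.5-Turbo does.

```python
# Sketch: composing a messages list where a "system" instruction constrains
# the model's behavior before the user's actual question.

def with_system_prompt(system: str, user: str) -> list[dict]:
    """Compose a Chat Completions messages list led by a system instruction."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = with_system_prompt(
    "You are a terse assistant. Answer in one sentence.",
    "Explain what a system prompt does.",
)
print([m["role"] for m in messages])
```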


I removed my post since I think I have resolved it now (support never got back to me, though). It appears auto-renewal stopped even though there was a valid default card registered. I removed the default card, subscribed again, and added the card again. Hoping I’m not going to be double charged.

I read somewhere that it would be possible to give ChatGPT a picture of a web interface and ask for the code to recreate it. Is that possible?

Not yet. Image input is an upcoming feature, but there’s currently no timeline for its release.

OK, thank you for the answer. I’ll have to wait :).


When are you going to remove the cap of 25 messages every 3 hours? Or at least increase it… When we have the 32k GPT-4 model without that cap, then I’ll pay for ChatGPT Plus.


I am still waiting for GPT-4 API access. Is it available for me yet?


The release notes state “we are deprecating the Legacy (GPT-3.5) model on May 10th. Users will be able to continue their existing conversations with this model, but new messages will use the default model”. Does this mean that even free users are accessing GPT-4 when starting a new chat?

I understand your confusion. The legacy GPT-3.5 model has been deprecated; only the GPT-3.5-Turbo model will be available to free users :laughing:

I hope that helps clarify the situation.

Is there a good chance that OpenAI will be able to apply the process they used to make GPT-3.5 Turbo to GPT-4, reducing its costs? I assume it’s a model optimization process applied after all the training data is incorporated and they are satisfied with the performance of the current model.

My other question: is there a general plan to roll out the 32k GPT-4 API on a certain timeframe? I get that the whole GPT-4 API is probably limited by the ability to install new compute resources to serve everyone; I just wish there was more communication about it. They did get access to a huge amount of new resources, but access to new GPUs might be limited. I guess I just want to know what to expect in the future.

If I could have access to the GPT-4 API, I would be able to utilize the latest version of the GPT-4 language model and apply its powerful natural language processing capabilities in my applications. Could you help me get it through (org-62SU5I2LI2FRYTUhW2N5oCQN)? Thanks a lot!