I kindly advise everyone to read this GPT-4 research report by OpenAI, dated March 15, 2023 (hot off the press). I have summarized some of it in excerpts for those who don't like (or don't have the time for) reading research papers:
@logankilpatrick Yea, that's nice. It would be even nicer if chat history came back, since I lost all my progress with GPT-4. It's really hard to work on anything if the progress is just lost in one moment…
I am really satisfied when it works, but for two days now nothing has been working: there is no history, and I am forced to start over; a few minutes later nothing displays and I have to start all over again… I have left several messages. How can I get a solution…
Hello, I recently purchased a subscription to ChatGPT Plus. I have been trying to use GPT-4, but often encounter a “Network Error: There was an error generating a response” message.
I paid for this service to have a more stable experience with ChatGPT. Since upgrading, GPT-3.5's speed has indeed improved significantly, but the issue with GPT-4 persists.
Can you help me resolve this problem?
I'm currently building a fitness ecosystem, but GPT-3.5 lacks the capabilities I need.
How can I move up the waitlist for GPT-4? I'm hoping to launch the platform next week.
Obviously it's a win-win, since higher usage means more revenue for OpenAI.
I am a bit disappointed. I have been using this service since December last year, and I have a paid subscription, but for the last week or so I have noticed GPT-4 is performing very poorly. It is as if GPT-4 is performing the same as GPT-3.5. I think something went wrong when changes were made to lower the message cap. Is anyone else experiencing this?
Do you have any examples of this? From my own experience, GPT-4 and GPT-3.5 are equivalent on "lower end tasks", but GPT-4 shines at more complicated tasks. Also, GPT-4 follows the "system" prompt better, whereas GPT-3.5 kinda uses it as a suggestion, though that is supposed to be improved in a future GPT-3.5 patch.
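For anyone unfamiliar with what the "system" prompt is here: in the Chat Completions API, it's a message with the role "system" that sets the model's behavior before any user messages. A minimal sketch of how the payload is assembled (the model name and prompt text below are just illustrative; actually sending it would use the `openai` package and an API key):

```python
def build_request(model: str, system_prompt: str, user_prompt: str) -> dict:
    """Assemble a Chat Completions payload. The "system" message sets the
    behavior that GPT-4 tends to follow more strictly than GPT-3.5."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_request(
    "gpt-4",
    "You are a terse assistant. Answer in one sentence.",
    "Explain what a system prompt does.",
)
# Sending it would look like: openai.ChatCompletion.create(**request)
print(request["messages"][0]["role"])  # → system
```

The point of the ordering is that the system message comes first, so it frames everything the model sees afterward; with GPT-3.5 you often have to repeat those instructions inside the user message as well.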
I removed my post since I think I have resolved it now (support never came back to me, though). It appears auto-renewal stopped even though there was a valid default card registered. I removed the default card, subscribed again, and added the card back. Hoping I'm not going to be double charged.
The release notes state “we are deprecating the Legacy (GPT-3.5) model on May 10th. Users will be able to continue their existing conversations with this model, but new messages will use the default model”. Does this mean that even free users are accessing GPT-4 when starting a new chat?
Is there a good chance that OpenAI will be able to apply the process they used to make GPT-3.5 Turbo to GPT-4, reducing its costs? I assume it's a model optimization process applied after all training data is incorporated and they are satisfied with the performance of the current model.
My other question: is there a general plan to roll out the 32k GPT-4 API on a certain timeframe? I get that all GPT-4 API access is probably limited by the ability to install new compute resources to serve everyone; I just wish there was more communication about it. They did get access to a huge amount of new resources, but access to new GPUs might be limited. I guess I just want to know what to expect in the future.
If I could have access to the GPT-4 API, I would be able to use the latest version of the GPT-4 language model and apply its powerful natural language processing capabilities in my applications. Could you help me get access (org-62SU5I2LI2FRYTUhW2N5oCQN)? Thanks a lot!