Blown Away by ChatGPT’s Speed with GPT-4! 🚀

Hello OpenAI Community! :wave:

I just wanted to take a moment to share my sheer fascination with the speed of ChatGPT built on the GPT-4 architecture. It’s downright impressive! :star2:

As a follower of OpenAI’s developments, I must say that the advancements in speed and optimization are simply monumental. GPT-4’s snappy responses and the fluidity with which it carries out conversations are making my jaw drop! :astonished:

I feel it’s important to acknowledge the dedication and hard work of all the researchers, engineers, and everyone involved in the development of this remarkable model. The progress you’ve made, not just in intelligence but also in performance, is commendable. :tada:

I genuinely think that this kind of speed optimization is vital for the future applications of AI, and OpenAI is trailblazing a path that will be instrumental in shaping AI technologies. :hammer_and_wrench:

Just wanted to share my enthusiasm with all of you. Keep up the outstanding work! :raised_hands:

Catch you later, friends! :v:

Is this a new feature of the API? From my experience, the GPT-4 API is very slow. Or are you talking about the ChatGPT client on the web?

I assume you are talking about the web app or the iOS app version of ChatGPT using the GPT-4 model. In my experience, the GPT-4 API is very slow: to get a 1,000-token response, I typically need to wait 40 seconds. In comparison, gpt-3.5-turbo takes about 10 seconds.
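
For anyone who wants to reproduce this kind of measurement, here’s a rough sketch that times a chat completion against both models. I’m assuming the official `openai` Python package (v1 client) and an `OPENAI_API_KEY` in the environment; the prompt and `max_tokens` value are placeholders I picked for illustration:

```python
import time

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def time_completion(model: str, prompt: str, max_tokens: int = 1000) -> float:
    """Request one completion and return the wall-clock latency in seconds."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    elapsed = time.perf_counter() - start
    print(f"{model}: {response.usage.completion_tokens} tokens in {elapsed:.1f}s")
    return elapsed

# Same prompt against both models makes the latency gap easy to see.
for model in ("gpt-4", "gpt-3.5-turbo"):
    time_completion(model, "Write a detailed ~1000-token essay on optimization.")
```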

This makes individual experimentation very challenging: in an hour I can only expect about 50 rounds of interaction (roughly 72 seconds per round, i.e. the 40-second latency plus my own reading and typing time), and the waiting simply breaks the workflow.

I do recommend using OpenAI’s Playground, as I found its response speed is noticeably better. However, presets don’t function properly there, which makes housekeeping, testing, and quality control very challenging.
https://platform.openai.com/playground/

Yes, I agree: ChatGPT with the GPT-4 model has become noticeably faster since the May updates. At the same time, one can assume the workload has also increased significantly (more and more users, the app launch). It would be interesting to know what compromises were made to achieve this result. Quality?

Hi there! I’m talking about the client side of GPT-4 in ChatGPT. I find the API still tends to be rather slow; I haven’t noticed a significant improvement there. Additionally, the costs remain rather high… which unfortunately prevents widespread use in applications.

Hey! I don’t believe the quality has decreased as the speed increased; rather, I think adjustments were made that reduced the load required to run the model. However, this is just my perception. I don’t think I’m far off in predicting that GPT-4 will open up to non-subscribers in the weeks to come, as the model seems to have reached a level of stability, and it’s likely we’ll see a GPT-4 Turbo version soon. What I’m looking forward to most is a reduction in costs! (The cost of using the API is too high for me to deploy it in my apps, and besides, GPT-3.5 Turbo does the job just fine.)
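
To put numbers on the cost gap: using the list prices from mid-2023 as I remember them (gpt-4 8K context at $0.03 per 1K prompt tokens and $0.06 per 1K completion tokens; gpt-3.5-turbo at $0.0015 and $0.002), a quick back-of-the-envelope script looks like this. Treat the constants as illustrative and check the pricing page for current values:

```python
# Rough API cost comparison. Prices are USD per 1K tokens as listed in
# mid-2023 -- they change often, so these constants are illustrative only.
PRICES = {
    "gpt-4":         {"prompt": 0.03,   "completion": 0.06},   # 8K context
    "gpt-3.5-turbo": {"prompt": 0.0015, "completion": 0.002},
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD of a single request with the given token counts."""
    price = PRICES[model]
    return (prompt_tokens * price["prompt"]
            + completion_tokens * price["completion"]) / 1000

# Example: a 500-token prompt that yields a 1000-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 500, 1000):.4f} per request")
# gpt-4:         $0.0750
# gpt-3.5-turbo: $0.0028 -- roughly 27x cheaper
```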

Absolutely, I’m referring to the web app, and I’ve noticed that response times are getting quicker; it’s nothing like what we experienced at the launch of GPT-4. I agree with you about the APIs. To be honest, the significantly higher cost of GPT-4 has been a barrier to integrating it into my applications. I’m optimistic that speed and costs will be adjusted in the coming weeks. Given the excellence I see in ChatGPT, I’m certain it’s just a matter of time. Wishing you well!

There has undoubtedly been a severe decrease in the memory and quality of responses from GPT-4. If you try to get it to edit, review, or write even low-level scripts, you will see what I’m talking about. It used to be intuitive and logical; now the exact same system and prompts are producing incredibly subpar responses.
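
One way to check a regression like this objectively is to pin the dated snapshots the API exposes (gpt-4-0314 from launch, gpt-4-0613 from the June update) and run the exact same prompt against both, instead of the moving `gpt-4` alias. A minimal sketch, again assuming the v1 `openai` Python client; the review prompt is just an example:

```python
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Review this Bash script and point out any bugs:\n"
    "for f in *.log; do grep ERROR $f > $f.err; done"
)

# Dated snapshots let you compare behavior before and after a model update.
for snapshot in ("gpt-4-0314", "gpt-4-0613"):
    reply = client.chat.completions.create(
        model=snapshot,
        temperature=0,  # reduce (but not eliminate) sampling noise
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {snapshot} ---\n{reply.choices[0].message.content}\n")
```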

You are awesome for pointing this out! I bet they really appreciate that!

I haven’t particularly noticed such a thing in my daily usage. My usage is mainly within the context of scientific production (I’m involved in university research), where I use it as a writing assistant. Because of this, I’m pretty demanding and meticulous with my requirements, but I don’t see any significant drop-off. My other use is for programming, and honestly, the acceleration of the model more than makes up for the perceived quality decreases that some might notice.

Aha, thank you! We spend so much time complaining when things don’t go well; I believe it’s equally important to acknowledge when things are going well! Wishing you a fantastic day.

Yes, unfortunately… but I think the cost is quite low; the only real problem is latency, and that’s why many developers are switching to other platforms…
