The new message limit with the GPT-4o model

Regarding the GPT-4o model: Recently, several users and I have noticed a new limit in the free version of ChatGPT, approximately 10 messages every 3 hours. I understand that certain restrictions are necessary in the free version, but this limit is simply unacceptable. I believe I am not the only one who thinks this way, so I kindly ask you to reconsider this decision and adjust the limit to improve the user experience.

About GPT-4o:
Recently, many users, including myself, have noticed a new restriction in the free version of ChatGPT when using the GPT-4o model: around 10 messages every 3 hours. I understand the need for certain limitations to ensure stability, especially during high demand. Initially, this restriction seemed tied to heavy use of the image generation feature, and it was later lifted, which suggested the issue had been resolved.

However, the limit has now returned, even though image generation no longer appears to be the main factor. This sudden change affects the overall experience and usefulness of the model. I believe many users share this concern, so I kindly ask you to reconsider the message cap and find a more balanced solution that supports accessibility and user satisfaction.

I’m experiencing the same issue again. It’s quite frustrating, especially since this seemed to be resolved just last week. I really think OpenAI should consider setting different usage limits between users who are just having text-based conversations with GPT-4o and those who are using more resource-intensive features like image generation.

The current cap feels a bit too strict for simple chat interactions. It would be great if they could find a more flexible or tiered system. The sudden return of this restriction definitely affects the overall user experience, which is such a shame, especially after things had just improved.

I totally agree with you. I've also noticed that the current limit feels pretty strict, even when you're just using it for chat. Hopefully, OpenAI considers a more flexible or tiered system, because it's frustrating when things seem to be getting better and then suddenly revert to the same issues.