The GPT-4 model was limited to 100 messages per 4 hours. Now it seems to be limited to 50 messages in the same window. I think the limit should be made transparent before you buy ChatGPT Plus. I've heard of many people who bought ChatGPT Plus just for GPT-4. The only official statement I could find is that there is a "dynamic limit". Maybe the current limit should be shown somewhere in the interface?
I also think the limit should be measured in (tokens / time) instead of (messages / time). When a user submits one long text and asks many small questions about it, that certainly takes less compute than when a user sends many huge texts to summarize, or generates long content with the model.
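To illustrate what a tokens-per-window limit could look like, here is a minimal sketch of a token-budget limiter. The budget and window values are made-up illustration numbers, not OpenAI's actual limits, and the class is purely hypothetical:

```python
import time

class TokenBudgetLimiter:
    """Sketch: users spend from a budget of model tokens per rolling
    window, instead of being capped at a fixed message count.
    Budget/window values are illustrative, not real OpenAI limits."""

    def __init__(self, budget_tokens=100_000, window_seconds=4 * 3600):
        self.budget = budget_tokens
        self.window = window_seconds
        self.usage = []  # list of (timestamp, tokens_spent)

    def _spent(self, now):
        # Drop entries older than the window, then sum what remains.
        self.usage = [(t, n) for (t, n) in self.usage
                      if now - t < self.window]
        return sum(n for _, n in self.usage)

    def try_spend(self, tokens, now=None):
        """Return True and record the cost if the request fits the
        remaining budget; return False to reject it."""
        now = time.time() if now is None else now
        if self._spent(now) + tokens > self.budget:
            return False
        self.usage.append((now, tokens))
        return True
```

Under this scheme, a short follow-up question costs only a few tokens, while a long summarization request costs many, so heavy users hit the limit sooner than light ones — which is exactly the fairness argument above.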
Are there any estimates of how fast the servers will be scaled up, or how long it will take to make the model more efficient?