ChatGPT-4 Limits? Are they the same as for ChatGPT-3.5?

Hello, quick question here: does ChatGPT-4 have the 8,000+ token capability of GPT-4, or does it have the 4,000+ token limitation of the formerly beloved ChatGPT-3.5?

Also, I must admit I feel really constrained by its output limits. I am not sure how they are calculated; I have the impression they are not correlated with the length of my messages.

I have to manage ABRUPT ENDs of the output all the time, which cut off anywhere between a little above 900 and slightly more than 1,000 tokens… I am unsure how to explain this limitation to the AI; sometimes it understands, sometimes it doesn’t care much.

If anyone has the perfect prompt to get the full completion (on either 3.5 or 4), I would be happy to know…

My attempts give wildly inconsistent results. I can’t even say how often it gives me trouble, but I would guess my success rate is around 60%, and reusing prompts that have been successful doesn’t seem to give me any benefit…

I am using the chatbot mainly for its ability to generate good code (I have been struggling with the Playground models other than ChatGPT)… Paying $10 more per month was a no-brainer for me, since I have never been able to use GitHub Copilot to its fullest.

Sometimes it is just obvious that the code produced in the completion is completely different from what was previously intended, and this happens regardless of the code’s complexity…

I hope people will share their experience in this thread, along with their tips and tricks…

At least for me, ChatGPT has been (especially in the beginning) the best assistant for my projects…

The recent degradation is overwhelming. I started having trouble with the model’s tendency to use the ambient context to infer my needs, and it is very difficult to find ways not to end up wasting my time (hours)…

Even when I repeat everything from previous messages, or ask it to infer from them, the AI uses the parts that are less valuable but disregards the parts that are more important… It is an infinite loop: whenever I remind it of one element, it forgets another…

Most of the time it is easier to spawn a new session and re-explain everything than it is to re-explain everything in the current session…

But at the end of the day it is still better than any other model found in the Playground…

1 Like

I’ve found this too in my experience. What I think is happening is that the “memory” of the conversation starts filling up with words you don’t want, and that feeds the LLM to give you even more of the same. Sometimes it’s best to just start a fresh chat…

2 Likes

My understanding is that ChatGPT-4 is the 8k model as of right now; the 32k model is an API-only future offering.

1 Like

I recently did an exercise that is quite time-consuming but may save time in the long run.

I have rulers in my code editor at columns 70 and 80. I have started to write my prompts never exceeding 70 characters per sentence. At first glance it may seem trivial, but my goal is obviously not just to split the same text across multiple lines; the goal is to use up the full length without going over…

If you have ever used Twitter, maybe you have a feel for what I am saying… It forces you to think twice about the useless words that the AI surely doesn’t need in order to infer properly anyway…
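The wrapping itself can be automated. Here is a minimal sketch in TypeScript (the `wrapAt` helper is hypothetical, not from any library): it greedily packs whole words onto each line up to the column limit, which is the “use up the full length without going over” idea.

```typescript
// Hypothetical helper: greedily wrap a prompt at a column limit,
// never splitting a word. Sketch only, not from the original post.
function wrapAt(text: string, limit = 70): string[] {
  const lines: string[] = [];
  let current = "";
  for (const word of text.split(/\s+/).filter(Boolean)) {
    if (current === "") {
      current = word; // first word on the line
    } else if (current.length + 1 + word.length <= limit) {
      current += " " + word; // word still fits with a space
    } else {
      lines.push(current); // line is full; start a new one
      current = word;
    }
  }
  if (current !== "") lines.push(current);
  return lines;
}

const prompt =
  "Refactor the parser so that every token carries its source " +
  "position and errors report line and column numbers.";
console.log(wrapAt(prompt, 70).every((l) => l.length <= 70)); // true
```

This is only a convenience; the real exercise the post describes is editing your own wording down until each sentence fits the budget.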

Taking 4 minutes to write as many 70-character lines as you need to explain a complex problem, waiting 1 minute and 12 seconds for the AI to reply, and taking another 2 minutes to implement the reply (I am using it as a coding assistant) makes it virtually limitless (the current cap is 25 requests per 180 minutes, and I did the math so that my example fits it exactly)…
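For what it’s worth, the arithmetic in that example does check out. A quick sanity check, using only the numbers from the post (working in seconds keeps everything in integers):

```typescript
// Timings claimed in the post, converted to seconds.
const promptSeconds = 4 * 60;    // 4 minutes writing the prompt
const replySeconds = 72;         // 1 minute 12 seconds waiting
const implementSeconds = 2 * 60; // 2 minutes applying the reply
const cycleSeconds = promptSeconds + replySeconds + implementSeconds; // 432 s

const requestCap = 25;           // requests allowed per window
const windowSeconds = 180 * 60;  // the 180-minute rate-limit window

// 25 cycles of 7.2 minutes fill the 180-minute window exactly,
// so at this pace you never actually hit the cap.
console.log(cycleSeconds * requestCap === windowSeconds); // true
```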

But then I keep a cGPT-3.5 session on the side to troubleshoot the code if it doesn’t work perfectly right away, to get feedback quickly… When doing so, it’s important not to forget to explain to the cGPT-4 session what was updated in the meantime; forgetting to do so leads to unexpected behaviours, and I must admit I have forgotten more than once, haha…

Well, I hope people will share their experiences and suggestions; I believe it is something that will steer everyone’s experience in the right direction…

1 Like

I thought the same thing. When the conversation grows, at some point I start a new one.

2 Likes

@Luxcium, I have the same frustrations with the paid $20/month ChatGPT-4 account: limited number of responses, etc. I do not know the algorithm it uses to count and then “time me out”. There is no indication in the browser window of the “remaining time”.
It is particularly maddening when using ChatGPT-4 for coding. I am working in Swift on an iOS app. GPT sends me code and assures me it’s error-free. Then it takes several back-and-forths to get the errors cleaned up. And it apologizes for making mistakes, yet THOSE TOKENS STILL COUNT AGAINST MY LIMIT! I would prefer that its mistakes not count. Good luck!

1 Like

Wouldn’t it be nice if ChatGPT could be connected to Xcode and the Swift compiler, along with simulators for many devices? Then, when it has code to recommend, it could load that code into a Swift project in Xcode and see if it compiles.
Since I have uploaded my entire code base (it’s a small app so far), ChatGPT knows my code. It could substitute its new code into mine (as it directs me to do) and then use Xcode to check it. This would really reduce the back-and-forth use of tokens…

1 Like

I guess this will come with the plugin API soon enough… I am on Linux, using VS Code and TypeScript. I decided not to pay for ChatGPT Plus this month, and the ChatGPT text-davinci-002-render-sha model is making me regret my choice, as it is so lazy (“I apologize if I have given you the impression that I am lazy. As an AI language model, I do not have emotions or feelings, nor do I have the capability to be lazy or active. My sole purpose is to assist and provide information to the best of my ability.”), but I have explained it all in the thread I created to describe my frustrations:

Honestly, I am unable to tell whether ChatGPT-4 matches GPT-4’s 8K token context or whether it is more like ChatGPT-3.5 with 4K tokens, as I have been getting abrupt interruptions pretty quickly, with less than 1,000 tokens of output.

I have noticed that it does have the same limitations in response size and endings. I have also encountered a lot of trouble trying to get it to continue a response in the same format with the same rules. The best way I found to work around that is by prompting it: “Continue from this point: (paste the ending of the last response) and maintain the same formatting as before.”
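That continuation prompt can be generated mechanically from the truncated reply. A minimal sketch, assuming a hypothetical `continuationPrompt` helper (the function name and the 120-character tail length are illustrative, not from the post):

```typescript
// Hypothetical helper: rebuild the suggested "continue" prompt from
// the tail of a truncated reply, so the model has an exact anchor.
function continuationPrompt(lastReply: string, tailChars = 120): string {
  const tail = lastReply.slice(-tailChars);
  return `Continue from this point: "${tail}" and maintain the same formatting as before.`;
}

const truncated = "…and finally, the handler returns the parsed";
console.log(continuationPrompt(truncated, 30));
```

Pasting the literal tail (rather than paraphrasing it) seems to matter: it gives the model an exact string to resume from.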

1 Like

Since May, GPT-4 seems to have been downgraded to the same limits as GPT-3.5.

Downgraded to the 4k model (it said so itself, and it is clearly visible).

EDIT: let me rephrase. I feel like the token limit has decreased. I was able to have dense conversations of 20-30 messages without losing track of the first instructions; now the limit seems to be 10-15 messages, after which some structuring instructions or details are forgotten.

Big claims need big evidence.

I’ve seen no evidence the context window has been shortened.