Does GPT 3.5 also have a cap on the number of prompts?

After working with it for a day, I have the feeling that there is also an unannounced cap on prompts directed at the GPT 3.5 model. I get just bullshit answers after about five tasks that finished just fine. After that, the thing simply stops reacting to the prompts and only repeats the result from the previous tasks. For instance, I ask the model to create a multiple-choice question and an explanation, and it provides me with questions, including answers and explanations, for any topic without me having to spell it out, and the result is comprehensive and good. But after doing this about five times, the results become stupid. Answers get less detailed, and instead of giving an explanation of the question as before (which was also not in the prompt, but was just fine), it starts giving an explanation for each answer. I tell it not to do that and to give an explanation of the question instead, but it just repeats the same dumb result with explanations on the answers. This goes on and on until, miraculously, after 20 minutes or so, it just starts to be fine again by itself. Usually I close the chat and start a new one to “refresh”, but that doesn’t help at all. Only after some time does it become “normal” again, with the expected good results.

Sorry guys, I feel kind of mistreated by the service and the company. Why not mention that there is a cap? Why waste my time?

Sorry if this topic was discussed somewhere before and I’m cross-posting, but I am currently so annoyed by this behavior because it wasted my whole Sunday, right after GPT 4.0 turned out to be crap too.

:thinking:

It’s a fair sentiment, but I think that if you ran into a cap, it would tell you instead of spouting nonsense.

It would probably be good if someone came up with an introduction to prompting; I think a lot of people could benefit from that. Here’s what stands out to me:

The models don’t really respond well to negative prompts - instead of telling them what not to do, it’s generally a much better idea to specify exactly what you want them to do.
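If you’re on the API, here’s roughly what I mean, as a sketch only; the `openai` Python client, the model name, and the quiz task below are just illustrative, not anything specific to your setup:

```python
# Rough sketch: state what you want, rather than what you don't want.
# Assumes the official `openai` Python package (v1 client) and an API key
# in the environment; the quiz task and wording are made up for illustration.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You write multiple-choice questions. For each request, output the "
            "question, the answer options, and ONE explanation of the question "
            "as a whole. Keep this format for every request in this conversation."
        ),
    },
    {"role": "user", "content": "Topic: photosynthesis. One question, please."},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response.choices[0].message.content)
```

The point is that the instruction describes the desired output directly instead of saying “don’t explain each answer”.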

Clearing the chat and removing confusing messages is also generally a very good idea.

Overall, there is a ton of nuance to using these models efficiently. I’d urge you to reflect on which methods work and which don’t; if you have the patience and perseverance, your effectiveness with these tools will certainly improve!

Also, just checking: are you sure you’re talking about the API (platform.openai.com), or rather chat.openai.com?


When you notice it happening, you might try starting a new thread. Once threads get longer, they can become less useful because of all the information upstream.


Thanks for the fast response. Yes, I do start a new thread then, but the behavior often remains the same…

Thanks for the fast response. I usually define in the first message of a thread how the answers should be presented: in a table or not, how many columns, how many sentences, and so on. See the example below this text.
It responds to that for some time, but then, after 5–6 tasks, it starts producing nonsense. So I try to get a grip on it, not really knowing how. First, I tell it to stick to the pattern I want the answers to follow, which I define as “Scheme 1”. So I prompt something like: “Use Scheme 1 as the presentation format for the question.” But it does the same thing as before; it seems not to pick up the prompt. So I tell it to stop, or I tell it to start from scratch, or I abort the prompt while it is processing.
Usually after the second try, I give up and delete the chat, or wait for about 10 minutes, and after that it works well again. I have to investigate a bit more to see whether it’s just time that has to pass or whether a new chat is needed, because I did a lot of prompts today and didn’t document it.

(Scheme 1 is:

  1. First column: question
  2. Second column: answer 1
  3. Third column: answer 2, and so on…)
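To make it a bit more concrete, this is roughly the kind of first message I mean. The exact wording below is just an illustration (I’m typing it into the chat interface, so the Python wrapping is only a convenient way to show the text):

```python
# Simplified sketch of my "format definition" first message.
# The column names and the extra explanation line are illustrative only;
# the topic placeholder is filled in per task.
SCHEME_1 = (
    "Scheme 1: present every result as a table with these columns:\n"
    "1. Question\n"
    "2. Answer 1\n"
    "3. Answer 2 (and further answer columns as needed)\n"
    "Below the table, add one short explanation of the question."
)

first_prompt = SCHEME_1 + "\n\nCreate one multiple-choice question about <topic> using Scheme 1."
print(first_prompt)
```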

Thanks anyway.

I definitely recommend different conversations for different tasks! It’s very easy to confuse the model if there’s too much information hanging around.
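If you ever drive this through the API instead of the chat UI, the pattern would look roughly like this; a sketch only, where the `openai` Python client, the model name, and the topics are just placeholders:

```python
# Rough sketch: one fresh conversation per task, so earlier outputs
# can't bleed into later ones. Assumes the `openai` package (v1 client).
from openai import OpenAI

client = OpenAI()

FORMAT_SPEC = "Present each result as a table: Question | Answer 1 | Answer 2 | Answer 3."
topics = ["photosynthesis", "Ohm's law", "the French Revolution"]  # made-up examples

for topic in topics:
    # The message list is rebuilt from scratch for every topic,
    # and the format spec is restated each time.
    messages = [
        {"role": "system", "content": FORMAT_SPEC},
        {"role": "user", "content": f"Write one multiple-choice question about {topic}."},
    ]
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    print(reply.choices[0].message.content)
```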


That is pretty common behavior with the newest version of the model, and has been for a while, since the modifications made to the original function-calling AI after August.

It gets “hung up” on old context and can no longer improve upon or diverge from prior outputs.

ChatGPT has also gone in a completely different direction with chat history management. Instead of the bare minimum of conversation memory that used to be sent to maintain the illusion of staying on topic (which drew complaints), the conversation that ChatGPT now passes along is quite long, to the point where the model can be degraded by the long context of past chat accompanying each new question.

This is not about “caps” (which some people still hit, with a message, if they really chat it up with the AI), but about AI quality and the management backend itself.
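If you use the API instead of ChatGPT, you can manage that context yourself. A crude sketch of trimming old turns (the `openai` Python client, the model, and the turn limit here are assumptions on my side, not anything the backend does for you):

```python
# Crude sketch of history trimming: keep the system message plus at most the
# last 2 * MAX_TURNS non-system messages, so a long chat doesn't drag old
# outputs back into every new answer. Assumes the `openai` package (v1 client).
from openai import OpenAI

client = OpenAI()
MAX_TURNS = 3  # arbitrary; tune for your use case

history = [{"role": "system", "content": "You write multiple-choice questions as tables."}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    trimmed = [history[0]] + history[1:][-(2 * MAX_TURNS):]
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=trimmed)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```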
