Struggling with ChatGPT-3.5 and Seeking Help

Experiencing Recurring Issues with ChatGPT-3.5

I am experiencing a recurring issue with ChatGPT-3.5, and I am struggling to find a constructive solution. It has become exhausting, and I am losing patience with the AI. I would appreciate any help or advice the community can offer.

Feeling Alone in the Struggle

I feel alone in my struggle, as I rarely see any negative feedback about ChatGPT-3.5. Most users appear to have trivial problems, or problems related to billing or downtime, but no one seems to be facing the same challenges I am. I am unsure if I am doing something wrong or if others can relate to my situation.

Challenge One: Crafting Lengthy Messages

One of the main challenges I face when using ChatGPT-4 is its message limit. Because of this constraint, I often find myself cramming all my thoughts into a single message. At times, I feel it might be more efficient to use ChatGPT-3.5 and send multiple shorter messages instead, as I end up spending an excessive amount of time crafting a single message.

Challenge Two: Problem-Solving Loop

The issue becomes more frustrating when I ask the AI for help solving a problem. It tends to generate a new script, which often introduces a new issue while solving the previous one. This creates a never-ending loop that is difficult to escape. Despite my efforts to ask the AI to summarize my question or the entire session, I cannot seem to break this cycle.

AI’s Inconsistency in Context Retention

What baffles me is that ChatGPT-3.5 can recall detailed information from earlier in the conversation when asked to summarize, but it seems to forget anything beyond the last two messages during the problem-solving process.

Seeking Guidance and OpenAI’s Attention

I would greatly appreciate any guidance, especially from someone who has experienced similar issues. My goal is to get OpenAI’s attention and find a way to communicate my concerns more clearly.

Collaborating for a Better ChatGPT Experience

In conclusion, I am seeking assistance from the community in understanding and resolving the issues I am experiencing with ChatGPT-3.5. I hope that by sharing my struggles, I can connect with others who have faced similar challenges and together, we can find a solution to enhance our ChatGPT experience.


I am crafting lengthy ChatGPT-4 prompts because it is limited to 25 messages per 3-hour period… and so far I have never exceeded the limit (more or less)

I also craft the message with a ton of information because cGPT4 is capable of handling more complex messages…


As OpenAI releases new models, it opens the previous models to the public, and the daily news about ChatGPT, even when negative, catches everyone’s attention.
It floods this forum with what you call “trivial problems” - but that doesn’t mean problems like yours don’t abound in this community.
I have a different point of view on the individual struggle of all of us.
GPT-4 is for business, and with the exponential addition of users to the other models, it has diverted all OpenAI Support to where the money comes from.
They were unprepared to meet the current public demand - and the previous models are becoming slower.
I think OpenAI is well aware of this, so I also believe that getting OpenAI’s attention would be unnecessary. And I don’t know if it would be helpful.
If we share the same “non-trivial” problems, we would be a minority without priority in the face of, for example, the “non-acceptance of credit cards from India.”
If we are looking for technical solutions, I believe this Community Forum is, and will be for a long time, the best place to help each other. However, we have to “dig” until we find our solutions.
I don’t consider myself experienced enough to help you. And since I don’t have high expectations about GPT-4, I don’t understand what limitations force someone to make “lengthy messages” for this model - the opposite strategy from the previous models.
I would like to post, here in this topic, some guidelines (strategies for the most popular models), if you allow me.
If you consider this message to be diverting from the purpose of this topic, please let me know, and I will delete this message.

Some strategies can help you (us) with your (our) challenges:
1. Break down the thoughts: Instead of fitting all ideas into a single message, break them into smaller, more concise messaging. It can help to convey ideas more effectively and avoid hitting the model’s message length limits. It allows for a more interactive and dynamic conversation with the model, facilitating efficient communication. The strategies recommended for GPT-3.5 are also applicable to GPT-4;

2. Use bullet points or numbered lists: When conveying multiple points, consider using bullet points or numbered lists to structure the messages. It helps the model understand and respond to each point separately, making the conversation more organized and coherent;

3. Be clear and concise: When crafting messages, prioritize clarity and conciseness. Avoid unnecessary details or lengthy explanations. Stick to the main points to convey, and use simple and direct language;

4. Clear instructions: Provide explicit and specific instructions to the model. Specify the desired format or type of response, and ask the model to think step-by-step or explain its reasoning. It guides the model toward generating a more focused and relevant response;

5. Reframe the question: If stuck in a loop where the model keeps generating new scripts without solving the problem, try reframing the question. Be specific and concise, and clearly state what the model should do. For instance, ask the model for a summary of the previous responses, or focus on a specific aspect of the problem. Mind that the models tend to patronize the user (almost Cartesian). They provide many plain premises around a requested solution, and big chunks of code where only one or two lines are helpful - it is not necessary to copy the entire code, but rather to select the lines that could apply to the original user’s code;

6. Use the System message strategically: The System message at the beginning of the conversation can set the context and guide the model’s behavior. Include instructions or reminders in the System message to avoid generating new scripts and to focus on summarizing or addressing the current problem. The model will retain this information for longer;

7. Retain and reiterate important information in the User message: To help the model recall information, include it in the User message rather than the Assistant message. The model gives more importance to the User message and is more likely to retain it in the context. Consider repeating it in subsequent User messages - this helps reinforce the information and improve the model’s ability to recall it later in the conversation;

8. Take control of the conversation: The user can guide and direct the conversation. If the model keeps generating new scripts without resolving the issue, interrupt the loop by asking it to stop generating new ideas and instead provide a summary or a specific solution to the problem. Experimenting with different techniques, making clear instructions, and taking control of the conversation helps to break out of a problem-solving loop and get more focused and relevant responses from the model;

9. Review and summarize: If the conversation is getting complex or the model loses track of the context - review and summarize the previous interactions. It helps to get a clear view of the current status of the problem, proposed solutions, and any issues or inconsistencies, refresh the model’s memory, and ensure the necessary context to provide accurate responses;

10. Maintain focused conversations: If the model forgets information beyond the last two turns, keep the conversation focused and avoid unnecessary back-and-forth. Limit the number of messages. Avoid repeating information in multiple messages, as this can help the model better retain the context;

11. Explicitly reference previous messages: When the model needs to refer to past messages, use explicit references in the instructions. For instance, “As we discussed earlier…” or “Referring to the previous message…”. It helps to trigger the model’s memory and improve its ability to retain and recall relevant information. The user is responsible for keeping track of the conversation - quoting the model’s last response is a helpful way to refer to it. For instance, "You said: ‘… quote …’ ", followed by requests related to that quote;

12. Try context window settings: Some platforms or applications may have limitations on the context window size - which can affect the model’s ability to retain information. Try to adjust the context window settings to see if it improves the model’s ability to recall the earlier context. It can be done by truncating or extending the conversation history within the platform’s allowed limits to optimize the provided context;

13. Context management: It plays a crucial role in influencing the behavior of language models, including GPT-3.5. All language models have limitations in context retention, so experimentation and adjustment are necessary to optimize their performance. Strategic management of context, repeating important information, and keeping conversations focused can improve the consistency of the model’s context retention and enhance its ability to recall earlier information during the problem-solving process;

14. Explore prompt engineering: It is possible to steer a completion engine using only natural language and no code, through a process called “prompt engineering.” To adapt a completion engine to generate text in a specific domain, the user provides a series of prompts relevant to the subject. The model then generates text similar to the provided prompts, but with unique variations. Use techniques such as tweaking the prompt, adding context, or changing the wording to guide the model toward the desired behavior. For extensive datasets: prepare a dataset of texts or case studies relevant to the subject matter - one example per line, in plain text format. Try different approaches to see what works best for a specific use case. Each conversation with the language model can be unique, so adjust the prompt and the parameters as needed, based on the model’s responses.
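Strategy 6 can be sketched in code. A minimal example, assuming the 2023-era OpenAI Python SDK’s chat-completions interface; the instruction wording and the helper name are my own illustrations, not a required pattern:

```python
# Sketch of strategy 6: pin behavior in the System message so the model
# carries it through the whole session. The instruction text below is
# only an illustration.

def build_conversation(system_instructions: str, user_question: str) -> list:
    """Assemble a messages list whose first entry is the System message."""
    return [
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_question},
    ]

messages = build_conversation(
    system_instructions=(
        "Do not generate a whole new script on every turn. Modify only the "
        "lines I point to, and summarize the current state of the problem "
        "before proposing a fix."
    ),
    user_question="My parser still fails on empty input; fix only the guard clause.",
)

# The list is then passed to the API, e.g.:
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
```

The same list structure works for GPT-3.5 and GPT-4; only the `model` argument changes.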
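Strategy 11’s quoting technique can be automated with a small helper (a sketch; the function name is hypothetical):

```python
def quote_last_reply(assistant_reply: str, follow_up: str) -> str:
    """Build a User message that quotes the model's previous answer verbatim
    before asking the follow-up, as in strategy 11's 'You said: ...' pattern."""
    return 'You said: "{}"\n\n{}'.format(assistant_reply, follow_up)

prompt = quote_last_reply(
    "Use a dictionary to index the records.",
    "Referring to the previous message, why is a dictionary better than a list here?",
)
```

Quoting verbatim removes any ambiguity about which earlier statement the follow-up refers to.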
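Strategy 12 - managing the context window by truncating old history - can be sketched as follows. Character counts stand in for tokens here; a real implementation would measure with a tokenizer such as tiktoken:

```python
def truncate_history(messages, max_chars, keep_system=True):
    """Keep the System message (if any) plus the most recent messages whose
    combined content length fits within max_chars, dropping the oldest turns."""
    system = [m for m in messages if m["role"] == "system"][:1] if keep_system else []
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    for m in reversed(rest):               # walk from newest to oldest
        if used + len(m["content"]) > max_chars:
            break
        kept.append(m)
        used += len(m["content"])
    return system + kept[::-1]             # restore chronological order

history = [
    {"role": "system", "content": "Stay focused on the current bug."},
    {"role": "user", "content": "A" * 50},
    {"role": "assistant", "content": "B" * 50},
    {"role": "user", "content": "C" * 50},
]
trimmed = truncate_history(history, max_chars=150)  # drops the oldest user turn
```

Keeping the System message while dropping the oldest turns preserves the standing instructions even as the conversation grows.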
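Strategy 14’s “one example per line” idea can be sketched as a few-shot prompt builder. The `Input:`/`Output:` labels are one common convention, not the only one:

```python
def few_shot_prompt(examples, query):
    """Assemble a completion prompt from (input, output) example pairs,
    then append the new query with an empty Output for the model to fill."""
    blocks = ["Input: {}\nOutput: {}".format(x, y) for x, y in examples]
    blocks.append("Input: {}\nOutput:".format(query))
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("The film was dull.", "negative"), ("A delightful surprise!", "positive")],
    "Solid acting, weak script.",
)
```

The model continues the pattern set by the examples, so varying the examples is often enough to steer the output without changing any code.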


Wow thanks for sharing your thoughts on this thread… I appreciate your point of view which is from a very different perspective or different angle from mine yet I think it’s pointing in the same direction…

I am crafting lengthy ChatGPT-4 prompts because it is limited to 25 messages per 3-hour period… and so far I never exceeded the limit, except once when I was talking about things more randomly before going to bed…

Usually I am using it to help with my projects, and I tend to cram a lot of things into one single message to make sure I don’t end up in a complicated back-and-forth like with ChatGPT-3.5 (which would be wasting my limited amount of messages)…

Also, because cGPT4 is a better model, I tend to use it for more important things (in terms of programming and production of code)…

This is one technique I was using all the time: saying something to give the inflection of the discussion, and then leading the AI in the desired direction by placing it in the appropriate mindset. I am not so bad at doing this multi-message approach in a single message when addressing cGPT4.

Those are very important suggestions, and they are valuable enough to be part of this thread.

It is important to mention that at times I am having a very specific kind of struggle: the AI is unable to understand that it should be inferring from what is very close in the scope of the recent messages…

I guess it may be because it has to do with optimizing for speed, but I might be wrong as this is only a guess…

I most likely will not do specifically that; instead, I will ask the AI (ChatGPT-3.5 and ChatGPT-4, the Plus version) to summarize the instructions, using bullet points to do so. I will also ask it 1) to first summarize what I am asking it to do, and then 2) to summarize what it will do, and then let it give me the code output or the answer; and if it is answering a question, I ask it to give me a complete analysis of what it explained to me…

This one is important. I will usually be concise with cGPT3.5; with the other model, I will be more detailed and will reformulate using synonyms when I am unsure if I was clear enough. Then, because it summarizes my request, I can quickly assess whether I have confused it, or help it understand my request better… and then it summarizes what it will be doing, and I quickly understand whether we have alignment…

I should probably more explicitly ask the AI to summarize what it will be doing in a step-by-step manner, but I already knew that one… All your suggestions so far demonstrate that you are a skilled user who knows all the best practices… Your suggestions are very valuable, and I will make sure I ask my Assistant to do its summary in a way where I instruct it more explicitly to explain its reasoning, and not only what it is going to do… This is something I thought I was doing, but not exactly…

Thanks @AlexDeM - even if this does not solve the problems I mention in this thread, it will at least make my overall experience way more productive…

I must admit that it would be important for me to focus on being less grumpy, and then use what you explain here. Additionally, I think I should not be shy about going back a few requests above and starting over from there, explicitly mentioning each thing I know the AI would otherwise miss in an initial prompt, instead of trying to figure out how to explain it all in the chronological order of the messages; the chat interface is already made to go back and create a new timeline of interactions…

Thanks for your reading and comments. Sorry for the late reply. I just got time to answer emails, messages, and posts,…
Now I got it: “limited to 25 messages per 3 hours period” - your account/access to GPT-4 is limited… let’s say 8 messages per hour. Is it a paid subscription? Is it worthing in your opinion? I didn’t know of such limits. I don’t access GPT-4 yet. It seems to me a bit “between a rock and a hard place” situation - a bit claustrophobic. I hope they charge a bit more when you get over these limits instead of that block/suspension thing.

Anyway, you found a way to adapt to this situation, as you said you’re good at it. I’m not sure if my post could be of any help since you have it under control, though not entirely satisfactory.
But don’t think you are alone - we are just divided right now, it’s different - as Mondlane said: “The struggle continues.”

When I said trivial here I didn’t want to minimize the problem anyone else could be facing.

I do have the impression that problems with the way the API or ChatGPT (Free and Plus) behaves, trivial or not, are of a different nature than problems accessing the API or the chatbots, and I was not saying that those other problems are not worthy of being looked at. But it is my perception that they are not of the same nature.

While technical problems and problems related to using a specific API are both important, they are not of the same nature. API-related problems can be more significant as they directly impact the user’s ability to use the API effectively, and require specialized knowledge to resolve. Therefore, having access to dedicated support for API-related problems is critical to ensuring that users can use the API effectively and achieve their goals.

But obviously, both kinds of problems require their own channels for solutions… I have the impression that problems with billing, and problems with access when the service is down and the status page is not updated, can lead to frustration for those experiencing them… I think that @AlexDeM is on the same page as me on this topic, but I just realized it might have led people to believe that other problems were not important… I just want to express that they are not of the same nature, not that they are unimportant… when I was using the word trivial, it was just to reflect the fact that those problems are easier to describe or explain… and maybe easier to solve…

The ChatGPT Plus subscription, the paid version, includes the same as the free version, but it is unlimited as far as I know (ChatGPT-3.5 is unlimited in the Plus version).

It also includes access to ChatGPT-4, which is available only in the paid version but comes with a limit…

I personally don’t have any strong opinion on the fact that, despite paying, you only get a limited version of ChatGPT-4, but I obviously do have a stronger opinion that it would be better if it were not limited…

I am not going to judge, because I have no idea what amount of resources is necessary to make all of that faster and more efficiently accessible to everyone…

@Luxcium totally relate to those challenges. Are you still facing issues with the crafting lengthy messages and inconsistency in context retention? My pre-seed startup is trying to tackle those 2 issues by making the process of adding content much more seamless and robust. We’re building an MVP right now and will have a version ready in the next week or so. Any interest in trying it out? Would love feedback or thoughts on whether our product helps those 2 pain points!


I’m sorry to hear that things have been complicated for you. I don’t know if you’ve found solutions to the problem since you posted the message. I too am struggling with GPT-4. Personally, I mostly use ChatGPT Plus and iOS, with the playground version as a complement.

I’m genuinely puzzled by the fact that the applications I use based on this technology are far from what I’ve experienced in the playground. Out of the box, ChatGPT performs way better than the raw version found in the playground.

The Custom Instruction Set has been a great help to me recently, but the System prompt has been there from the beginning in the context of GPT-3.5/4.0.

Now that there are 50 requests per 3 hours, I’m using ChatGPT-4 almost exclusively. It’s been going smoothly, but due to the limitations, I’ve had to make long and detailed prompts; this is probably beneficial compared with the many shorter prompts I use for cGPT3.5, and it could be making a difference. I had exactly the same situation yesterday, where I seemed to be going nowhere, asking the same thing in a loop.

I am signed in. I want to start a new thread. Please tell me how I can do this.

As you can see, the sentences above are all complete sentences. But the interface says that my messages aren’t clear and asks if I’m using complete sentences. This seems like a simple task for the forum interface, and yet it fails miserably.

I came to this forum to talk about the limitations of ChatGPT, and now I find myself talking about the limitations of the forum software.
It is clear that they are not using ChatGPT as an engine for this forum. I find this to be pretty funny, in an ironic way.


Are you trying to start a new thread? Is there a reason you’re posting in this old thread?

Yes. I wanted to start a new thread in my Android app. But I could not find the ‘compose new thread’ anywhere in the ui. Or whatever. I’ve always used reddit on my pc, and this app is unfamiliar.

So I posted my question to an old thread. My android is running on an Amazon fire pad. I’ve unlocked the Google play store and use that to install apps in the android environment.

Can you help me figure out how to start a new thread in the android app?

You don’t see the new topic button?

Oops, part of that doesn’t make sense. I thought I was responding to a post on Reddit. Just the same, I still can’t figure out how to start a new thread in the browser interface.