As OpenAI releases new models, it opens the previous ones to the public, and the daily news about ChatGPT, even when negative, catches everyone’s attention.
That floods this forum with what you call “trivial problems” - but it doesn’t mean that problems like yours don’t abound in this community.
I have a different point of view on the struggle each of us faces individually.
GPT-4 is aimed at business, and with the exponential growth of users on the other models, OpenAI has diverted all its Support to where the money comes from.
They were unprepared to meet the current public demand - and the previous models are becoming slower.
I think OpenAI is well aware of this, so I also believe that getting OpenAI’s attention would be unnecessary. And I don’t know if it would be helpful.
If we share the same “non-trivial” problems, we would be a minority without priority in the face of, for example, the “non-acceptance of credit cards from India.”
If we are looking for technical solutions, I believe this Community Forum is, and will be for a long time, the best place to help each other. However, we have to “dig” until we find our solutions.
I don’t consider myself experienced enough to help you. And since I don’t have high expectations for GPT-4, I don’t understand what limitations force someone to write “lengthy messages” for this model - the opposite of the recommended strategy for the previous models.
If you allow me, I would like to post here, on this topic, some guidelines (strategies for the most popular models).
If you consider this message to be diverting from the purpose of this topic, please let me know, and I will delete this message.
Some strategies can help you (us) with your (our) challenges:
1. Break down the thoughts: Instead of fitting all ideas into a single message, break them into smaller, more concise messages. This helps convey ideas more effectively and avoids hitting the model’s message length limits. It also allows for a more interactive and dynamic conversation with the model, facilitating efficient communication. The strategies recommended for GPT-3.5 are also applicable to GPT-4;
2. Use bullet points or numbered lists: When conveying multiple points, consider using bullet points or numbered lists to structure the messages. It helps the model understand and respond to each point separately, making the conversation more organized and coherent;
3. Be clear and concise: When crafting messages, prioritize clarity and conciseness. Avoid unnecessary details or lengthy explanations. Stick to the main points to convey, and use simple and direct language;
4. Clear instructions: Provide explicit and specific instructions to the model. Specify the desired format or type of response, and ask the model to think step-by-step or explain its reasoning. It guides the model toward generating a more focused and relevant response;
5. Reframe the question: If stuck in a loop where the model keeps generating new scripts without solving the problem, try reframing the question. Be specific and concise, and clearly state what the model should do. For instance, ask the model for a summary of the previous responses or focus on a specific aspect of the problem. Mind that the models tend to patronize users (almost Cartesian): they provide many plain premises around a requested solution and big chunks of code where only one or two lines are helpful. It is not necessary to copy the entire code - select only the lines that apply to the original user’s code;
6. Use the System message strategically: The System message at the beginning of the conversation can set the context and guide the model’s behavior. Include instructions or reminders in the System message to avoid generating new scripts and to focus on summarizing or addressing the current problem. The model will retain this information for longer;
7. Retain and reiterate important information in the User message: To help the model recall information, include it in the User message rather than the Assistant message. The model gives more weight to the User message and is more likely to retain it in the context. Consider repeating it in subsequent User messages - this reinforces the information and improves the model’s ability to recall it later in the conversation;
8. Take control of the conversation: The user can guide and direct the conversation. If the model keeps generating new scripts without resolving the issue, interrupt the loop by asking it to stop generating new ideas and instead provide a summary or a specific solution to the problem. Experimenting with different techniques, making clear instructions, and taking control of the conversation helps to break out of a problem-solving loop and get more focused and relevant responses from the model;
9. Review and summarize: If the conversation is getting complex or the model loses track of the context - review and summarize the previous interactions. It helps to get a clear view of the current status of the problem, proposed solutions, and any issues or inconsistencies, refresh the model’s memory, and ensure the necessary context to provide accurate responses;
10. Maintain focused conversations: If the model forgets information beyond the last two turns, keep the conversation focused and avoid unnecessary back-and-forth. Limit the number of messages and avoid scattering the same information across many of them - a compact conversation helps the model retain the context;
11. Explicitly reference previous messages: When the model needs to refer to past messaging, then use explicit references in the instructions. For instance, “As we discussed earlier…” or “Referring to the previous message…”. It helps to trigger the model’s memory and improve its ability to retain and recall relevant information. The user is responsible for keeping track of the conversation - quoting the model’s last response is a helpful way to refer to it. For instance, "You said: ‘… quote …’ ", followed by requests related to that quote;
12. Try context window settings: Some platforms or applications may have limitations on the context window size - which can affect the model’s ability to retain information. Try to adjust the context window settings to see if it improves the model’s ability to recall the earlier context. It can be done by truncating or extending the conversation history within the platform’s allowed limits to optimize the provided context;
13. Context management: It plays a crucial role in influencing the behavior of language models, including GPT-3.5. All language models have limitations in context retention, so experimentation and adjustment are necessary to optimize their performance. Strategic management of context - repeating important information and keeping conversations focused - improves the consistency of the model’s context retention and enhances its ability to recall earlier information during the problem-solving process;
14. Explore prompt engineering: It is possible to steer a completion engine using only natural language and no code, through a process called “prompt engineering.” To steer the engine toward a specific domain, the user provides a series of prompts relevant to the subject; the model then generates text similar to the provided prompts but with unique variations. Use techniques such as tweaking the prompt, adding context, or changing the wording to guide the model toward the desired behavior. For extensive datasets: prepare a dataset of texts or case studies relevant to the subject matter - one example per line in plain text format. Try different approaches to see what works best for a specific use case; each conversation with the language model can be unique. Adjust the prompt and the parameters as needed, based on the model’s responses, to generate relevant text.
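To make strategies 6 and 7 concrete, here is a minimal Python sketch of how a request could be assembled. The dict shape mirrors the Chat Completions `messages` format (`role`/`content`); the helper name `build_messages` and its parameters are hypothetical, just for illustration:

```python
def build_messages(system_instructions, history, user_request, key_facts=None):
    """Assemble a Chat Completions-style `messages` list.

    Strategy 6: the System message pins down behavior for the whole chat.
    Strategy 7: key facts are restated inside the latest User message,
    where the model weighs them most heavily.
    """
    messages = [{"role": "system", "content": system_instructions}]
    messages.extend(history)  # prior {"role": ..., "content": ...} turns

    content = user_request
    if key_facts:
        reminders = "\n".join(f"- {fact}" for fact in key_facts)
        content = f"Reminder of key facts:\n{reminders}\n\n{user_request}"
    messages.append({"role": "user", "content": content})
    return messages
```

For example, to stop the model from generating new scripts (strategy 8), the System message could be "Do not write new scripts; only fix the code I provide," while the current constraints travel with every User turn.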
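Strategy 12 (truncating the conversation history so it fits the context window) can be sketched like this. The token estimate is deliberately crude (roughly four characters per token) - a real client would use a proper tokenizer - and the helper name is hypothetical:

```python
def truncate_history(messages, max_tokens=3000):
    """Trim old turns so the context fits a rough token budget.

    The System message (first entry) is always kept; the most recent
    turns are added back, newest first, until the budget runs out.
    """
    def rough_tokens(msg):
        # Crude estimate: ~4 characters per token.
        return max(1, len(msg["content"]) // 4)

    system, turns = messages[0], messages[1:]
    budget = max_tokens - rough_tokens(system)

    kept = []
    for msg in reversed(turns):  # walk from the newest turn backward
        cost = rough_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return [system] + list(reversed(kept))
```

The design choice here is to drop the oldest turns first, which matches strategy 9: anything important from the dropped turns should be re-summarized into a fresh User message.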
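Finally, strategy 14’s “one example per line” idea can be turned into a simple few-shot prompt builder. This is only a sketch: the `Input:`/`Output:` labels are an arbitrary convention I chose for illustration, not something the models require:

```python
def few_shot_prompt(pairs, new_input):
    """Format (input, output) example pairs into a completion-style prompt.

    The model is nudged to imitate the pattern of the examples and
    fill in the final, empty Output.
    """
    blocks = [f"Input: {i}\nOutput: {o}" for i, o in pairs]
    blocks.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(blocks)
```

Two or three well-chosen examples are often enough to lock the model into a format; more examples cost context-window space, which ties back to strategy 12.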