Do you also have this problem, or maybe you have found the solution? Does OpenAI have official information about this situation? Is anyone aware of OpenAI's position on the topic?

Do you also relate to this situation, or have you maybe found a way to overcome the problem?

I am facing a dilemma and I keep ending up in the same eternal loop that I described in another thread, but this time I have what it takes to demonstrate the problematic behaviour of the ChatBot and the discomfort I am experiencing. Too often now, crafting effective prompts for a ChatBot is like trying to find the right seat in a crowded room, where being too close to others can be uncomfortable.

tl;dr

If you want to save time, you can scroll directly down to the example I made that describes exactly the real problem with ChatGPT-3.5 and ChatGPT-text-davinci-002-render-sha (used in the example).

What I am talking about today should make clear to everyone the problem I described in my post of late February, the thread I mentioned earlier:

How to get rid of the discomfort?

Ultimately, the goal is not only to seek advice on how to create effective prompts, but also to find out whether other people find it complicated to use the AI because, from one prompt to the next, you feel like you need to repeat everything you said so that the AI can stay in context.

Like a three-way dilemma

Crafting the Perfect Prompt: The Challenge of Precise Communication

Crafting effective prompts for the lower versions of ChatGPT (anything below ChatGPT-4) can be a challenging task. The goal is to communicate precisely what you want, in a way that the AI will understand, so that it provides a satisfactory response.

However, finding the right balance between too much or too little information can be tricky. In this context, this text discusses the author’s experience with crafting prompts for the ChatBot, including the challenges of being too detailed, the benefits of being more focused and concise, and the dilemma of neither approach providing the desired results. The author also seeks advice on how to create effective prompts for the ChatBot.

Overdoing It: When Too Much Detail Confuses the ChatBot

When using the ChatBot, I’ve found that crafting the perfect prompt can be a challenging task. I want to make sure that the ChatBot understands my request precisely, so I tend to include every detail at the beginning. However, I’ve noticed that when I overdo it, the prompt becomes too long and complicated, which can confuse the ChatBot instead of clarifying the request.

Seeking Advice: How to Craft Effective Prompts for the ChatBot

To solve this problem, I’ve tried to be more focused and concise when crafting my prompts. Instead of including every detail, I try to provide the essential information necessary for the ChatBot to understand my request. However, this has resulted in a new problem. The ChatBot sometimes fails to pick up on the nuances of my request, resulting in an unsatisfactory response.

The Dilemma: Neither Approach Provides the Desired Results

This creates a dilemma for me because I’m not sure which approach to take to get the desired result. On one hand, I want to include every detail to ensure the ChatBot fully understands my request. On the other hand, I want to be more focused and concise, but this approach doesn’t always work either.

Overall, it’s a frustrating and confusing experience, and I’m struggling to find a solution. If anyone has any advice or insight on how to craft effective prompts for the ChatBot, it would be greatly appreciated.

It all started from a trivial idea: Asking ChatGPT to help with a simple task

The initial situation is that I naïvely wanted to save time. (Sorry for my French accent, I am from Québec City.) I do not know if it has already happened to you too: after deciding to use ChatGPT because it will be faster and would save you time, you end up wasting time because you have to re-explain yourself so many times.

An example, and you will understand

Well, I know that many professional prompters have solutions for those problems, and I would love to hear from them in this thread, but for the rest of us I think it cannot be only me who does not understand what wizardry is required to address the beloved AI…

This is a shortened version of the prompts, just so you understand

To keep it short, I will reduce the text from the real prompt and only try to illustrate the bad behaviour of the AI. Please understand that I started three different sessions, and that even after I realized it would be faster to do it all by myself, I was still concerned about how to craft the best prompt to finally get the wanted result.

Shortened conversation:

From the list provided <a list was provided to the ChatBot> please create a new list based on this template`{[key element or descriptor]: [numeric or alphanumeric value]}` 
Sure, I'll use the templates you provided to create a condensed bullet list of the information:

* {Current NodeJS version: v20}
* {Current JavaScript version: EcmaScript2023}
* {Current TypeScript version: v5.1}
* {[Operating system]: Fedora v38}
* {[Kernel]: 6.2.12-200.fc37.x86_64}
* {[Computer name]: luxcium@localhost}
* {[Shell]: bash 5.2.15}
* {[Terminal]: vscode}
* {[DE]: Plasma 5.27.4}
* {[WM]: KWin}
* {[WM Theme]: Sunset-dark}
* {[Locale]: fr_CA.utf8}
* {[System information - Uptime]: 23 mins}
* {[CPU]: Intel i9-10900X (20) @ 4.8GHz [45.0°C]}
* {[Memory]: 7.08GiB / 46.72GiB (15%)}
* {[GPU]: NVIDIA TITAN Xp}
* {[GPU Driver]: NVIDIA 530.30.02}
* {[Resolution]: 3840x2160 @ 60.00Hz} 
Please remove `[` and `]` it was only to delimit the example
but keep the `{` and `}` as they are important part
Please replace WM by Window Manager and 
please replace DE by Desktop Environment also please.
Sure, here's the updated list with the requested changes:
[...]
{Desktop Environment: Plasma 5.27.4}
{Window Manager: KWin}
[...]
{System information - Uptime: 23 mins}
{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}
[...] 
please, keeping the same templated format infer how to separate
the element based on the following list first information that will stay 
true in time in the short or long term and then asses what is information 
that is changing over time, so to list information that is variable in time last:
{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C} should become {CPU: Intel i9-10900X (20) @ 4.8GHz}
{CPU  temperature: 45.0°C}
and 
{System information - Uptime: 23 mins} should be listed last same for the temperature 

THEN THE FATAL MOMENT:

Here's the updated list with the requested changes:
[...]
{[Kernel]: 6.2.12-200.fc37.x86_64}
{[Computer name]: luxcium@localhost}
{[Network - Local IP]: 192.168.15.15}
{[Shell]: bash 5.2.15}
[...]
{[Resolution]: 3840x2160 @ 60.00Hz}
{CPU: Intel i9-10900X (20) @ 4.8GHz}
{CPU temperature: 45.0°C}
{System information - Uptime: 23 mins}
{GPU temperature: 52.0°C}

The true problem is the lack of context

The fact that the AI hallucinated a temperature of 52° is not a problem to me, but can someone please explain this to me: if the AI cannot infer from the context that I want a new list without the [ and ], then why can it remember that the list previously had that form…
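(As an aside: the edits requested in the conversation above are fully deterministic, so if you only want the result rather than the experiment, a few lines of Python do them reliably. A minimal sketch, using three sample lines from the list; the helper name `clean` is mine, not from the thread:)

```python
import re

# Illustrative sketch only: strip the square brackets from each key and
# expand the WM/DE abbreviations, exactly as the prompt asked. Three
# sample lines stand in for the full list.

lines = [
    "{[Operating system]: Fedora v38}",
    "{[DE]: Plasma 5.27.4}",
    "{[WM]: KWin}",
]

def clean(line):
    line = re.sub(r"\[([^\]]*)\]", r"\1", line)   # drop [ and ], keep { and }
    line = line.replace("WM", "Window Manager")   # expand abbreviations
    line = line.replace("DE", "Desktop Environment")
    return line

for line in lines:
    print(clean(line))
# → {Operating system: Fedora v38}
# → {Desktop Environment: Plasma 5.27.4}
# → {Window Manager: KWin}
```

Of course, the point of the thread is that the ChatBot should manage this from context; the sketch just shows how small the actual task is.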

This is not about explaining why the problem is happening

The conversation I am hoping to initiate is centered around two main points. Firstly, I would greatly appreciate any suggestions for a short-term solution to an issue I am facing with ChatGPT-text-davinci-002-render-sha (But it was the same with ChatGPT Plus ChatGPT-3.5). Specifically, I am experiencing an infinite loop problem, and I am seeking advice on how to get around it.

Secondly, I am curious about the expected behavior of the ChatBot. I have noticed that since around February of this year, the ChatBot has been behaving strangely. It requires me to re-explain each previous step in subsequent prompts, which can be both frustrating and time-consuming.

To elaborate, I have a question about the ChatBot's "4096 Tokens" limit. I would like to know whether this means that the AI will only use the last prompt to infer, or whether it will utilize all previous prompts in scope to do so.

Moreover, even when I include all of the previous information in my prompt, the AI output is still not what I requested. I have tried mentioning the information early, repeating it, or even putting it at the end of the prompt, but I continue to receive the same response.
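For what it's worth, my understanding (an assumption on my part, not an official OpenAI statement) is that the model sees as much of the recent conversation as fits in its context window, roughly 4096 tokens for this model family, and older turns silently fall out of scope. A minimal sketch of that sliding-window behaviour, using a crude ~4-characters-per-token estimate (real services tokenize with a BPE tokenizer such as tiktoken; `trim_history` is a hypothetical name):

```python
# Rough sketch of how a chat interface might trim history to fit a fixed
# context window. The ~4 chars/token ratio is a crude approximation.

def approx_tokens(text):
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def trim_history(messages, budget=4096):
    """Keep the most recent messages whose combined estimate fits the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = approx_tokens(msg)
        if used + cost > budget:
            break                    # older turns fall out of scope
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

history = ["prompt 1 " * 500, "reply 1 " * 500, "prompt 2 " * 500, "latest prompt"]
print(len(trim_history(history, budget=2000)))  # → 2 (oldest turns dropped)
```

If something like this is what happens, it would explain why early instructions (remove the brackets) vanish while the list itself, repeated in every response, stays in scope.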

Overall, I am seeking a better understanding of the ChatBot’s behavior and how to work around the infinite loop problem. Any advice or insight on these matters would be greatly appreciated.

Based on the raw text you provided, here are the relevant pieces of information that I was able to extract, and the corresponding templates:

  • Current NodeJS version: v20
  • Current JavaScript version: EcmaScript2023
  • Current TypeScript version: v5.1
  • Fedora version: v38
  • Kernel version: 6.2.12-200.fc37.x86_64
  • Shell: bash 5.2.15
  • Desktop environment: Plasma 5.27.4
  • Window manager: KWin
  • Window manager theme: Sunset-dark
  • Locale: fr_CA.utf8
  • Uptime: 23 mins
  • CPU: Intel i9-10900X (20) @ 4.8GHz [45.0°C]
  • Memory: 7.08GiB / 46.72GiB (15%)
  • Primary GPU: NVIDIA TITAN Xp
  • NVIDIA driver version: 530.30.02
  • Resolutions: 1920x1080 @ 60.00Hz, 3840x2160 @

Note that all of the above templates use the format {Current [key element] version: [numeric or alphanumeric value]}, except for the Fedora version, which uses the format { [key element or descriptor]: [numeric value]}.

I am not expecting something impossible

I understand the AI is not perfect, but I do believe that OpenAI strives for perfection. I also understand that perhaps OpenAI will not want to make changes to the model I am using (the free version of ChatGPT I am using is based on an older model, model=text-davinci-002-render-sha). But I hope everyone will agree that it is a problem.

@Luxcium
I’ve been following your comments and threads - I tried to solve the puzzle a while ago, but it already had a solution - and I’ve been following several topics and trying to provide solutions for prompt engineering, an area I’ve waited a long time to become popularly accessible.

I hope you don’t mind my long comments (for humans) again - please let me know if you want me to take them down. I realize you are very fluent in writing (for humans), so perhaps you expect too much from an LLM - reading fluency and understanding like a human's. The criticisms are purely technical, regarding the prompt itself - not about you.

First, some considerations:

  1. Of course, many of us encounter the same challenges - but I’ve noticed that the models don’t seem to share humans' more complex reading ability;

  2. Even if the models are capable of reasonable written fluency, originating from human examples in the training dataset, the understanding of the models is still “robotic” and “obedient” - so keep to conciseness;

  3. Of course, I didn’t spend as much time on the solution because I took advantage of your test as a draft prompt - I wouldn’t repeat it the same way - so it reinforced the “how-would-I-do” even more;

  4. Context maintenance still seems to be an issue. But in my case, it was the opposite. I wanted to leave it in a step-by-step format to compare to yours. ChatGPT insisted on skipping steps by presenting the ready-made list of the first two successful tests - I had to ask it to forget the previous results - too much context-maintenance;

  5. ChatGPT has timeline issues - each time you ask for a GMT date and time, the model answers something different. Even with news updates, it seems to confuse what is “before” and “after” - especially concerning the end of training in September 2021;

That said, “how-would-I-do”:

Issues:
The template isn’t well described since there are different formats in the list ([...]) - and too much description.
The list breaks down the instructions.

Issues:
Separated instructions demand more context memory.
Needless “please”.
Needless justifications: it was only to..., as they are important part....
Use of stop words (and) to merge different instructions. Please, check Embeddings in OpenAI documentation.

Solution:
Concise instructions first (“consider”), then the list (“below”).
Numbered itemization of instructions.
Short description of the template.
Punctuation is essential - check it in the tokenizer.
Keep standard references to contents: use “`…`” - the backtick character (monospaced fixed width) - or “```…```” for code, even if the user has no preview panel available in the interface. The ChatGPT chat interface uses a markdown style similar to the forum editor: {key_descriptor: key_value}, WM, DE, etc.
Use a few capital letter words to request model focus.
Aggregate the detailed instructions (Instructions 1. to 4.) in one prompt.
IGNORE... and Instruction 5. for clean previous results in context memory.

IGNORE requested changes in previous prompts, as a fresh start: consider as a new list.
Please consider the following six instructions:
1. Template: `{key_descriptor: key_value}`;
2. Remove all `[` and `]` in key_descriptor ONLY;
3. Replace all `WM` with `Window Manager` in key_descriptor;
4. Replace all `DE` with `Desktop Environment` in key_descriptor;
5. Do NOT alter the order, add, or remove any data or information not given in these instructions;
6. WAIT for further instructions in the next few turns to edit the list below.

Apply these instructions to the list below:
* {Current NodeJS version: v20}
* {Current JavaScript version: EcmaScript2023}
* {Current TypeScript version: v5.1}
* {[Operating system]: Fedora v38}
* {[Kernel]: 6.2.12-200.fc37.x86_64}
* {[Computer name]: luxcium@localhost}
* {[Shell]: bash 5.2.15}
* {[Terminal]: vscode}
* {[DE]: Plasma 5.27.4}
* {[WM]: KWin}
* {[WM Theme]: Sunset-dark}
* {[Locale]: fr_CA.utf8}
* {[System information - Uptime]: 23 mins}
* {[CPU]: Intel i9-10900X (20) @ 4.8GHz [45.0°C]}
* {[Memory]: 7.08GiB / 46.72GiB (15%)}
* {[GPU]: NVIDIA TITAN Xp}
* {[GPU Driver]: NVIDIA 530.30.02}
* {[Resolution]: 3840x2160 @ 60.00Hz}

ChatGPT:

Here's the updated list following the given instructions:

* {Current NodeJS version: v20}
* {Current JavaScript version: EcmaScript2023}
* {Current TypeScript version: v5.1}
* {Operating system: Fedora v38}
* {Kernel: 6.2.12-200.fc37.x86_64}
* {Computer name: luxcium@localhost}
* {Shell: bash 5.2.15}
* {Terminal: vscode}
* {Desktop Environment: Plasma 5.27.4}
* {Window Manager: KWin}
* {Window Manager Theme: Sunset-dark}
* {Locale: fr_CA.utf8}
* {System information - Uptime: 23 mins}
* {CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}
* {Memory: 7.08GiB / 46.72GiB (15%)}
* {GPU: NVIDIA TITAN Xp}
* {GPU Driver: NVIDIA 530.30.02}
* {Resolution: 3840x2160 @ 60.00Hz}

Issues:
No punctuation for long instructions.
Contradictory instructions confuse the model. As in keeping the same templated format - the model considered the template as the user’s version, and not its own version;
Different words describing the same tasks: list and asses (with misspellings).
Confusing timing references: true in time in the short or long term..., changing over time..., variable in time last....

Solution:
Maintain context through reference to the previous model response when pure-text only: repeat your..., you said... - the models are text-oriented.
Lists, tables, graphs, etc., are considered objects despite containing text - they also have a format - once the model provided them, the user becomes the object’s owner. Here's your {object}....
Paste these objects in consecutive prompts to maintain non-textual metadata context.
Consider the context importance order:

  1. System role - exceptions: a few models, such as some 3.5 variants, give the System role weak importance, except when the User role grants it: Follow the instructions in System role.... The user has no access to the System role from the ChatGPT chat interface (it is available through the API);
  2. User role - it has more command importance than any other role, and it can affect the System role: Ignore instruction 3 in System role...;
  3. Assistant role - the model itself does not give much importance to its responses in context, except under context control by the user command during conversation;

Avoid giving the model time references through list position - as a language model, it understands text better than timelines.
As software, it understands step-by-step procedures ([move]... then... [change] each step) better than a general procedure ([all moves]... then... [change all moved...]).

Consider now the changing instructions below ONLY in this order:
1. Move the line `{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}` to the last position;
2. Replace the moved line `{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}` with:
`{CPU: Intel i9-10900X (20) @ 4.8GHz}`;
3. Append the line `{CPU temperature: 45.0°C}` to the last position of the list;
4. Move the line `{System information - Uptime: 23 mins}` to the last position of the list;
5. Do NOT add or remove any other character, text, data, format, or information than described in the instructions above;

Apply the instructions above in the list below:
{Current NodeJS version: v20}
{Current JavaScript version: EcmaScript2023}
{Current TypeScript version: v5.1}
{Operating system: Fedora v38}
{Kernel: 6.2.12-200.fc37.x86_64}
{Computer name: luxcium@localhost}
{Shell: bash 5.2.15}
{Terminal: vscode}
{Desktop Environment: Plasma 5.27.4}
{Window Manager: KWin}
{Window Manager Theme: Sunset-dark}
{Locale: fr_CA.utf8}
{System information - Uptime: 23 mins}
{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}
{Memory: 7.08GiB / 46.72GiB (15%)}
{GPU: NVIDIA TITAN Xp}
{GPU Driver: NVIDIA 530.30.02}
{Resolution: 3840x2160 @ 60.00Hz}

ChatGPT:

Here is the updated list after following the given instructions:

* {Current NodeJS version: v20}
* {Current JavaScript version: EcmaScript2023}
* {Current TypeScript version: v5.1}
* {Operating system: Fedora v38}
* {Kernel: 6.2.12-200.fc37.x86_64}
* {Computer name: luxcium@localhost}
* {Shell: bash 5.2.15}
* {Terminal: vscode}
* {Desktop Environment: Plasma 5.27.4}
* {Window Manager: KWin}
* {Window Manager Theme: Sunset-dark}
* {Locale: fr_CA.utf8}
* {Memory: 7.08GiB / 46.72GiB (15%)}
* {GPU: NVIDIA TITAN Xp}
* {GPU Driver: NVIDIA 530.30.02}
* {Resolution: 3840x2160 @ 60.00Hz}
* {CPU: Intel i9-10900X (20) @ 4.8GHz}
* {CPU temperature: 45.0°C}
* {System information - Uptime: 23 mins}
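(Again, for anyone who just wants the result: the reordering steps above are mechanical, so a short script applies them deterministically, with no context window to worry about. A sketch over a few sample lines from the list, mirroring instructions 1 to 4:)

```python
# Sketch of doing the reordering without prompting: split the CPU line
# into a static part and a variable part, then push the entries that
# change over time (CPU temperature, uptime) to the end of the list.
# A few sample lines stand in for the full list.

items = [
    "{Desktop Environment: Plasma 5.27.4}",
    "{System information - Uptime: 23 mins}",
    "{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}",
    "{Memory: 7.08GiB / 46.72GiB (15%)}",
]

items.remove("{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}")
items.append("{CPU: Intel i9-10900X (20) @ 4.8GHz}")    # static part
items.append("{CPU temperature: 45.0°C}")               # variable part
items.remove("{System information - Uptime: 23 mins}")
items.append("{System information - Uptime: 23 mins}")  # uptime goes last

for item in items:
    print(item)
```

The interesting part of the exercise, of course, is getting the model to do this from concise instructions, which the prompt above manages.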

Please, I appreciate your comments and critiques.


Wow, that’s very interesting. I will need to read this again later, when I am on my computer and able to get into the details.

For the moment, I want to say to you and anyone reading this that my puzzle is an open question for anyone; I was hoping to get as many different examples as possible…

You can still give me your take on how to make the AI create the 2x3 or 3x2 table, giving it as few pieces of information as possible (but as much as needed to get the desired results). I am still stuck. I know I should explain everything to it, but I initially ended up in that position just because I was trying to understand what the AI understood, and apparently it is not easy for it to think in a two-dimensional way (not in a spatial manner, but in terms of the problem having some kind of two-dimensional aspect to it)…

It is more like a thought experiment that is still open to everyone… It is not a quiz, but more a way a human user can puzzle their beloved ChatGPT AI.