Do you relate to this situation too, or have you perhaps found a way to overcome the problem?
I am facing a dilemma, and I keep ending up in the same eternal loop that I described in another thread, but this time I have what it takes to demonstrate the problematic behaviour of the ChatBot and the discomfort I am experiencing. Too often now, crafting effective prompts for a ChatBot is like trying to find the right seat in a crowded room, where being too close to others can be uncomfortable.
tl;dr
If you want to save time, you can scroll directly down to the example I made, which describes exactly the real problem with ChatGPT-3.5 and ChatGPT-text-davinci-002-render-sha (the model used in the example).
What I am talking about today will make clear to everyone the problem I described in my post of late February, the thread I mentioned earlier:
How to get rid of the discomfort ?
Ultimately, the goal is not only to seek advice on how to create effective prompts, but also to find out whether others find the AI complicated to use because, from one prompt to the next, you feel like you need to repeat everything you said just so the AI can stay in context.
It feels like a three-way dilemma.
Crafting the Perfect Prompt: The Challenge of Precise Communication
Crafting effective prompts for the earlier versions of ChatGPT (anything below ChatGPT-4) can be a challenging task. The goal is to communicate precisely what you want, in a way that the AI will understand and that will yield a satisfactory response.
However, finding the right balance between too much and too little information can be tricky. In this context, this post discusses my experience with crafting prompts for the ChatBot, including the challenges of being too detailed, the benefits of being more focused and concise, and the dilemma of neither approach providing the desired results. I am also seeking advice on how to create effective prompts for the ChatBot.
Overdoing It: When Too Much Detail Confuses the ChatBot
When using the ChatBot, I’ve found that crafting the perfect prompt can be a challenging task. I want to make sure that the ChatBot understands my request precisely, so I tend to include every detail at the beginning. However, I’ve noticed that when I overdo it, the prompt becomes too long and complicated, which can confuse the ChatBot instead of clarifying the request.
Seeking Advice: How to Craft Effective Prompts for the ChatBot
To solve this problem, I’ve tried to be more focused and concise when crafting my prompts. Instead of including every detail, I try to provide the essential information necessary for the ChatBot to understand my request. However, this has resulted in a new problem. The ChatBot sometimes fails to pick up on the nuances of my request, resulting in an unsatisfactory response.
The Dilemma: Neither Approach Provides the Desired Results
This creates a dilemma for me because I’m not sure which approach to take to get the desired result. On one hand, I want to include every detail to ensure the ChatBot fully understands my request. On the other hand, I want to be more focused and concise, but this approach doesn’t always work either.
Overall, it’s a frustrating and confusing experience, and I’m struggling to find a solution. If anyone has any advice or insight on how to craft effective prompts for the ChatBot, it would be greatly appreciated.
It all started from a trivial idea: Asking ChatGPT to help with a simple task
The initial situation is that I naïvely wanted to save time. (Sorry for my French accent, I am from Québec City.) I do not know if it has happened to you too: after deciding to use ChatGPT because it will be faster and will save you time, you end up wasting time because you have to re-explain yourself so many times.
An example and you will understand
Well, I know that many professional prompters have solutions for these problems, and I would love to hear from them in this thread, but for the rest of us, I cannot be the only one who does not understand what wizardry is required to address the beloved AI…
This is a shortened version of the prompts just for you to understand
To keep it short, I will reduce the text from the real prompts and only try to illustrate the bad behaviour of the AI. Please understand that I started 3 different sessions, and that even after I realized it would be faster to do it all by myself, I was still concerned about how to craft the best prompt to finally get the wanted result.
Shortened conversation:
From the list provided <a list was provided to the ChatBot> please create a new list based on this template: `{[key element or descriptor]: [numeric or alphanumeric value]}`
Sure, I'll use the templates you provided to create a condensed bullet list of the information:
* {Current NodeJS version: v20}
* {Current JavaScript version: EcmaScript2023}
* {Current TypeScript version: v5.1}
* {[Operating system]: Fedora v38}
* {[Kernel]: 6.2.12-200.fc37.x86_64}
* {[Computer name]: luxcium@localhost}
* {[Shell]: bash 5.2.15}
* {[Terminal]: vscode}
* {[DE]: Plasma 5.27.4}
* {[WM]: KWin}
* {[WM Theme]: Sunset-dark}
* {[Locale]: fr_CA.utf8}
* {[System information - Uptime]: 23 mins}
* {[CPU]: Intel i9-10900X (20) @ 4.8GHz [45.0°C]}
* {[Memory]: 7.08GiB / 46.72GiB (15%)}
* {[GPU]: NVIDIA TITAN Xp}
* {[GPU Driver]: NVIDIA 530.30.02}
* {[Resolution]: 3840x2160 @ 60.00Hz}
Please remove `[` and `]`; they were only there to delimit the example,
but keep the `{` and `}` as they are an important part.
Please also replace WM with Window Manager
and DE with Desktop Environment.
Sure, here's the updated list with the requested changes:
[...]
{Desktop Environment: Plasma 5.27.4}
{Window Manager: KWin}
[...]
{System information - Uptime: 23 mins}
{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C}
[...]
Please, keeping the same templated format, infer how to separate
the elements: first list the information that will stay
true over time (in the short or long term), then assess which information
changes over time, so that the time-varying information is listed last:
{CPU: Intel i9-10900X (20) @ 4.8GHz, temperature: 45.0°C} should become {CPU: Intel i9-10900X (20) @ 4.8GHz}
{CPU temperature: 45.0°C}
and
{System information - Uptime: 23 mins} should be listed last, and the same goes for the temperature
THEN THE FATAL MOMENT:
Here's the updated list with the requested changes:
[...]
{[Kernel]: 6.2.12-200.fc37.x86_64}
{[Computer name]: luxcium@localhost}
{[Network - Local IP]: 192.168.15.15}
{[Shell]: bash 5.2.15}
[...]
{[Resolution]: 3840x2160 @ 60.00Hz}
{CPU: Intel i9-10900X (20) @ 4.8GHz}
{CPU temperature: 45.0°C}
{System information - Uptime: 23 mins}
{GPU temperature: 52.0°C}
The true problem is the lack of context
The fact that the AI hallucinated a temperature of 52° is not a problem to me, but can someone please explain this to me: if the AI cannot infer from the context that I want a new list without the [ and ], then why can it remember that the list was previously of that form…
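As an aside, the renaming steps I kept re-prompting for are deterministic, so they could have been scripted locally in the first place. Here is a minimal sketch (the function name `clean_entry` and the sample entries are my own, for illustration only):

```python
# Hypothetical local version of the transformation requested in the chat:
# drop the [ ] delimiters, keep the { }, and expand the WM/DE abbreviations.

ABBREVIATIONS = {"WM": "Window Manager", "DE": "Desktop Environment"}

def clean_entry(entry: str) -> str:
    """Remove [ and ] but keep { and }, and expand known abbreviations."""
    entry = entry.replace("[", "").replace("]", "")
    key, _, value = entry.strip("{}").partition(": ")
    key = ABBREVIATIONS.get(key, key)          # expand WM/DE if present
    return "{" + key + ": " + value + "}"

entries = [
    "{[DE]: Plasma 5.27.4}",
    "{[WM]: KWin}",
    "{[Shell]: bash 5.2.15}",
]

for e in entries:
    print(clean_entry(e))
# {Desktop Environment: Plasma 5.27.4}
# {Window Manager: KWin}
# {Shell: bash 5.2.15}
```

Of course, the whole point of asking the AI was to avoid writing this by hand, which is exactly the time-saving trade-off I am complaining about.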
This is not about explaining why the problem is happening
The conversation I am hoping to initiate is centered around two main points. Firstly, I would greatly appreciate any suggestions for a short-term solution to an issue I am facing with ChatGPT-text-davinci-002-render-sha (it was the same with ChatGPT-3.5 on ChatGPT Plus). Specifically, I am experiencing an infinite-loop problem, and I am seeking advice on how to get around it.
Secondly, I am curious about the expected behavior of the ChatBot. I have noticed that since around February of this year, the ChatBot has been behaving strangely. It requires me to re-explain each previous step in subsequent prompts, which can be both frustrating and time-consuming.
To elaborate, I have a question about the ChatBot’s “4096 tokens” limit. I would like to know whether this means that the AI will infer only from the last prompt, or whether it will use all previous prompts still in scope to do so.
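My working assumption (not something I can confirm from OpenAI) is that the model sees as many of the most recent turns as fit in that fixed token budget, so the oldest turns silently fall out of scope. A rough sketch of that behaviour, approximating token counts with whitespace-separated words since I do not have a real tokenizer here:

```python
# Illustrative sketch only, NOT the actual OpenAI implementation: keep the
# newest messages whose approximate token total fits within a fixed budget,
# so older turns drop out of the context the model can "remember".

def fit_context(messages: list[str], budget: int) -> list[str]:
    """Return the newest messages whose approximate token count fits budget."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = len(msg.split())          # crude stand-in for a tokenizer
        if total + cost > budget:
            break                        # this and all older turns are lost
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

history = ["first long instruction " * 50, "second turn", "latest question"]
print(fit_context(history, budget=10))   # the oldest turn is dropped
```

If something like this is what happens, it would explain why instructions given early in a long session stop being honoured later on.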
Moreover, even when I include all of the previous information in my prompt, the AI output is still not what I requested. I have tried mentioning the information early, repeating it, or even putting it at the end of the prompt, but I continue to receive the same response.
Overall, I am seeking a better understanding of the ChatBot’s behavior and how to work around the infinite loop problem. Any advice or insight on these matters would be greatly appreciated.
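One workaround I have been considering (my own idea, not an official feature) is to keep the stable facts in a short “pinned” block and prepend it to every single prompt, so they can never age out of whatever context window the model uses:

```python
# Hypothetical "pinned context" pattern: restate the invariants at the top
# of every prompt instead of trusting the model to remember earlier turns.

PINNED = (
    "Context (always applies): keep the { } braces, drop the [ ] delimiters, "
    "write DE as Desktop Environment and WM as Window Manager."
)

def build_prompt(request: str) -> str:
    """Prepend the pinned context block to a per-turn request."""
    return f"{PINNED}\n\n{request}"

print(build_prompt("Now list the time-varying entries last."))
```

It is verbose, which is precisely the discomfort I describe above, but it at least makes each prompt self-contained.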
Based on the raw text you provided, here are the relevant pieces of information that I was able to extract, and the corresponding templates:
- Current NodeJS version: v20
- Current JavaScript version: EcmaScript2023
- Current TypeScript version: v5.1
- Fedora version: v38
- Kernel version: 6.2.12-200.fc37.x86_64
- Shell: bash 5.2.15
- Desktop environment: Plasma 5.27.4
- Window manager: KWin
- Window manager theme: Sunset-dark
- Locale: fr_CA.utf8
- Uptime: 23 mins
- CPU: Intel i9-10900X (20) @ 4.8GHz [45.0°C]
- Memory: 7.08GiB / 46.72GiB (15%)
- Primary GPU: NVIDIA TITAN Xp
- NVIDIA driver version: 530.30.02
- Resolutions: 1920x1080 @ 60.00Hz, 3840x2160 @
Note that all of the above templates use the format {Current [key element] version: [numeric or alphanumeric value]}, except for the Fedora version, which uses the format { [key element or descriptor]: [numeric value]}.
I am not expecting something impossible
I understand the AI is not perfect, but I do believe that OpenAI strives for perfection. I also understand that OpenAI will perhaps not want to make changes to the model I am using (the free version of ChatGPT is based on an older model, model=text-davinci-002-render-sha). But I hope everyone will agree that it is a problem.