I am trapped in a loop again PLEASE HELP ME

Dealing with Frustrating AI Prompting Issues again

An update about The Infinite Loop: How ChatGPT’s Latest Update Is Consuming My Time

Dear OpenAI community,

I’m reaching out to report a problem that I’m experiencing with the ChatGPT model. I want to emphasize that the example I’m about to give is just one instance of an ongoing issue that has been happening for over a week now; it is not an isolated incident. I’ve been using the ChatGPT Plus model for the past month, and I had never had this problem with the AI before.

The issue is that the AI seems to forget the changes I make in one prompt when I move on to the next. For example, I might ask the AI to fix A, then fix B, and then fix A again, but the AI keeps misunderstanding… I used to work iteratively with the beloved AI, and now that technique is impossible: even though I’ve already asked for those changes, the AI only updates the code in regard to the most recent request. This leads to an infinite loop that is frustrating and time-consuming to deal with.

I’m an experienced user and have tried various techniques to resolve this issue, but nothing seems to work. I’m reaching out to the community to see if anyone has experienced a similar problem and found a solution that worked for them. I want to stress that this is not a simple problem, and I’m not looking for basic solutions or tricks. I’m hoping that the community can keep an open mind and understand the difficulty of the situation before providing any advice. Remember that I am experiencing this situation in a broad set of different interactions and this problem is not specific to the example below…

Thank you for your time and support.

tl;dr

Jump directly to the Conclusion if the example situation below is too long to read.

Example situation

I am asking the AI to help me with refactoring my code…

I said (many, many times so far): please remove the new Promise (the Promise constructor)… The AI said something like:

Good catch! You're right, there's no need to wrap the resolve and reject callbacks in a Promise constructor when using async/await.

so I got this:

async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  const result = await redisClient.send_command('imgscout.del', [key, id.toString()]);
  return result.toString();
}

I had previously told it not to use return await (or const result = await _; return result;) unless we have to update/change/mutate the value (fair enough), but I am telling it that I do not need to do so here, as we already return a string value (so the return type will be Promise<string>).
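
(As an aside, for anyone unfamiliar with the convention I keep referring to, here is a minimal sketch of the distinction. The fetchValue helper is just a hypothetical placeholder; it is not part of my code or of the AI's output.)

async function fetchValue(): Promise<string> {
  return 'value';
}

// Returning the promise directly: the caller still receives Promise<string>,
// so there is no need to await when nothing is done with the value here.
async function passThrough(): Promise<string> {
  return fetchValue();
}

// `return await` only matters when the value is transformed first, or when a
// surrounding try/catch needs to observe a rejection inside this function.
async function withTryCatch(): Promise<string> {
  try {
    return await fetchValue();
  } catch (err) {
    throw new Error(`fetchValue failed: ${err}`);
  }
}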

So I got into a loop:
Understood. Here's an updated implementation that avoids using await and properly handles errors:

async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  return new Promise((resolve, reject) => {
    redisClient.send_command('imgscout.del', [key, id.toString()], (err, result) => {
      if (err) {
        reject(new Error(`Error deleting image with ID ${id}: ${err}`));
      } else {
        resolve(result.toString());
      }
    });
  });
}

… NO! There’s no need to wrap the resolve and reject callbacks in a Promise constructor when using async/await. Please do it again, I said…

The AI: Sure, here's the updated implementation without using new Promise:

async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  try {
    const result = await redisClient.send_command(
      'imgscout.del',
      [key, id.toString()]
    );
    return result.toString();
  } catch (err) {
    throw new Error(`Error deleting image: ${err}`);
  }
}

So I changed my prompt to make sure to spell out each and every thing (remember that this had been said many times in the recent conversation, and the AI had already agreed many times in the recent conversation):

there’s no need to wrap the resolve and reject callbacks in a Promise constructor when using async/await. please do again do not await before return and do not convert to string a value that will be a string anyway…

Got this:

/*...*/ {
  return new Promise((resolve, reject) => {
    redisClient.send_command(
      'imgscout.del',
      [key, id.toString()],
      (err, result) => {
        if (err) reject(err);
        else resolve(result);
      }
    );
  });
}

there’s no need to wrap the resolve and reject callbacks in a Promise constructor when using async/await

I apologize for the confusion. You are correct that there is no need to wrap the resolve and reject callbacks in a Promise constructor when using async/await. Here is the corrected implementation for deleteImage that removes the unnecessary Promise constructor and correctly handles the Redis response as a string:

/*...*/ 
try {
    const result = await redisClient.send_command('imgscout.del', [key, id.toString()]);
    return result.toString();
  } catch (err) {
    throw new Error(`Error deleting image: ${err}`);
  }

please do not update the return value of redisClient.send_command … is already a string do not await return

I apologize for the confusion. Here's an updated implementation:

/*...*/ 
return new Promise((resolve, reject) => {
    redisClient.send_command('imgscout.del', [key, id.toString()], (err, result) => {
      if (err) reject(err);
      else resolve(result);
    });
  });

thanks but also without using the promis constructor as mentioned earlier please follow all that was said in the recent chat…

Apologies for misunderstanding your previous message. Here is an updated implementation:

/*...*/ 
try {
    const result = await redisClient.send_command<string>(
      'imgscout.del',
      [key, id.toString()]
    );
    return result;
  } catch (err) {
    throw new Error(`Error deleting image: ${err}`);
  }

Then I was obviously in an infernal loop, because I said:

ok but please do not await return and please follow all that was mentioned in the previous conversations… and I got, again, an awaited return…

Then I tried to be clever and said it like this: 1) do not await before returning, 2) do not use the Promise constructor, 3) keep the async keyword because we know that the function will return a promise, and 4) do not give the TSDoc for this implementation as I already have it…

I guess the AI is not to be blamed, as it did this:

async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  return redisClient.send_command('imgscout.del', [key, id])
    .then(result => result.toString())
    .catch(err => {
      throw new Error(`Error deleting image: ${err}`);
    });
}

and so on and so forth…

Conclusion

To wrap up, I understand that it would have been much quicker and easier to write the function myself instead of spending so much time trying to get the AI to remember the previous steps in an iterative process.

async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  return redisClient.send_command('imgscout.del', [key, id]);
}
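
For completeness, the idea is that any rejection from send_command then propagates untouched to the caller, which is where I want errors handled. Here is a rough sketch of what I mean by that; the caller below is purely illustrative and not code from my project:

async function removeThumbnail(client: RedisClient): Promise<void> {
  try {
    // deleteImage simply forwards the Promise from send_command,
    // so any rejection surfaces here, at the call site.
    const deleted = await deleteImage(client, 'thumbnails', 42);
    console.log(`Deleted: ${deleted}`);
  } catch (err) {
    console.error(`Error deleting image: ${err}`);
  }
}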

However, my main goal is to develop my prompting skills, and I believe that it’s essential to be able to communicate with the AI effectively.

I’ve been using the AI for the past three months, and I’ve always been able to update my code iteratively. This recent issue is frustrating and time-consuming. While the Playground doesn’t have the same problem, it’s not suitable for my workflow and use case. So, I’m hoping that the OpenAI community can help me find a solution to this problem.

Thank you for your understanding and support. I appreciate any advice or solutions that the community can provide. I will continue to work on my prompting techniques and try to find a way to get the AI to remember the previous steps. I’m confident that with the help of the community, I’ll be able to overcome this issue and continue to use the AI to its full potential.


It’s hard to provide solutions when you say “The solution can’t be this, or this”.
Sometimes basic is better.

I understand it’s more about conquering the mountain rather than driving around it.

Usually any sort of multi-step process requires explaining any reasoning as a temporary memory. It’s hard to consider A → B when the magical process of transition is lost.

So I tried this:

Using the following steps, refactor and optimize the code.
1. Explain why a promise isn't required
2. Using the previous explanation, optimize the code, and wrap it in a codeblock
###
async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  const result = await redisClient.send_command('imgscout.del', [key, id.toString()]);
  return result.toString();
}
###
1. Explain why a promise isn't required

To which it responded with what you’re saying:

In this code, a promise is not required because the function is already marked as async. This means [ ... ]

I then led it to the 2nd step (I think with Davinci it would be better), to which it replied:

2. Using the previous explanation, optimize the code, and wrap it in a codeblock
###
Here's the optimized code wrapped in a code block:

async function deleteImage(redisClient: RedisClient, key: string, id: number): string {
  const result = await redisClient.send_command('imgscout.del', [key, id.toString()]);
  return result.toString();
}
In this optimized code, the async keyword is used to mark the function as asynchronous, allowing the use of the await keyword inside the function body to wait for the Redis command to complete. By doing so, the function returns the result of the Redis command as a string without the need to wrap it in a Promise object.

Admittedly I haven’t spent any time confirming if this is correct. Hopefully it helps?
I’ve noticed that in most cases, it’s very beneficial to kind of filter a response through the same engine. There’s definitely a point where it reaches some sort of fractal pattern state, though.
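
As a rough illustration of what I mean by filtering a response through the same engine, something like this two-pass call sketched against the openai Node SDK's chat endpoint could work (the prompts, model choice, and helper name are hypothetical placeholders, not something I've validated):

import { Configuration, OpenAIApi } from 'openai';

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Pass 1: ask for the refactor. Pass 2: feed the draft back into the same
// model and ask it to check itself against the original constraints.
async function refactorWithReview(code: string): Promise<string> {
  const draft = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: `Refactor this without a Promise constructor:\n${code}` },
    ],
  });
  const draftText = draft.data.choices[0].message?.content ?? '';

  const review = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'user',
        content: `Check this answer against the constraints (no Promise constructor, no await before return) and correct it if needed:\n${draftText}`,
      },
    ],
  });
  return review.data.choices[0].message?.content ?? '';
}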


You must all understand that this problem is new; it was not as time-consuming before, and it is now a real pain… Please look at the two examples below to understand the struggle, and please remember that it was not like this before the outage at the beginning of this week (27 Feb 2023). I have been using ChatGPT for 2 months and ChatGPT Plus for one month, and the problem I am trying to describe is something →NEW←!!!

I understand what you are saying, @RonaldGRuckus… I should ask the AI to come up with an explanation of what I need it to remember, so that it is saying it in its own words… and then it will be more strongly inscribed in its memory somehow… I think it is a fair explanation; I will try it in my ongoing conversation and see what it does…

[a few minutes later]

async function deleteImage(
  redisClient: RedisClient,
  key: string,
  id: number
): Promise<string> {
  return redisClient.send_command<string>('imgscout.del', [key, id.toString()]);
}

This was easy (but said with a hint of irony):

  1. please explain the benefits of not using a Promise constructor and why we will not use this approach going forward…
    [AI response omitted]
  2. thanks in the same logic please explain why we do not need to await the result and why we can return the value directly…
    [AI response omitted]
  3. according to what you said so far please explain when it is important to use the try catch block and why we can avoid using in some context despite being in an async function ?
    [AI response omitted]
  4. and in retrospective according to all we said so far why should we avoid to use the dot.then and dot.catch when inside an async function and what parallels can we infer in comparaison with what we said so far
    [AI response omitted]
  5. ok then can you please create a new implementation of the deleteImage that would have its async annotation as per our agreed upon convention but that would follow all that you just explained to me please
    [AI response omitted]
  6. since we already know that the return value of redisClient.send_command( 'imgscout.del', [key, id.toString()] ); is a Promise what should we do to respect what you previously told me
    [AI response omitted]
  7. can we make sure the error if any is handled by the caller of that function instead of inside that function
    [AI response omitted]
  8. please make sure to use best practices and give me the function again
    [AI response omitted]
  9. do you remember that we said the redisClient.send_command will return a Promise and that the deleteImage function should return the same value and type Promise<string> what can we do to avoid a potential error to arise inside the deleteImage function and make sure it is passed directly to the caller before any error occur ???
    [AI response omitted]

But I asked the AI for help to come up with the placeholder « [AI response omitted] », and there you have another example:

  1. how can I say something like AI Reply truncated for simplicity or something like that
    You can say "AI reply shortened for clarity" or "AI response abbreviated for simplicity". These phrases convey the message that the AI's reply has been shortened or abbreviated to make it easier to understand or to improve readability.
  2. list 10 possible
    [AI response abbreviated for simplicity]
  3. but it is completely removed please 10 again
    [AI reply deleted for conciseness]
  4. Give me an iconic example idiomatic of english redaction style…

One iconic example of English redaction style is the opening sentence of Charles Dickens' "A Tale of Two Cities": "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only." This sentence uses parallel structure to create a balanced and memorable phrase.

  1. inspired from above please give me an iconic example idiomatic of english redaction style

Sure! Here’s an example that uses a common English idiom:

“Sorry, folks, we had to give the AI response the chop for brevity’s sake. Ain’t nobody got time for a long-winded explanation!”

This example incorporates the idiom “give the chop” to convey the idea of removing something, while also using a colloquial and humorous tone to make the statement more engaging.

  1. AS a short placeholder I mean

If you're looking for a placeholder phrase to indicate that an AI response has been removed, you could use:

  • “AI response omitted”
  • “AI reply removed”
  • “AI answer deleted”
  • “Response truncated”
  • “Reply condensed”
  • “Answer shortened”
  • “Content removed”
  • “Text cut”
  • “Information taken out”
  • “Details excluded”

Have you tried the same using Davinci?

I believe (complete speculation) that ChatGPT is not purposed to receive instructions. It’s more of just continuing a natural conversation. I also believe we’ll see this divergence increase more and more.

So far the Playground is not suited to my workflow… I have tried it again after this struggle, and it seems to be slightly better in regard to that specific issue; it is nonetheless not ideal for my workflow… I even cancelled my GitHub Copilot subscription… It was like pure magic to use ChatGPT: it was easy to say something, and the AI would take anything I said in the context of what had been said before…

I have been using the AI for 3 months… so I think something broke (for me) since the last update, because I always have to explain things again and again… Maybe the new version (the one since the outage) is improved and better overall… maybe my struggles are to be offset by some improvements… So far I have had one successful session with the AI, but as I am using it every day to help me with my code, I must admit that I have to spend a ton of energy explaining things to the AI in order to get help.

I would love to say: “This thing is unstable, I will not do business with OpenAI ever again”… but after 3 months riding in a car (and one week taking the bus, for the sake of the analogy), I do not want to start walking again…

The one session that was fluid and easy was on a topic that the AI had been struggling with before the shadow update; magically, it was able to help me for a super long session without any problems…

I would love OpenAI to explain what kind of update occurred on Monday; it seems to be a phantom update, because except for an outage I do not see any mention of anything anywhere… Later that week they came out with an API model dated Mar 01, so I imagine I would have been happy to also have a version dated Feb 26 or such… but, well, it is not the case…

I completely agree; there has been some magic lost from previous iterations of cGPT.

Perhaps we’re in the awkward intermediate stage. Everything is moving so fast, it’s hard to really know what the optimal workflow looks like right now.

Wishful thinking: once fine-tuning is available, perhaps they will also release previous iterations? I know many people would be very happy if that were to happen.


I understand that we are working with an API that is marked, in a super explicit manner, as being in development regardless… I am now used to it; I was so addicted to it that I paid $20 less than 1 hour after receiving the message that it was available in my region… And now I expect, as a valuable customer, to be kept in the loop and receive some updates. If they have a plan that does not include my needs, or they want to do whatever they want, they should at least inform people in general, or reply to requests via email, here in the forum, or on the Discord server…

It is very opaque, and they have “Open” in their name… As I said previously, they can keep their secret sauce secret and not disclose parts of the implementation, but it is obvious to me that they should be able to be more open and transparent with their users… Both paying customers and others, but especially since I pay, I would expect at least a reply to my emails (or even an automated reply, which I did not receive either)… They sold an addictive product; they must accept the fact that if they want to make breaking changes, they should tell people about the new expectations…

I think the people who work in the communications department are probably not the same people as those who work on the AIs and APIs; therefore they should be able to reply, if not individually to each person, at least with a memo saying that they are overwhelmed but working on a solution… I do not see a good excuse for not being able to say that they are too busy to reply… it takes less than half an hour to do…

Easier Example

I say:

the config file is else where please link me to it look: /projects/phash-scout/jest.config.js and /projects/phash-scout/src/services/redis/image-scout/imageScout.test.ts

ChatGPT:

Replies something…

I say:

I need to pass the config in the jest please

ChatGPT (ignoring the path provided in the previous message):

You can pass Jest configuration options on the command line using the --config or -c flag. For example, to specify a custom Jest configuration file, you can run the following command:

css¹
jest --config /path/to/jest.config.js

¹ This is not CSS, just so you know…
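
What I actually wanted was simply the same command pointed at the paths I had just given it, something like this (I am writing it here by hand; this is not something ChatGPT produced):

jest --config /projects/phash-scout/jest.config.js /projects/phash-scout/src/services/redis/image-scout/imageScout.test.ts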

This is not the same behaviour as I was used to, and I need to repeat everything that was said so far; it is annoying…

I think I have an idea… Look:

In fact, no, it’s not working. I am exhausted and frustrated, and I don’t want to admit that I feel sad. I have a new example…

I was asking a series of questions about the OpenAI API and then gave it a piece of code; there was a token assigned to a variable named organization, so I asked the AI what it was, and the AI said this:

An organization refers to a group or entity, usually made up of people, that has a specific purpose or goal. It can be a for-profit company, non-profit organization, government agency, educational institution, or any other group that has a defined structure and operates within a particular context or environment. Organizations typically have some form of hierarchy or leadership structure, with individuals or teams responsible for different areas of operations or decision-making. They may also have various policies, procedures, and regulations that govern how they operate and interact with their stakeholders, such as customers, employees, partners, and the general public.

Apparently I am the only one who is experiencing this problem, and there is no way to mention it to OpenAI. I am tired, frustrated, and sad that it seems like it will not be possible for me to work with the AI, while everyone else is using it to build multi-million-dollar businesses, write entire books, or discover a new cure for cancer…

I am apparently the only one who has the impression that it is just some kind of powerful toy…

I don’t understand how everyone is getting such amazing results and I am struggling with repeating myself in an infinite loop…

I think the problem arises when you come back to an existing conversation in the ChatGPT interface… It doesn’t seem to reread the whole previous conversation… Maybe it is just a bug that should be fixed.

I’ve been using the ChatBot for three months now and everything was going well until just a week ago.

I find myself in a loop where I constantly ask the ChatBot to do something and then add something else, but the ChatBot removes the first thing and the loop continues. If I ask for both this and that, it removes a third thing and I end up in a repeating cycle.

It’s likely that the issue is with the ChatBot, not me, since everything was working smoothly for the past three months.

(I asked ChatGPT to correct this message and split it into paragraphs; then I asked it to also explain that it had been 3 months without problems, and I had to ask it to split the text into paragraphs again, which is why I am saying this is annoying)…

I asked it to split into paragraphs the prompt just before the last one, and then you can see what I mean; it is always like this, all the time, for me…

I’ve noticed the same. It has lost a lot of its ability to retain context. There have been numerous times when cGPT will explain something to me, contradicting my initial query only 1-2 paragraphs in.

For that I’ve moved any instructional queries to Davinci and I’m playing around with “how to determine which model suits the situation better”

Is there any way to talk to an OpenAI specialist? I believe that anyone reading this will throw themselves onto the floor and roll on the ground laughing out loud at a question like this :joy::rofl::joy: and people will be like “oh, poor kid, he is so naive that he is asking for a way to get in touch with OpenAI”… But it’s a very serious question…

Did you manage to find a solution or a specific prompt to get around this?

I am getting similar loops; it basically gives solution A and B and back to A, with the same error.

I try to break down the solution, or even split it into many mini steps instead, which doesn’t help.

I find this problem most prevalent in getting it to load / run queries, but CSS changes are quite simple.


As you can see, I never got any reply, except for one request that has been closed unresolved…

So my comment was not meant to be sarcastic (or at least not in a mean way); it was meant to be humorous and slightly ironic.

But I will suggest you look at @AlexDeM’s post in another of my threads; it offers some solutions for some cases. If you have been using the AI for a long period of time, many suggestions will sound familiar to you, but it is a quite complete reply that offers many insights for both newcomers and experienced, advanced users:

Sadly, it seems that even the most clever prompt will not always get the AI to comply, and it gets quite annoying to get trapped in a loop.

My take on that is to go back and edit my previous message instead of giving more instructions, but even that doesn’t seem to solve all the problems, as you can see in another of my threads; this time, instead of complaining, I am directly asking the community for help thinking of a way to prompt the AI.

Look at the screen capture below my initial post in that thread to see how badly the AI misunderstands while being so perfectly certain it is not making any mistakes; it is either hilarious or frustrating, and I will let you all be the judge of that.