I wish I could say this is the only time. I’m a little out of it and have a headache right now, but I just pulled up a conversation from a little while back where I had asked it to modify something. I can’t say I even remember the details, but looking back, I was trying to get it to change something and it just kept outputting the same response over and over without actually modifying the code.
It literally kept providing the same code again and again, yet each time it acted as if it were giving me modified code.
Once again, this is a bugged output, and since I more or less gave up, I would have to do some digging to provide specific examples I can give context for. In that conversation, it kept producing the same output while ignoring my instructions. It would acknowledge the problem, then output the same code yet again, with varying explanations of what it had changed and various ways of apologizing. Looking through the thread, it did this seven times in a row before I gave up and moved on to a new conversation.
The instructions were pretty clear, but now I am going to admit something.
Looking back at some of these, I will tell you there is a good reason why I don’t want to share some of them. In some of these, after I’ve gone over and attempted many prompts many different ways, I will admit that I sometimes vent my frustration in the chats. By that point, I have given up on the notion of it becoming helpful and am only venting. Under those circumstances, my language in the chat has gotten… let’s just say colorful.
Regardless, I have tried longer and more detailed prompts as well as simplified ones. I’ve had ChatGPT, Bard, and other AIs write prompts for the purposes I’m requesting, and I’ve tried writing them as simplified logical instructions.
The end result is typically far more consistent than the prompts or the general outline of the request would suggest. If the request is short, it’s hit-and-miss. Sometimes the information it gives is just awful, like when my daughter tried to use it for help with her JavaScript classwork: every answer it gave her was wrong. Some of my conversations are longer, consisting of many simple follow-up responses to fine-tune the answer; some are longer and more complex. I almost always eventually find the right combination of prompts and get it to break down and provide what I am asking for, so eventually I do find a solution that works.
The problem is that when I take that working solution and attempt to duplicate it, it fails to produce the same results, and I find myself back at square one, sometimes taking an absurd number of prompts across multiple chats to get the result I was originally after. I keep a lot of prompt notes and keep up with modern prompting practices, especially when there are model changes.
I find jailbreaking AI to be relatively easy using logic and reasoning. I’ve done a lot of bias checking and testing, ranging from controversial subjects to ethics, and I generally don’t have problems getting the AI to respond the way I want it to. It’s just the issues with abbreviating code that drive me nuts. I attribute these issues to ChatGPT’s fine-tuning to conserve token usage and to reduce the processing time spent generating responses. They often cause me to hit the conversation token threshold before I can get the responses I am after, and the model’s inability to keep up with overly long prompts limits its ability to maintain an accurate or fluid response to the conversation.
Given that, I think an increased token limit would largely solve the issue I’m facing, as I believe that is one of the walls I’m hitting, made worse by ChatGPT’s tendency to conserve tokens. This factor was not there in earlier models of ChatGPT-4: they did not ignore requests for the sake of preserving tokens, and I believe that change is what broke my ability to use ChatGPT as I previously did. Nothing more, nothing less.
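For anyone who wants to sanity-check the same wall, here is a minimal sketch of how you might estimate token usage before pasting a script into a chat, assuming the `tiktoken` library; the file name and the 8,192-token limit are assumptions (the real limit depends on the model):

```python
# Minimal sketch for estimating token usage before pasting a script
# into a chat. Assumes the `tiktoken` library; the 8,192-token limit
# is an assumption and depends on the actual model.
import tiktoken

ASSUMED_CONTEXT_LIMIT = 8192  # hypothetical GPT-4-class window

def report_token_usage(prompt: str, script: str) -> None:
    enc = tiktoken.encoding_for_model("gpt-4")
    used = len(enc.encode(prompt)) + len(enc.encode(script))
    print(f"~{used} tokens in, ~{ASSUMED_CONTEXT_LIMIT - used} left "
          f"for the response and the rest of the conversation")

# Usage (hypothetical file name):
with open("my_script.py") as f:
    report_token_usage("Refactor these functions into modules.", f.read())
```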
I hope eventually there will be a version of ChatGPT Enterprise that, regardless of price structure, is accessible to those of us who do not have large corporations and endless bank accounts.
Note: Another reason I don’t necessarily like adding singular examples to illustrate the problem is that they tend to become the focus, as if the problem only happens under those exact conditions. In reality, the scenarios that have produced this result are far more numerous than I can ever share. I have made countless changes and adjustments: what I use it for, how I attempt to accomplish it, how I structure my prompts and my conversations as a whole, and much more. I’ve encountered these problems across a range of scenarios so wide that it’s actually hard to narrow it down to one that illustrates the issue or does the problem justice.
Here’s an example of a prompt where I was obviously trying to compensate for previous failures and frustrations. This was a request to move some functions from a single-script program into a modular framework.
In the AI’s response, it acknowledged exactly what I wanted it to do, and had it followed the instructions it claimed to understand, there would have been no issues.
Prompt (not how all my prompts are structured; this was a Hail Mary):
Here is the ChatGPT response:
Now, to head off too many objections: this sample worked in this exact scenario. I received a response that was 58 lines of code and did exactly what I requested. The script I provided for context was 195 lines of code.
With nearly this same identical format, cut and paste, same instruction style, but with an output I knew would be larger (in the ~180-line range), the abbreviations and omissions were immediate, and ChatGPT could not produce a response without omitting code. I have tried this both with and without the acknowledgment-response request. Sometimes I’ll ask for an acknowledgment first to make sure ChatGPT understands my prompt and takes from my request the same thing I am asking; then, if I have a context-length issue, I’ll include the code with the request itself and skip the acknowledgment. Even when I know ChatGPT understands what I am asking it to do, that is no guarantee it will actually do it.
In the example where I included multiple screenshots of the same code output, that was an attempt to break the request into smaller prompts, and as you can see, even with a small response it could not complete the request before it got amnesia and lost track of what it was doing.
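For anyone who wants to try the same chunking approach through the API rather than the chat UI, here’s a minimal sketch, assuming the official `openai` Python client; the model name and the naive blank-line split are assumptions, not what I actually ran:

```python
# Minimal sketch of feeding a long script to the model in pieces,
# assuming the official `openai` Python client. Splitting on blank
# lines is a naive stand-in for splitting at function boundaries.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def refactor_in_chunks(script: str, instruction: str) -> str:
    chunks = script.split("\n\n")
    messages = [{"role": "system", "content": instruction}]
    rewritten = []
    for chunk in chunks:
        messages.append({"role": "user", "content": chunk})
        resp = client.chat.completions.create(model="gpt-4",
                                              messages=messages)
        reply = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        rewritten.append(reply)
    return "\n\n".join(rewritten)
```

Note the history still accumulates across chunks, so this only delays the same context-length wall; it doesn’t remove it.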
I will make a commitment: as I start using ChatGPT again, I will omit my colorful language and attempt to create conversations that I can share to demonstrate these issues.
I thank those who have responded for your attempts to help me solve this issue. I am not above finding new ways to accomplish my tasks, even when I am retrying old tasks that worked before. I simply want to figure out how to get the code outputs I need without causing chat amnesia because the context gets too long. Traditionally, the best way to do this was to use the most accurate and specific prompts possible so that you did not have to go back and forth, with the extra conversational garbage, apologies, and explanations clogging up the chat.
I’m also going to play with roles and context for chats.
If any of you out there who use ChatGPT to generate code have good roles that seem to help, please share if you’re willing.
I think in my API setup I have the role set to something like: “You are a full stack developer that only responds in code. Any context or instructions will only appear within that code as comments.”
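For reference, here’s a minimal sketch of how that role gets wired up with the official `openai` Python client; the model name is an assumption, and the user message is just a placeholder:

```python
# Minimal sketch of setting a system role with the official `openai`
# Python client. The role text is the one quoted above; the model
# name and the user message are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_ROLE = ("You are a full stack developer that only responds in code. "
               "Any context or instructions will only appear within that "
               "code as comments.")

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_ROLE},
        {"role": "user", "content": "Refactor this script into modules: ..."},
    ],
)
print(resp.choices[0].message.content)
```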
I haven’t played with the roles in ChatGPT much since, as I said, I largely stopped using it; it was taking me far longer to work through ChatGPT than to just hand-code everything myself. Also, my recent health issues left me a bit unmotivated to deal with it.