I think you're missing or not understanding the context.
This isn't code that has already been produced or discussed at all.
An example of what I'm talking about would be a single response in a single new conversation:
1. I submit code that I want ChatGPT to alter, along with instructions for the changes I want made.
2. In my original prompt, I request that ChatGPT not abbreviate or omit any code from the response.
3. ChatGPT responds with more than half the code omitted, replaced with comments such as "Omitted for brevity" or notes claiming the omitted part wasn't changed.
4. When testing the code, it doesn't work.
5a. I reply with the modified code and explain the errors. ChatGPT responds by telling me code is missing, referencing code that was either changed in the part it said was unchanged or code that never existed in that part at all.
5b. I respond with a copy/paste of one of the comments and something like "You abbreviated when I asked you not to; here's an example of what you abbreviated. Please respond with the entire code without any omissions or abbreviations." That is followed by some form of apology and something like "here is the entire code without any abbreviations", followed by it literally returning the exact same code with the exact same abbreviations it originally responded with.
As a general rule, with longer prompts like this, you can't really have much of a conversation about it. After that one prompt, things get worse with each subsequent response because the conversation becomes too long for ChatGPT to track once it exceeds the token limits.
The actual problem is that no matter what I say, it will NOT listen to requests not to abbreviate code. It prioritizes saving tokens over actually outputting the requested response.
I've been through scenarios that have included these kinds of responses, changed prompts, changed the formatting, etc., in literally more ways than I can count. If you break the requests up, the quality gets worse; if you don't, it abbreviates, and those abbreviations are filled with hallucinations that make the outputs completely useless.
I've even gone into one chat and asked ChatGPT to create the prompts for ChatGPT, or used Bard or other AIs to create the prompts best suited to the task, and I can assure you, the prompts ChatGPT, Bard, or any other AI comes up with in the role of a prompt engineer aren't vastly different from my own, yet even they cannot create prompts that stop the problems I am constantly running into.
What I've tried above are just small examples; they are not isolated problems. I could sit here for weeks going over every time an abbreviation has resulted in useless output and frustration, along with more circumstances that have produced those abbreviated responses than I can even name.
So it's not because of the UI in my code, or because it has already output the code before (it hasn't), and these circumstances are so varied that they encompass many variations in many scripts, across multiple programming languages and multiple types of requests.
Even if I just request that it write something from scratch, I get different forms of abbreviation, such as "Your code for [whatever it tells you is needed] here" or "logic for [whatever part applies to the code] here".
Occasionally, often after MANY attempts, I can get it to produce the output I'm looking for. However, if I take the exact same prompt that worked and put it in a new chat, it does not reproduce the same results and, like clockwork, will begin to abbreviate again.
It doesn't seem to matter how I word my prompt either. Whether I put "Please include 100% of the code" or requests for "No abbreviations or omissions" or whatever, it ignores those requests.
When I get successful results, I'll give a thumbs up and make sure I provide feedback that this was a good result. Then I copy/paste and attempt to use that prompt again under slightly different circumstances, and nope, back to the same original problem.
It doesn't matter whether it is one prompt with one response or whether I continue it as smaller requests within one conversation; it's all the same.
If I make smaller requests, such as single functions, it hallucinates the previous responses and provides code based on made-up gibberish. If I use a single prompt with a single response, it omits parts of the response and hallucinates in further prompts about the parts that were omitted. If I request AI-generated code and the code would make for a longer reply, it simply omits parts of the code and basically tells you to do it yourself, refusing to output the requested code in the first prompt, then forgetting everything it did provide when it supplies the missing parts, or responding based on made-up stuff that doesn't exist.
I can't sit here and go over every scenario in which I have encountered this, other than to say that the farther it gets over the 100-lines-of-code threshold, the more the prioritization of saving tokens causes it to disregard the actual prompt, particularly in the 180+ line range, where it will repeatedly apologize and repeat the same mistakes over and over again.
If the goal is to reduce token usage, it's an absolute failure when it can take MANY conversations, some of which include multiple prompts going back and forth, to produce what should have been included in the original response. If it takes me 50k tokens to produce what should have fit in one response, but couldn't because a few blocks of code are always omitted, then they haven't saved anything; they've used more tokens to produce lower-quality responses.
Once again, this literally NEVER happened to me, not even one single time before that update. Since then, it has been a 100% hassle, and that is the reason I'm not really even using ChatGPT anymore. ChatGPT shed a LOT of users after that update, like a LOT, and a LOT of people had similar complaints about the quality of responses being degraded. Those people were almost universally gaslit by people telling them it was their prompting that was the problem.
When you put in months of time learning to create quality, effective prompts that produce results that consistently work, and then an update with zero transparency turns your previously working prompts into useless garbage that you can't overcome, that's not a prompting issue; that's a crappy priorities issue.
OpenAI has ZERO problems still producing quality responses to users on the new Business Tier ChatGPT, but for some reason, they feel ZERO obligation to maintain any sort of quality for people who are paying money to use ChatGPT Plus.
Yes, I understand that for many, ChatGPT Plus still provides good-enough responses, or they don't encounter these issues. However, prompts such as "cut your response in half and divide it across two responses so that you do not exceed the token limit of your response" no longer work at all. I can't get it to divide its response, and I can't get it to respond with a solid block of code as requested.
Since that update, I have not successfully found ANY way to get it to stop degrading response quality by trying to conserve tokens. Meanwhile, my friend using the business class version can literally get 300+ lines of code modifications or responses with no issues at all and NEVER experiences what I began experiencing literally every day.
So this isn't prompt-related, it's not conversation-related, and it's not me requesting things that ChatGPT can't handle.
It is 100% that OpenAI has changed ChatGPT to prioritize token usage in responses, which has made it incapable of producing the exact responses that are the reason I started paying for the service. Originally, I thought, "Well, this downgrade sucks and a lot of people hate it, so hopefully they will fix it once they've lost enough people over it," or that maybe once their available resources increased and they no longer had to worry about strained usage, they could re-allocate those resources and allow the higher-quality outputs they may have struggled with when they were having server outages or long queue times because they couldn't handle the load... but no, that's not how it went down. Instead, every time someone posts about it, they are mobbed by fanboys telling them it's their fault, enabling OpenAI to feel safe in their crappy decisions, and instead of ever fixing it, they just allocate those resources to provide better-quality, privacy-respecting ChatGPT responses to business-class customers and leave the downgraded quality to the service we pay for.
The last prompt I was trying to get working, I could not get it to stop abbreviating the responses, so I asked my friend to see if he could get the output I was trying for on his account. It worked perfectly, with no issues, and he did NOT get the abbreviations. His is the 32k context, though. Meanwhile, I know ChatGPT Plus is supposed to be 4k, but I've actually used a token counter, and none of the outputs I'm requesting are even close to that. If I had to count failures and successes, I'd say that for coding, ChatGPT Plus seems like ChatGPT 1.5k, not 4k. The perfect irony is that no matter how hard I try, it almost always includes a bunch of extra useless gibberish in those responses. So it will exclude 20 lines of code from the response, but give me 3 paragraphs of useless information I told it not to include.
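For anyone wondering how I'm measuring this, here's a rough sketch of the kind of token count I run before blaming context limits. This assumes OpenAI's tiktoken library; the file names are just placeholders for whatever prompt and code you're pasting in, and the encoding is an approximation of what the chat models use:

```python
# Rough check of how many tokens a prompt and the expected output actually use.
# Assumes `pip install tiktoken`; "prompt.txt" and "expected_output.txt" are
# placeholder file names, and cl100k_base approximates the GPT-3.5/GPT-4 tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("prompt.txt", encoding="utf-8") as f:
    prompt_tokens = len(enc.encode(f.read()))

with open("expected_output.txt", encoding="utf-8") as f:
    output_tokens = len(enc.encode(f.read()))

print(f"prompt tokens:          {prompt_tokens}")
print(f"expected output tokens: {output_tokens}")
print(f"total:                  {prompt_tokens + output_tokens}")
# Compare the total against the advertised context (4k vs. 32k) to see how far
# below the limit the request actually sits.
```

Every time I've run a check like this, the totals came in well under the advertised window, which is why I don't buy that the omissions are a context-limit problem.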