I think you’re missing or not understanding the context.
This isn’t code that has already been produced or discussed at all.
An example of what I’m talking about would be a single response in a single new conversation:

1. I submit code that I want ChatGPT to alter, along with instructions for the changes.
2. In my original prompt, I request that ChatGPT not abbreviate or omit any code from the response.
3. ChatGPT responds with more than half the code omitted, replaced with comments such as “Omitted for brevity” or notes that the omitted part wasn’t changed.
4. When testing the code, it doesn’t work.
5a. I reply with the modified code, explaining the errors. ChatGPT responds by pointing out code that is missing and referencing code that was either changed in the part it claimed was unchanged, or that never existed at all.
5b. I respond with a copy/paste of one of the offending comments and something like “You abbreviated when I asked you not to; here’s an example of what you abbreviated. Please respond with the entire code without any omissions or abbreviations.” This is followed by some form of apology and something like “here is the entire code without any abbreviations,” followed by it literally returning the exact same code with the exact same abbreviations it originally responded with.
As a general rule, with longer prompts like this, you can’t really have much of a conversation about it. After the first prompt, subsequent responses get worse because the conversation becomes too long for ChatGPT to track; it exceeds the token limits.
The actual problem is that no matter what I say, it will NOT listen to requests not to abbreviate code. It prioritizes saving tokens over actually outputting the requested response.
I’ve been through scenarios that have included these kinds of responses, changed prompts, changed formatting, etc. in literally more ways than I can count. If you break the prompts up, the quality gets worse; if you don’t, it abbreviates, and those abbreviations are filled with hallucinations that make the outputs completely useless.
I’ve even gone into one chat and asked ChatGPT to create the prompts for ChatGPT, or used Bard or other AIs to create the prompts best suited for the task. I can assure you, the prompts ChatGPT, Bard, or any other AI comes up with in the role of a prompt engineer aren’t vastly different from my own, yet even they cannot create prompts that stop the problems I am constantly running into.
What I’ve tried above are just small examples; they are not isolated problems. I could sit here for weeks going over every time an abbreviation has resulted in useless output and frustration, as well as more circumstances that have produced those abbreviated responses than I can even name.
So it’s not because of the UI code I submit or because it has already been output before, because it hasn’t. These circumstances are so varied that they span many scripts, multiple programming languages, and multiple types of requests.
Even if I just request that it write something from scratch, I get different forms of abbreviation, such as “Your code for [whatever it tells you is needed] here” or “logic for [whatever part applies to the code] here”.
Occasionally, often after MANY attempts, I can get it to produce the output I’m looking for. However, if I take the exact same prompt that worked and put it in a new chat, it does not reproduce the same results and, like clockwork, begins to abbreviate again.
It doesn’t seem to matter how I word my prompt either. Whether I put “Please include 100% of the code” or requests for “No abbreviations or omissions” or whatever, it ignores those requests.
When I get successful results, I’ll thumbs-up and make sure I provide feedback that this was a good result. Then I can copy/paste and attempt to use that prompt again under slightly different circumstances, and nope, back to the same original problem.
It makes no difference whether it is one prompt with one response or whether I break it into smaller requests within one conversation; it’s all the same.
If I make smaller requests, such as single functions, it hallucinates the previous responses and provides code based on made-up gibberish. If I use a single prompt with a single response, it omits parts of the response and then hallucinates in later prompts about the parts that were omitted. If I request AI-generated code and the full code would make for a long reply, it simply omits parts of the code and basically tells you to do it yourself, refusing to output the requested code in the first prompt, then forgetting everything it did provide when it supplies the missing parts, or responding based on made-up stuff that doesn’t exist.
I can’t sit here and go over every scenario in which I have encountered this, but once the code gets past roughly the 100-line threshold, the prioritization of saving tokens causes it to disregard the actual prompt, particularly in the 180+ line range, where it will repeatedly apologize and repeat the same mistakes over and over again.
If the goal is to reduce token usage, it’s an absolute failure: it can take MANY conversations, some with multiple prompts going back and forth, to produce what should have been included in the original response. If it takes me 50k tokens to produce what should have fit in one response, but couldn’t because a few blocks of code are always omitted, then they haven’t saved anything; they’ve used more tokens to produce lower-quality responses.
Once again, this literally NEVER happened to me, not even one single time, before that update. Since then, it has been a 100% hassle, and that is the reason I’m not really even using ChatGPT anymore. ChatGPT shed a LOT of users after that update, like a LOT, and a LOT of people had similar complaints about degraded response quality. Those people were almost universally gaslit by others telling them they simply weren’t prompting correctly.
When you put in months of time learning to create quality, effective prompts that produce consistent results, and then an update with zero transparency turns your previously working prompts into useless garbage you can’t overcome, that’s not a prompting issue; that’s a crappy-priorities issue.
OpenAI has ZERO problems still producing quality responses to users on the new Business Tier ChatGPT, but for some reason, they feel ZERO obligation to maintain any sort of quality for people who are paying money to use ChatGPT Plus.
Yes, I understand that for many, ChatGPT Plus still provides good enough responses, or they don’t encounter these issues. However, prompts such as “cut your response in half and divide it across two responses so that you do not exceed the token limit of your response” no longer work at all. I can’t get it to divide its response, and I can’t get it to respond with a solid block of code as requested.
Since that update, I have not successfully found ANY way to get it to stop degrading response quality by trying to conserve tokens. Meanwhile, my friend using the business class version can literally get 300+ lines of code modifications or responses with no issues at all and NEVER experiences what I began experiencing literally every day.
So this isn’t prompt-related, it’s not conversation-related, and it’s not me requesting things that this ChatGPT can’t handle.
It is 100% that OpenAI has changed ChatGPT to prioritize token usage in responses, which has made it incapable of producing the exact responses that are the reason I started paying for the service. Originally, I thought, “Well, this downgrade sucks and a lot of people hate it, so hopefully they will fix it once they’ve lost enough people over it.” Or maybe, once their available resources increased and they no longer had to worry about strained usage, they could re-allocate those resources and allow the higher-quality outputs they may have struggled with back when they were having server outages or long queue times. But no, that’s not how it went down. Instead, every time someone posts about it, they are mobbed by fanboys telling them it’s their fault, enabling OpenAI to feel safe in their crappy decisions. And instead of ever fixing it, they just allocate those resources to provide better-quality, privacy-respecting ChatGPT responses to business-class customers and leave the downgraded quality to the service we pay for.
The last prompt I was trying to get to work, I could not get it to stop abbreviating the responses, so I asked my friend to see if he could get the output on his account. It worked perfectly, with no issues and NO abbreviations. His is the 32k context, though. Meanwhile, I know ChatGPT Plus is supposed to be 4k, but I’ve actually used a token counter, and none of the outputs I’m requesting are even close to that. If I had to tally failures and successes, I’d say for coding, ChatGPT Plus seems like ChatGPT 1.5k, not 4k. The perfect irony is that, no matter how hard I try, it almost always includes a bunch of extra useless gibberish in those responses. So it will exclude 20 lines of code from the response, but give me 3 paragraphs of useless information I told it not to include.
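For anyone who wants to sanity-check their own prompts the same way, here is a minimal, stdlib-only sketch of how I estimate token counts. It assumes OpenAI’s published rule of thumb of roughly 4 characters per token for English text; an exact count would require a tokenizer library such as tiktoken, and the sample script line is purely illustrative:

```python
# Rough token estimate for a prompt or response.
# Assumption: ~4 characters per token (OpenAI's rule of thumb for
# English text). For exact counts you'd use a tokenizer library
# such as tiktoken; this sketch needs only the standard library.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

# A ~180-line script is the range where I see abbreviation kick in.
script = "    result = compute(a, b)  # one line of code\n" * 180
print(estimate_tokens(script))  # still well under a 4k-token context
```

Even by this crude estimate, a 180-line script lands far below 4k tokens, which is why the abbreviation behavior can’t be explained by the advertised context limit alone.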