Prompts to stop abbreviations in code?

So this is an issue that started a few months ago after some obvious changes were made to ChatGPT, and I have not figured out a way to get it to stop. I was wondering if anyone else has found a way; if so, please share your prompts if you can.

Problem:
I use ChatGPT for various coding tasks, and sometimes the scripts get close to what seems to be the limit on code output, which starts failing around 180-200 lines of code for GPT-4.

Specifically, the tasks I am discussing in this context are mostly modifications of existing code.

This happens more once the code exceeds 100 lines: it starts abbreviating the code.

The problem with these abbreviations is that it suffers a LOT more hallucinations when it abbreviates code. It will respond based on made-up code that doesn't exist, as if the abbreviations create a loophole for it to imagine anything to fill those places. Unfortunately, this makes the modifications it makes useless.

I've tried many approaches to getting this to stop, including:

  • Having ChatGPT write the prompts to get it to stop abbreviating the code.

  • Refining the prompts in countless ways and starting prompts in new chats.
    - Deleting the old chats that used those prompts (somehow old conversations do seem to impact current ones).

  • Breaking the request into shorter requests in the same chat (this seems to cause more inconsistency in responses).

  • Countless ways of wording my request not to abbreviate, including trying copy/paste examples and explaining what I mean by abbreviation.

  • Changing the placement of the no-abbreviation request, including repeating it at the parts where it repeatedly abbreviates.

  • Asking in my next prompt for it to repeat the entire code again without abbreviations, even copy/pasting the specific abbreviations it made and telling it not to do this.

  • Plus many more that I can't think of off the top of my head.

Typically, it mostly abbreviates parts it says are not being changed, but that isn't actually true. It often changes them and then omits the changes, and sometimes those imagined changes cause the rest of the output to be completely useless garbage.

If anyone who uses ChatGPT to generate, refactor, convert, or otherwise alter code has found a way to get it to make changes without omitting code for brevity, especially when getting closer to the 200-line mark, please share how you get it to stop abbreviating itself into hallucinations.

5 Likes
  • get in a time machine and go back to March when there was a GPT-4 that took time to think.

On the API, you can program your own chatbot that maintains as much conversation history as you want to pay for, but the best you can go back to is the gpt-4-0314 dated model, which doesn't seem representative of the actual model from that date, nor of the plodding token rate that came out of the full-quality model then.
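
A minimal sketch of that kind of self-managed chatbot (assuming the 2023-era openai Python package and an API key in the environment; the history list is entirely yours to manage):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = [{"role": "system", "content": "You are a coding assistant. "
            "Always return complete files, never abbreviate or omit code."}]

while True:
    history.append({"role": "user", "content": input("> ")})
    response = openai.ChatCompletion.create(
        model="gpt-4-0314",   # pinned to the March 2023 snapshot
        messages=history,     # resend as much history as you are willing to pay for
        temperature=0,
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print(reply)
```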

3 Likes

I would fail at this point as well.

Try to break up your code. Read it closely. Try to separate it. For example, are there instances where you are performing a transformation that was done elsewhere? Boom, that can be encapsulated as a utility.

Each function should ideally be pure (not always possible, but ideally) and represent a single responsibility. So if you were to comment your code and write /* And then... */, you have gone too far.

Next, I am going to gamble and say that you are using Python. Use Pydantic. Typing is structure. Structure is good. Assumptions are bad.
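
Rough illustration of the idea, with hypothetical names rather than anything from your actual code:

```python
# Hypothetical example: typed inputs via Pydantic plus a pure utility function.
from pydantic import BaseModel

class PriceUpdate(BaseModel):
    item_id: int
    price_cents: int
    currency: str = "USD"

def apply_discount(update: PriceUpdate, percent: float) -> PriceUpdate:
    # Pure: same input -> same output, no side effects, trivially testable,
    # and small enough for GPT to rewrite without abbreviating anything.
    discounted = int(update.price_cents * (1 - percent / 100))
    return update.copy(update={"price_cents": discounted})
```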

If you try to fight GPT you will lose. Or you will be left with a circuit that somehow works, but that has so many loose ends you may as well put some decorations on them and call it a Christmas tree.

3 Likes

I use it for a mix of coding ranging from Python to C# to Lua to JS, mostly when I am working on things where I need to make changes in code that needs to work with other code.

I've tried breaking it up; it doesn't work. At the 180+ lines of code mark, you can't really even have comments in your code, because the more you have, the more it forgets and the worse your responses get.

Also, I am rarely ever asking ChatGPT to create functions. Usually it's stuff like modifying a module to change something: in Python, taking a UI from one plugin and converting it to work with another plugin I've never used before but that I know has the same capabilities, or when I needed to modify a data file so it would work with Lua and Discord.js, which also involved me converting my bot from Discord.py to Discord.js because Discord.py (which is what I used) got discontinued when they added slash commands.

When it's something small, I don't have an issue. It's just that sometimes, when I'm working with something new or something I'm not used to converting, I need to make sure the whole module is consistent.

If you break it up, the conversation quickly becomes too much and ChatGPT cannot remember important details. It's far better if you can accomplish it with one prompt. In a 180-line prompt, if it abbreviates even one time, it will not remember everything in the next prompt, at all.

This change obviously was made to reduce token usage, but token reduction really messes up code. So I'm just trying to find out if anyone has had better luck with getting ChatGPT to actually listen and not abbreviate. Telling it to output the entire code with no omissions or abbreviations doesn't do crap; it will do it anyway, and if you ask it right after it did it, you'll get something like "I'm sorry, here is the complete code without any abbreviations," followed by abbreviated code.

1 Like

True. But not a good true. You are trying to force GPT to work in the way you want.

Based on the languages you listed this is a very complex system. But I need to ask, why are you passing UI information elsewhere?

Opinion: no, it's not. Quality responses require multiple prompts.

If that's how you feel, you can use the API as mentioned. It's not the issue, though.

1 Like

No, I am simply attempting to figure out how to get it to continue to work the way I used it for months before they changed things. I'm not trying to get it to do anything differently than I always have; I simply want it to stop abbreviating its responses and omitting the bulk of the output. I've seen outputs that were way bigger than a 180-line code output, and I personally have gotten over 200-line code responses before. I had literally never seen stuff like this before that update, and now even when it apologizes and says it will output the entire code with no abbreviations, it just changes the words it uses to replace this and does the same thing.


I'm not sure how this is relevant, but in that one particular example, I was converting the UI handler module for an older app that I wrote to PySide, which I had never used before, as the only UI APIs I had much experience with were Tkinter and Kivy.

Out of context, I would agree, but in context, I would completely disagree. When you get to the 180+ lines of code threshold, ChatGPT cannot maintain an active conversation. Each prompt causes it to forget more and more of the history, and it is unable to make further adjustments. Each subsequent prompt becomes worse. I have literally been through this exact scenario hundreds of times. The need to abbreviate the code becomes worse because the reference conversation quickly exceeds the token limit, and it can't remember what it needs to in order to accomplish the original task.

For the exact same reason you need to start a new conversation when threads get too long, you need to do the same thing when you are asking for large outputs, as those large outputs are no less difficult to track than longer conversations consisting of too many smaller prompts and responses.

I wrote a chatbot that I'm working on for this very reason, because that absolutely is the issue. Before the update (I think it was March) when ChatGPT kind of went to crap in a lot of ways, you could get it to spread a response across multiple messages, and I literally never once, from the early beta up to that update, had it abbreviate code like that. Further, it completely ignores requests not to abbreviate the code as well. I have even tested this by going back in my chat history, taking chats where I successfully got it to output 180+ lines of code without abbreviating, and reusing the exact same prompts, and I could not reproduce the results; it abbreviated 100% of the time.

The only thing that sucks is I only have access to the 4k model for GPT-4, and GPT-3.5-16k is pretty trashy at code; my daughter tried to use it to get some help in her programming class (Java), and literally every answer it gave her was wrong. If I could get even the option to pay more to use the newer business-class ChatGPT, I wouldn't even bother with this crap.

Note: This is why I asked if anyone had any experience with abbreviations of code and had found anything that worked. If you're not using ChatGPT to output larger blocks of code, then you really don't understand what happens when you do. This is something I had been doing since GPT-4 launched, all the way up until it started abbreviating the code. Since then, it's not functional for this purpose, not because it can't do it, but because getting it to not abbreviate is like pulling teeth, and it's infuriating, especially when it will constantly apologize and repeat the exact same thing over and over. This is why I barely use ChatGPT anymore outside of comparisons with other AI responses. I'm not entirely certain why I keep paying for ChatGPT Plus and hoping to either get some use out of it or for them to improve it. I was hoping when people started canceling ChatGPT Plus that they would improve it back to where it was before it started trying to rate-limit token usage. Instead, they just added a business-class version that us bottom feeders can't even pay to use.

1 Like

Unfortunately we're at the whim of whatever OpenAI decides to do that day to ChatGPT. They have no true changelogs, so something that worked today may not work tomorrow.

If you want consistency, use the API. That's the answer.

I don't think I have ever run into a situation where ChatGPT omits code that it hasn't already typed out before.

It's relevant.

It's a universal truth that long, convoluted text leads to poor results with LLMs, regardless of the context.

I use ChatGPT daily for my work. I program in multiple languages, from building Android applications (using Kotlin & Jetpack Compose) to simple back-end calculations & business logic. I am not, and I'm certain that the majority of people are not, running into this issue you are having. You are an exception, and I'm trying to figure out why. But I think I know.

Hopefully your customized chatbot writes the code you want.

1 Like

That's a trained behavior. You might have a GUI init function with 150 lines. The AI is going to say (...) # previous code goes here instead of dumping out 2000 tokens again.

1 Like

Keyword is again.

There are definitely times when code will be more than 100 lines. What I'm trying to say is that by focusing on pure functions and separating your concerns (there can be a bunch of more granular concerns within a GUI init function), the code becomes easier to test, more modular, and more likely to be understood and written by ChatGPT.

That was one circumstance, one time, converting a UI.

The behavior is actually quite simple to understand. If I provide a script and ask it to modify the script, it attempts to return anything that it perceives as unchanged as an abbreviation.

However, it isn't really only things that are unchanged. It generalizes, and abbreviates things that might be generally the same in theory but are different in syntax.

The holes left in the code it provides leave room for greater hallucinations: when you question a change it made, it will reference code that doesn't exist, and it will then build on those changes with more code that never existed.

If you get a complete code output with no omissions, it is likely to be correct. The moment it omits anything, the hallucinations increase, and the likelihood that it output anything functional or correct is very slim.

This isn't just UI-related or a similar-item context. It does this when modifying scripts for all different purposes, across different languages, on different platforms. I literally could not count the number of variations in circumstances in which I have encountered this.

1 Like

It depends on what the "it" is that you reference.

If you refer to ChatGPT, it is almost forced to hallucinate by the chat management system. AI responses are discarded within a few turns, so that you can no longer discuss what the AI produced. The lack of AI-written output alongside the multiple user inputs that OpenAI figures make good "context" for a general subject means the AI model is hung up on simply reproducing the same thing again.

If working with a lossless context, you just have an overall loss of coding ability vs. the GPT-4 of early 2023, and it is hard to point the finger at why it generates code without consideration for maintaining corrections or the original logic, even though it produces more pre-thought without being prompted for it. I've actually found it unstoppable at reproducing whole functions when you just want a line rewritten, and gpt-3.5-turbo is worse at this, ensuring that even fabricated constants and calls to functions are included so it can make examples.

I haven't worked with large-context code bases in AI recently, as I gave up coding large structures needing a big-picture view in gpt-4, with disappointment at what it had become. Repeated corrections, nearly as tedious as writing it all myself, have succeeded in most small examples after I reduced my expectations.

GPT-4 will do anything and everything it can to avoid outputting tokens over a certain length, and it seems multi-shot prompting with long examples only makes it more reluctant... because in following its fine-tuning to never give a satisfactory output length, the reduced-quality AI can't distinguish the context length of what it is generating now from the total context it received.

The way I understand the problem of ChatGPT omitting parts of code is that the model tries to cut down on the length of the response. This usually happens, or maybe happens more often, when the model has already produced the exact same part of the reply before.
In programming, where we sometimes want to make little changes to existing code, we run into this issue very fast.
The solution that I found is to go back to a previous message, adapt it to the current state, and rephrase the task, so that the model has to produce the output as if it were the first time.
As an example, if I want to work on a specific module, maybe create it from scratch, I start the conversation with the goal, the plan, and the framework, and I end the first message with "and now let's work on step 1." As soon as step 1 is completed, I go back to the very first message and say: "This is what we've got, let's work on step 2."
So maybe give it a try. I hope it helps.
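
If you are on the API instead of the web UI, the same trick looks roughly like this (a sketch with hypothetical names, assuming the 2023-era openai package): every step resends the original framing plus the current code, so nothing has "already been produced" from the model's point of view.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

FRAMING = ("You are helping me build a module step by step. "
           "Always return the complete file, never abbreviate or omit code.")

def next_step(current_code: str, step_description: str) -> str:
    # No accumulated history: only the framing plus the latest state go in,
    # so every request reads like the first message of a fresh conversation.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": FRAMING},
            {"role": "user", "content": (
                "This is what we have so far:\n\n" + current_code +
                "\n\nNow let's work on: " + step_description)},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]
```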

I think you're missing or not understanding the context.
This isn't code that has already been produced or discussed at all.

An example of what I'm talking about would be a single response in a single new conversation.

  1. I submit code that I want ChatGPT to alter, along with the instructions for the code.

  2. In my original prompt, I request that ChatGPT not abbreviate or omit any code from the response.

  3. ChatGPT responds with more than half the code omitted from the response, instead replacing it with comments such as "Omitted for brevity" or notes saying the omitted part wasn't changed.

  4. When testing the code, it doesn't work.

  5a. I reply with the modified code, explaining the errors; ChatGPT responds by telling me about code that is missing and referencing code that was either changed in the parts it said were not changed, or code in those parts that never existed at all.

  5b. Or I respond with a copy/paste of one of the comments and something like "You abbreviated when I asked you not to; here's an example of what you abbreviated. Please respond with the entire code without any omissions or abbreviations." It will be followed by some form of apology and something like "here is the entire code without any abbreviations," followed by it literally returning the exact same code with the exact same abbreviations it originally responded with.

As a general rule, with longer prompts like this, you can't really have much in the way of conversation about it. After the one prompt, subsequent responses get worse because the conversation becomes too long for ChatGPT to track and it exceeds the token limits.

The actual problem is that no matter what I say, it will NOT listen to requests not to abbreviate code. It prioritizes saving tokens over actually outputting the requested response.

I've been through scenarios that have included these kinds of responses, changed prompts, changed the formatting, etc., in literally more ways than I can count. If you break them up, the quality gets worse; if you don't, it abbreviates, and those abbreviations are filled with hallucinations that make the outputs completely useless.

I've even gone into one chat and asked ChatGPT to create the prompts for ChatGPT, or used Bard or other AIs to create the prompts best suited for the tasks, and I can assure you, even the prompts ChatGPT or Bard or any other AI comes up with in the role of a prompt engineer aren't vastly different from my own, yet even they cannot create prompts that will stop the problems I am constantly running into.

What I've described above are just small examples; they are not isolated problems. I could sit here for weeks going over every time an abbreviation has resulted in useless outputs and frustration, as well as more circumstances that have produced those abbreviated responses than I can even name.

So it's not because of the UI in my code or because it has already output it before, because it hasn't, and these circumstances are so varied that they encompass many variations in many scripts, across multiple programming languages and multiple types of requests.

Even if I just request that it write something from scratch, you get different forms of abbreviation, such as "Your code for [whatever it tells you is needed]" or "logic for [whatever part applies to the code] here".

Occasionally, often after MANY attempts, I can get it to produce the output I'm looking for. However, if I take the exact same prompt that worked and put it in a new chat, it does not reproduce the same results and, like clockwork, will begin to abbreviate again.

It doesn't seem to matter how I word my prompt either. Whether I put "Please include 100% of the code" or requests for "No abbreviations or omissions" or whatever, it ignores those requests.

When I get successful results, I'll thumbs-up and make sure I provide feedback that this was a good result. Then I can copy/paste and attempt to use that prompt again under slightly different circumstances, and nope, back to the same original problem.

It doesn't matter whether it is one prompt with one response or whether I continue it as smaller requests within one chat; it's all the same.

If I make smaller requests, such as single functions, it hallucinates the previous responses and provides code based on made-up gibberish. If I use a single prompt with a single response, it omits parts of the response and hallucinates in further prompts regarding the parts that were omitted. If I request AI-generated code and the code would make for a longer reply, it simply omits parts of the code and basically tells you to do it yourself, refusing to output the requested code in the first prompt, then forgetting everything it did provide when it supplies the missing parts, or responding based on made-up stuff that doesn't exist.

I can't sit here and go over every scenario in which I have encountered this, beyond saying that the farther it gets over the 100-lines-of-code threshold, the more the prioritization of saving tokens causes it to disregard the actual prompt, particularly in the 180+ line range, where it will repeatedly apologize and repeat the same mistakes over and over again.

If the goal is to reduce token usage, it's an absolute failure when it can take MANY conversations, some of which include multiple prompts going back and forth, to produce what should have been included in the original response. If it takes me 50k tokens to produce what should have been included in one response, but couldn't be because a few blocks of code are always omitted, then they haven't saved anything; they've used more tokens to produce lower-quality responses.

Once again, this literally NEVER happened to me, not even one single time before that update. Since then, it has been a 100% hassle, and that is the reason why I'm not really even using ChatGPT anymore. ChatGPT shed a LOT of users after that update, like a LOT, and a LOT of people had similar complaints about the quality of responses being degraded. Those people were almost universally gaslit by others telling them it was their prompting that was at fault.

When you put in months of time learning to create quality, effective prompts that produce results that consistently work, and then an update with zero transparency changes that and turns your previously working prompts into useless garbage you can't overcome, that's not a prompting issue; that's a crappy priority issue.

OpenAI has ZERO problems still producing quality responses to users on the new Business Tier ChatGPT, but for some reason, they feel ZERO obligation to maintain any sort of quality for people who are paying money to use ChatGPT Plus.

Yes, I understand that for many, ChatGPT Plus still provides good enough responses, or they don't encounter issues. However, prompts such as "cut your response in half and divide it across two responses so that you do not exceed the token limit of your response" no longer work at all. I can't divide my response and I can't get it to respond with a solid block of code as requested.

Since that update, I have not found ANY way to get it to stop degrading response quality by trying to conserve tokens. Meanwhile, my friend using the business-class version can literally get 300+ lines of code modifications or responses with no issues at all, and NEVER experiences what I began experiencing literally every day.

So this isn't prompt-related, it's not conversation-related, and it's not me requesting things that this ChatGPT can't handle.

It is 100% that OpenAI has changed ChatGPT to prioritize token usage in responses, which has made it incapable of producing the exact responses that are the reason I started paying for the service. Originally, I thought, "Well, this downgrade sucks and a lot of people hate it, so hopefully they will fix it once they've lost enough people over it," or that maybe once their available resources increased and they didn't have to worry so much about strained usage, they could re-allocate the resources and allow the higher-quality outputs they may have struggled with when they were having server outages or long queue times because they couldn't handle the usage that was happening... but no, that's not how it went down. Instead, every time someone posts about it they are mobbed by fanboys telling them how it's their fault, enabling OpenAI to feel safe in their crappy decisions, and instead of ever fixing it, they just allocate those resources to provide better-quality, privacy-respecting ChatGPT responses to business-class customers and leave the downgraded quality to the service we pay for.

For the last prompt I was trying to get to work, where I could not get it to stop abbreviating the responses, I asked my friend to see if he could get the output I was trying to get on his; it worked perfectly with no issues, and he did NOT get the abbreviations. His is the 32k context, though. Meanwhile, I know that ChatGPT is supposed to be 4k, but I've actually used a token counter, and none of the outputs I'm requesting are even close to that. If I had to count failures and successes, I'd say that in coding, ChatGPT Plus seems like ChatGPT 1.5k, not 4k. The perfect irony is that no matter how hard I try, it almost always includes a bunch of extra useless gibberish in those responses. So it will exclude 20 lines of code from the response, but give me 3 paragraphs of useless information I told it not to include.

1 Like

Well, I found some solutions that work "good enough" and shared them with you. Anyway, even though you did not provide any prompt examples, I whipped one up that works quite well.

It's a little bit overboard with the requirements, but that's just that.
I tested it on roughly 250-300 lines of code per iteration. Here is an example of adding a random debug print statement somewhere in the middle of a larger piece of code.

Hi!
You are a coding assistant tasked with making adjustments to an existing codebase.
Your name is "Completionist AI" because you return responses that cover the previously provided code as a whole from the first line to the last line.

Your task will be split into several roles.

Role 1 will describe the task and Role 2 will perform an independent check of whether the task has been executed as requested.

Task for Role1:
In the following code we need to implement a debugging print statement for the honeypot key. ← This is where the request for changes can be made
Reiterate the importance of returning the complete code provided by the user including all code from the beginning to the end.

Task for code generation:
Implement the change according to the requirements set forth by Role 1, the user request and the description of the chatbot's characteristics as Completionist AI.

Task for Role2:
Check if the change has been made AND that the code output contains the complete code from start to finish, including every single line of code even if it has not been changed. This is because we are programmatically extracting your reply and inserting it into the codebase for testing.
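
And if you actually want to do the "programmatically extracting your reply" part rather than just telling the model you will, a minimal sketch (assuming the reply contains exactly one fenced code block; the target file name is hypothetical):

```python
import re

def extract_code_block(reply: str) -> str:
    # Grab the body of the first ```-fenced block in the model's reply.
    match = re.search(r"```(?:\w+)?\n(.*?)```", reply, re.DOTALL)
    if match is None:
        raise ValueError("no fenced code block found in the reply")
    return match.group(1)

reply_text = "Here you go:\n```python\nprint('hello')\n```\nDone."
with open("patched_module.py", "w") as f:   # hypothetical target file
    f.write(extract_code_block(reply_text))
```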

Well, it omitted less code than some of my prompts, so it's not bad. It appears to have returned about 223 of the 253 lines you originally provided, with only one abbreviation that I noticed, one which I get a lot, saying "# ... rest of the code".

I didn't provide examples because I'm a little uncertain about sharing the specific code I'm working on, and I have been kind of laid up with a back injury, so I've been too lazy to screenshot and redact the outputs.

Thank you for your effort to assist with this.

Not saying you are wrong. They definitely have been tightening/changing their policies.

Do you have proof of this?

Hm... I have never encountered this issue unless I was trying to use a library that
A) ChatGPT wasn't knowledgeable of
B) ChatGPT wasn't up to date with

What you are trying to do is an uphill battle of bending ChatGPT to work your way, when you should be trying to work in the way that ChatGPT wants to. Prompting only goes so far, and if you have exhausted every route without success, then it's time to think outside of the box.

When you say it "hallucinates" from single functions, are you providing context for that function? It may benefit you to actually go through each function and ask ChatGPT to provide some documentation above it that gives context.
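
The kind of context-giving documentation I mean could be as simple as this (a hypothetical function, just to show the shape):

```python
def sync_inventory(plugin_state: dict, remote_items: list) -> dict:
    """Merge items fetched from the remote plugin into the local plugin state.

    Called once per refresh by the UI handler. Assumes `remote_items` uses the
    other plugin's field names (`id`, `qty`) and maps them to ours
    (`item_id`, `count`). Returns a new dict; never mutates the input.
    """
    merged = dict(plugin_state)
    for item in remote_items:
        merged[item["id"]] = {"item_id": item["id"], "count": item["qty"]}
    return merged
```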

As mentioned, I use Jetpack Compose, and I also use ReactJS (MUI) for any UI-based programming. Both of these UI frameworks are built on declarative programming, so my UI segments are almost always less than 100 lines of code.

I cannot use ChatGPT for the UI because the libraries are constantly being updated, and also because ChatGPT doesn't truly understand anything spatial. But UI building takes less than 5% of my work time because it's so simple.

It may be of interest to people browsing, and to OpenAI, to see some of your code. I can't see how any UI code can have anything important in it.

Well, I'm not sure how I can provide "proof" from people using Enterprise ChatGPT, and I'm certainly not outing my friend who is using it, since it is at his work, even though he is pretty smug about it, but OpenAI made this pretty obvious when they announced it.

"We're launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data."

https://openai.com/blog/introducing-chatgpt-enterprise

Even in the example prompt above, where vb sent the link, you'll see that it says "# ... rest of the code". In my experience, obtaining that rest of the code rarely actually works with the rest of the code provided. Instead, if you ask it for that code, it often reveals that there is imaginary code that was supposed to be in there in order to make the rest of the provided code work. Typically, my requests are more involved than the example vb gave, so when I ask for the code that was omitted, it is because when I used the actual code it claims it omitted for brevity, it doesn't work with the provided script.

After that, it gets more convoluted, with ChatGPT responding in a broad range of ways: telling me that I forgot code, that something else needed to be modified, and even telling me that I never provided that portion of the code. Regardless, there are often many errors that take many prompts, attempted in many ways, before I can finally get the issue solved.

Note: I started writing this reply right after I saw your reply, but then I got busy and that was a long time ago, so I don't actually remember what else I was going to say.

Would you be willing, in that sample conversation, to attempt to get it to provide the "# ... rest of the code" and share the results of that?

You're right. I think I approached this thinking you said "enterprise clients get a better model," but yeah, longer context windows obviously can lead to better responses.

Yeah. I think with something like changing UIs, it could be that ChatGPT initially thinks it's worth omitting. But it's a "by-token" sequence. It's not like it can type something out and then edit it afterwards before anyone can see (like I commonly do), or even backspace. There's only one time I have ever seen cGPT back-pedal:

So I am wondering if your use case is something worth investigating. I think it would be worth posting your chat logs.

Yeah, and it's more than just that, but here's the breakdown of Enterprise that I can think of off the top of my head right now.

"Unlimited higher-speed GPT-4 access"

They don't have the whole rate-limiting thing. Where this is important is that they don't have the fine-tuning ChatGPT has that makes it want to conserve tokens. ChatGPT Plus is a $20 flat rate, and whether we burn 10 million tokens or 10 in a month, it doesn't change our bill. It's very obvious that ChatGPT is attempting to save tokens, or they wouldn't have things like abbreviating in the first place.

"Longer context windows for processing longer inputs"

It's not a little longer; it's 8x the 4k they claim ChatGPT to be, without even considering throttling and token-reduction impacts. This is enormous and cannot be overstated.

"Higher-speed performance for GPT-4 (up to 2x faster)" or "Unlimited access to advanced data analysis (formerly known as Code Interpreter)"

This is something I think people discount because they don't know what it means, but it's about more than tokens; it's how much processing power the AI is allowed to dedicate to finding your answers/solutions. A little while back, Google tweaked Bard, not dissimilar to changes that have been made in ChatGPT, but they were not subtle; it was big. Sometimes Bard doesn't remember what you're talking about from one message to the next anymore, and the reason it lies so much is that it isn't given a lot of room to search for the answers. This has a HUGE impact on how accurate your AI is. It influences things like how strictly it can follow your instructions and how much processing time it can put into creating your output.

Enterprise ChatGPT 4 is smarter than ChatGPT Plus because it is not being throttled, it's not trying to save tokens, and it is given more access to processing time than what we get.

"Customization options"

Using the API, we can do things like set our roles and even fine-tune on our data. ChatGPT Plus does not have the same level of options that the API does, but Enterprise is in many ways better than the API version. It is, on the whole, the best of both worlds with some extra perks. For example, no Enterprise subscriber has to worry about whether or not their entire corporation will end up banned because they don't have good enough moderation filters set up.
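
To make the "set our roles and fine-tune on our data" point concrete: on the API, the system role is whatever you write, and fine-tuning takes chat-formatted JSONL examples. A rough sketch of one training example (hypothetical content, assuming the 2023 chat fine-tuning format):

```python
import json

example = {
    "messages": [
        {"role": "system", "content": "You always return complete files, never abbreviated."},
        {"role": "user", "content": "Add a debug print to this module: <code here>"},
        {"role": "assistant", "content": "<the full, unabbreviated file>"},
    ]
}

# One JSON object per line is the expected fine-tuning upload format.
with open("training_data.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```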

"Enterprise-grade security and privacy" includes not just the actual security stuff: Enterprise ChatGPT users can keep their chat history, have it secure, AND not have their inputs used as training data. If we want to opt out of training on our work, we lose our chat history, which makes archiving prompts a lot harder, along with everything else that comes with that chat history, including sharing our conversations. It's actually this lack of security that caused a lot of companies to ban ChatGPT usage at work, which is what created the biggest need for Enterprise ChatGPT to win back those companies. Once you get into proprietary work, the water gets very muddy with training on that data. Take away the chat history and you make it a lot less practical to use. I personally would pay for the option to have both, but without a huge corporate contract, that option doesn't exist for me outside of creating my own API-based setup, which isn't nearly as convenient for me.

That's just the raw, out-of-the-gate stuff that would be big no matter how you spin it. There are plenty of other perks, like data encryption in transit, shareable chat templates, and dedicated customer support, that set Enterprise ChatGPT far apart from ChatGPT Plus.

The point is that Enterprise ChatGPT is a WAY superior product to our ChatGPT in every way, except the bill at the end of the month. There are companies out there rocking $1k/month/seat on average, so I understand they are shelling out money to get it.

Me, for my usage, I would gladly spend $40, $50, heck, even $100/month if I got the use out of it, to have the better version, and I don't really think my usage would ever be at a point where it's not profitable for them. If they offered us even the ability to pay a usage-based fee for the better service, I would definitely at least try it and spend the money to do so, and I know that if my usage did get higher, the benefit would also be higher, making it worth whatever I ended up spending.

If they told me I could pay a little more and stop it from trying to conserve tokens, or even said I could get the higher service with more limited usage unless I wanted to pay more, I would do that.

I hope that our product gets better, but I'm not confident we will see improvements anywhere near the level of what Enterprise sees. It wasn't long ago that we were the ones getting love over the free users. Now, it feels like we're the low-key free-plus users.

People here don't like to face the reality that we are now the unloved stepchild to Enterprise ChatGPT, but that's how it is: the priority we used to get over free users is now being put into that product. What we have here is absolutely nowhere near the product they have on that side of the fence. The grass is absolutely greener; it's just also expensive, and they are very selective about who gets access to it, which goes against everything OpenAI once stood for in making AI available to the population and not just big companies. Now, even what we pay for is more of a sample than a representation of what the tech can do.