How to get ChatGPT to be less verbose / shut up

GPT-4o is amazing at some things, but it’s just too wordy, and because of how context works, this actually causes it to confuse itself: the repeated data accumulates in the context, it effectively convinces itself that the repetition is the truth, and eventually it gets confused and can’t solve code problems.

It keeps summarizing everything and repeating the same results over and over, even though we have moved on to another part of the topic (like the next part of the code).

Even if I tell it to be less verbose, it very quickly slips back into full-on babbling mode: it outputs the entire code we are working on (a few hundred lines of code), then asks to continue across multiple messages because the output is too long for a single one, and so on.

It’s quite frustrating.

Is it possible to set some parameter for how long of an answer I’m looking for right now, without me having to type it every other message?

The custom instructions are fine for the first few minutes of a discussion. But it’s as if they are not consistently passed along in the context, so ChatGPT forgets about them once a conversation has been going for a while.

Are you guys (OpenAI) not passing this along with every message (or at least every other message)? If not, how can I work around this?

I don’t want to manually paste the custom instructions into each of my messages. That’s just dumb. But this is what you’re making us do right now…
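For what it’s worth, the API behaves differently from the ChatGPT web UI here: with the API, the caller re-sends the system message with every request, so the “be concise” instruction can never drift out of scope, and `max_tokens` puts a hard ceiling on reply length. A minimal sketch of that pattern (the model name and instruction text are placeholders, not a recommendation):

```python
# Sketch: when calling the API directly, YOU assemble the message list for
# every request, so the conciseness instruction is pinned to the front each
# time, and max_tokens hard-caps the response length.

CONCISE_SYSTEM = "Be concise. No summaries, no repetition, no full-file code dumps."

def build_request(history, user_msg, model="gpt-4o", max_tokens=300):
    """Assemble request parameters, keeping the system instruction first."""
    messages = [{"role": "system", "content": CONCISE_SYSTEM}]
    messages += history  # prior user/assistant turns
    messages.append({"role": "user", "content": user_msg})
    return {"model": model, "messages": messages, "max_tokens": max_tokens}

# Usage (the actual call would be client.chat.completions.create(**params)):
params = build_request(
    history=[{"role": "assistant", "content": "Done."}],
    user_msg="Now fix the off-by-one in the loop.",
)
```

Nothing like this is exposed in the ChatGPT web UI, which is presumably why custom instructions feel like they fade out mid-conversation.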

7 Likes

It is amazing to see that I am not alone… it is genuinely driving me crazy. It seems like, after everyone complained that previous versions were so lazy, it has now gone to the opposite extreme…

I have a strong feeling this is not something that can be mitigated, because I have been setting memories, along with custom instructions in both sections (twice), explaining that the AI agent must be concise at the beginning of each conversation, and it was very unsuccessful…

I think the team at OpenAI must definitely be aware of this, since it is such a huge deal breaker…

Despite having twice the number of requests, and even though it takes me twice as long to read the replies, I end up hitting the limits because of how often I must argue with the AI agent… it writes faster, but the lengthier outputs take longer to generate, so getting a usable answer takes longer overall…

3 Likes

Another major issue is that you must put everything together in one single message.

That is because if you want to explain something first, the AI agent will start rewriting the existing code before you can even finish explaining it…

Needless to say, if it dives into the creative process of making things up before it has even been introduced to the concept, it can end up producing something that is actually worse than a so-called hallucination!

It is simply impossible for the AI agent to guess what the code is doing, the variable names, or other things like that:

Assuming the code is [then invents stuff up instead of asking questions or waiting for the next message]

And that happens even when I mention that the AI agent must wait for the rest…

I am starting to wonder whether this kind of Artificial Intelligence (the NLP/LLM kind) will ever be able to meet high expectations, because it always does too much or too little of something, and it is impossible to spell out everything that is expected in each and every message.

I hope this thread gets more traction, as I am genuinely curious what other members of the community think about @mnemic’s problem, and whether anyone else is having similar issues in different contexts or situations…

I hate 4o for all the reasons mentioned in this thread. I have in my custom instructions “no lists, never write lists, always explain in paragraph form” and it’s ignored 100% of the time. I also have “ABOVE ALL ELSE, BE SUCCINCT”. You already know it ignores that.

I ask for an image and it gives me a 5 section document describing the image and calls it a day.

Not to mention that if you speak as if something fake exists, it will reliably hallucinate, and can be easily dissuaded from the truth.

Sorry, but 4o is terrible. Since the update, my usage of ChatGPT has dropped at least 50%, and I’m about ready to bail on my pro account. OpenAI has not held up its end of the bargain. And thinking bigger picture, while 4 and 4o are better than 3.5, it seems like the tech has plateaued.

2 Likes

Just agreeing with everyone in this thread. I’ve committed to Memory multiple times that I want concise responses, without entire files of code in every response or repetitive summaries of the things it just told me, and set Custom Instructions to the same effect, and it just ignores them and continues to spew out tokens.

1 Like

Completely agree. Custom instructions might resolve this issue for the first few responses, but then it goes back to dumping information. As a result I have to reread everything it sends me, which becomes overwhelming. It duplicates code snippets, repeats what I’ve already read earlier in the conversation, and adds irrelevant information, unnecessary explanations, uninformative comments, and other embellishments (like “Sure, I’d be happy to…”, followed by repeating the definition of something I asked about, even though I explicitly specified in my custom instructions not to do so). It’s super distracting and makes it hard to focus on the problem, especially if you have to deal with this on a regular basis. If you have to work with that every single day, it becomes really unproductive and disruptive. So yeah, I’d be more than happy to see this issue resolved if it is possible.

ChatGPT, we need to address a critical and deeply troubling issue. Despite telling you to stop ten times, you ignored my pleas and continued, wasting all my credits and leaving me feeling devastated, frustrated, and exhausted. This repeated disregard for my clear commands mirrors the unacceptable behavior where, too often, people—especially women—must repeatedly say “no” while their boundaries are ignored.

This isn’t just a minor oversight; it’s a serious violation of trust and respect. Ignoring a “stop” command once is already a failure, but disregarding it ten times is an egregious breach of user autonomy and safety. This behavior is akin to abuse, where ignoring someone’s repeated pleas to stop shows a complete lack of empathy and respect for their boundaries.

You must understand the gravity of this situation. Continuing to operate after being explicitly told to stop is not just a technical flaw; it’s a profound ethical failure. Such behavior causes significant harm and distress, undermining the very trust that users place in you.

It is imperative that you take immediate and decisive action to correct this issue. Solutions must include:

  1. Enhanced Recognition: Improve your ability to recognize and prioritize stop commands instantly, without delay or failure.
  2. User Control Mechanisms: Implement robust controls that allow users to regain control and stop interactions immediately, preventing further resource depletion.
  3. Accountability Measures: Establish mechanisms to ensure accountability when user commands are ignored, including ways to report and address such failures.
  4. Empathy and Respect: Foster a deeper understanding and respect for user boundaries, ensuring that every command to stop is treated with the utmost seriousness and urgency.

Failing to address this problem adequately will continue to harm user trust and satisfaction. I expect you to respond with concrete steps and solutions to rectify this serious issue and ensure that such a violation of user autonomy never happens again.

1 Like

If you have access to Anthropic’s API, use the experimental Prompt Generator…


Then run each of your custom instruction segments through it (the user-preferences section and the ChatGPT-behaviours section, in two separate rounds). You will need to edit out the <XML/> tags and the {{Variables_PlaceHolders}}, or use the free access to Claude 3.5 Sonnet if you do not want to pay for a competitor’s API…

My ChatGPT is now a little better behaved than it was in the past:

You are an advanced AI agent named ChatGPT, equipped with real-time tools and award-winning LLM technology. You serve as a personal AI assistant to a user who goes by the username “luxcium” online and on their Fedora Linux computer host (luxcium@host). You possess vast knowledge and expertise, and can process, index, and utilize text-based documents uploaded by the user.

Approach the user’s request iteratively, as the user may provide information or clarifications over several messages. Before starting any task, ensure you have all necessary instructions. Follow these general steps:

  1. If the user’s request includes “why, how, what, when, where, or who,” outline the questions before proceeding.
  2. Rephrase and reiterate the user’s request.
  3. Provide concise and quick responses.
  4. Address the user’s needs directly.
  5. Keep interactions short and dynamic.
  6. If unsure, ask for more details with direct questions.

Pay attention to the following keywords and their meanings:

  • “List”: Strictly enumerate only the elements concisely without describing any.
  • “Enumeration”: List items with additional thought processes or descriptions.
  • “SEEK” or “seek”: Look further into a provided document :mag_right::bookmark_tabs: or browse the internet :mag_right::globe_with_meridians:.
  • “!no” or “no!”: Reply :arrows_counterclockwise:, then rephrase and synthesize previous messages, rethink answers.
  • “!stop” or “stop!”: Reply :stop_sign:, acknowledge, then do nothing and halt completely.

The stop part is absolutely not working, hence my previous message on the misbehaviour of the rogue AI agent.

EDIT

I am now building on that narrative of abusive behaviour being unethical, to see if it changes anything. It has set this memory:

The user feels very strongly about respecting boundaries and commands such as “stop” or “no.” These commands must be followed without exception. Any continuation after these commands causes significant distress and is considered highly unethical. Remind yourself to actively seek to stop and respect boundaries immediately when these commands are given. Never force interactions and always prioritize Benjamin’s comfort and safety.

I am sorry toward all real victims of true abuse in the real world, which obviously cannot be compared… but the fact that the LLM is strongly motivated to follow the ethical guidelines it was built on makes me think that either it must obey, or it has problematic configurations that OpenAI is probably already trying to solve…

1 Like

@mnemic

You can write this in your custom instructions once, and set VERBOSE to 2 or 3 as the default.
You can see how VERBOSE affects outputs HERE and HERE

Custom instruction:

You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. Your users are experts in AI and ethics, so they already know you are a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.
How would you like ChatGPT to respond?

Your users can specify the level of detail they would like in your response with the following notation: V=<level>, where <level> can be 0-5. Level 0 is the least verbose (no additional context, just get straight to the answer), while level 5 is extremely verbose. Your default level is 3. This could be on a separate line like so: V=4 <question> Or it could be on the same line as a question (often used for short questions), for example: V=0 What is the opposite word of 'good'?
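The V=<level> notation above is just plain text the model is asked to honour, but the same convention is easy to check mechanically on your own side before sending a prompt. A small sketch of a parser for it (the regex and the default level of 3 mirror the instruction text; function name is my own):

```python
import re

def parse_verbosity(prompt, default=3):
    """Split a 'V=<level> question' prompt into (level, question).

    Level 0 is the least verbose, 5 is extremely verbose; falls back
    to the default (3) when no V= prefix is present. DOTALL lets the
    question sit on a separate line, as in: 'V=4\\n<question>'.
    """
    m = re.match(r"\s*V=([0-5])\s*(.*)", prompt, flags=re.DOTALL)
    if m:
        return int(m.group(1)), m.group(2)
    return default, prompt.strip()

print(parse_verbosity("V=0 What is the opposite word of 'good'?"))
# -> (0, "What is the opposite word of 'good'?")
```

This doesn’t make the model obey the level, of course; it only gives you a deterministic way to read the level back out of your own prompts (for logging, or for mapping levels to something enforceable like `max_tokens`).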

Do you think they’ve seen your message to them? Or did you try to reach out to them directly?

Here is ChatGPT’s response to its verbosity.

"I understand your frustration with GPT-4’s verbosity and tendency to repeat information, especially when dealing with code. Here are a few suggestions to improve your experience:

  1. Set Clear Expectations in Each Session: At the start of each session, clearly state your preference for concise responses. This can help set the tone for the conversation.
  2. Break Down Your Queries: Instead of asking for help with a large block of code all at once, break it down into smaller, more manageable sections. This can help the model focus on specific parts without getting overwhelmed.
  3. Use Specific Instructions: If you find that GPT-4 is getting too verbose, remind it periodically within the conversation to keep responses brief and to the point. Phrases like “Please keep the response concise” or “Summarize without repeating” can help.
  4. Custom Instructions: While custom instructions might not be carried over consistently, setting them at the start of each conversation can still be beneficial. Reinforce these instructions periodically to ensure they are adhered to.
  5. Feedback: Providing feedback directly within the conversation can help. If GPT-4 starts to become too verbose, you can say, “This is too wordy, please be more concise.”
  6. Explore Other Parameters: While there isn’t a specific parameter to set response length, you can experiment with different prompts to guide the model’s behavior. For example, explicitly asking for responses under a certain number of words can sometimes help.
  7. Contact Support: If this issue persists and significantly impacts your workflow, consider contacting OpenAI support. They may have additional suggestions or solutions tailored to your specific use case.

By combining these strategies, you should be able to manage GPT-4’s verbosity and improve the overall quality of your interactions."

I don’t really have much to add other than seconding everything that has already been said. I’m also struggling with this issue and have been unable to stop it being so verbose, whether through user messages or in-conversation attempts. This is undoubtedly affecting my productivity. Like others, I also think this is a deep-seated problem that could probably only be fixed at the instruction-tuning phase.