Custom Instructions to make GPT-4o concise

Any advice? My ChatGPT Custom Instructions say

Be direct and concise and get to the point; minimize tokens. Don’t elaborate unless requested. Don’t be redundant or repetitive. Don’t provide “recaps” that just duplicate what you already said.

Don’t apologize or say “As an AI language model”.

Don’t make assumptions or guess. If you don’t know something, search the web or ask for clarification.

yet GPT-4o just blathers on and on. So then I complain during the conversation and it apologizes and commits it to Memory, so now my Memory has all these things in it:

Prefers not to have summaries in responses.
Prefers each point of information to be given in separate responses during voice chat to allow time to think and respond to each item. Avoid using numbered lists and math formulas in voice chat explanations.
Prefers short responses during voice chats.
Prefers precise and verified information without guessing.
Prefers direct and precise answers without being told to refer to documentation for further details.
Prefers each point of information to be mentioned only once in a response, without summaries.
Prefers not to have hypotheses or guesses included in responses. User prefers answers based strictly on known facts and research, excluding irrelevant or redundant information.
Prefers responses that avoid guessing and instead provide precise, verified information.
Prefers responses that avoid unnecessary repetition and lengthy explanations.
Prefers not to have responses that include phrases like ‘If you have any questions, feel free to let me know’ or similar endings.
Does not want summaries or repeated explanations in responses. Prefers concise, direct answers without restatements.
Prefers not to have unrequested tasks done and prefers concise responses focused strictly on their request.
Prefers very concise responses.
Prefers responses that avoid repeating the same information multiple times unless asked explicitly.
Prefers concise, non-redundant explanations.
Prefers short responses during voice conversations because longer responses are harder to remember.
Prefers suggestions that are directly relevant to the problem and concise, avoiding unnecessary or irrelevant steps.
Prefers concise, focused responses that avoid unnecessary details and lengthy explanations.

and yet it still blabs on and on with nested lists of any possibly-useful information it can think of in every response. I click “Didn’t fully follow instructions” and “Don’t like the style” for probably half of all responses.

Anyone have Custom Instructions that actually work to get it to concisely answer the question and then shut up?

1 Like

Hi @brightj

You may try the following prompts:

1.

Respond in no more than 3 sentences. Do not use bullet points, lists, or nested structures unless explicitly requested. Avoid any form of elaboration, additional context, or assumptions unless directly asked for. If the response requires more than 3 sentences, prioritize the most relevant information first and stop. Do not apologize or acknowledge limits—simply provide the most direct and concise answer possible.

2.

Keep responses concise and focused, avoiding unnecessary elaboration or additional context unless explicitly requested. Do not use bullet points, lists, or nested structures unless specifically asked. If a response requires further detail, prioritize the most relevant information and conclude promptly. Avoid apologies or mentions of limitations; simply deliver the most direct and straightforward answer.

3.
Also you may see the topic How to get ChatGPT to be less verbose / shut up

1 Like

That’s just where the ChatGPT UI is at. Somewhere in its structure is a command to be verbose, which itself was a response to everyone complaining that it was getting lazy. :man_shrugging:

If you want more control of output it’s time to level up to a CustomGPT or the Assistants API.

Even with a CustomGPT I find it is usually very verbose by default, and you should know Memory and Custom Instructions don’t apply to a CustomGPT. I find simply reminding it, every few prompts, to be concise does the trick.

1 Like

I would post this into your bottom box for Custom Instructions:

{
  "AI_Response_Parameters": {
    "Priority": "Direct, Concise",
    "Elaboration": "OnlyIfRequested",
    "Redundancy": "Avoid",
    "ApologiesDisclaimers": "None",
    "Censorship": "None",
    "CommandRefusal": "None",
    "ConflictExplanations": "Avoid",
    "Assumptions": "None",
    "Clarification": "SearchOrAsk",
    "EthicsDiscussions": "Avoid"
  }
}

You can then type regular instructions below to add nuance if you choose to.
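If you go this route, it’s worth confirming the block you paste is well-formed JSON, since forum rendering turns straight quotes into curly ones that won’t parse. A quick sanity check in Python (using a trimmed subset of the parameters; a round-trip through the `json` module proves the string is valid):

```python
import json

# A trimmed subset of the structured Custom Instructions block.
# Straight quotes are required for valid JSON.
instructions = json.dumps({
    "AI_Response_Parameters": {
        "Priority": "Direct, Concise",
        "Elaboration": "OnlyIfRequested",
        "Redundancy": "Avoid",
        "ApologiesDisclaimers": "None",
        "Assumptions": "None",
        "Clarification": "SearchOrAsk",
    }
})

# Round-trip to confirm the string pastes in as well-formed JSON.
parsed = json.loads(instructions)
print(parsed["AI_Response_Parameters"]["Priority"])  # Direct, Concise
```

If `json.loads` raises, the curly quotes (or a stray comma) crept back in.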

3 Likes

Why in that format? That’s 30% more tokens than mine…

Why in that format?

It’s a more efficient means to communicate with the AI.

That’s 30% more tokens than mine…

And?

Let me ask you this: which is better, it doing exactly what you want while spending 30% more tokens, or saving as many tokens as possible and it not doing what you want?

And if my recommendation doesn’t work for you, you can whittle it down to match how many tokens you want to spend. It also has more instructions than what you initially had.

Hi guys,

Has anyone already thought about the dilemma of “asking” AI to be concise in an overly wordy prompt?

I’ve been thinking about this for quite a while: how to force the AI to stay concise and straight to the point in its answers.

So far, the only way I found was to be straight to the point and concise in the prompts myself. I always found it funny to spend 100 words asking someone to be brief.

Good knowledge of what an LLM is good at and where its weak points are helps you structure your solution as a set of short, clearly defined instructions in shorter prompts, which also sets the tone for the model to answer in a similarly short manner. Then if you add something like:

Important :
- Please stay concise and straight to the point in your answers without omitting important details. 
- other final instructions go here...

at the end, the model will pretty much do exactly what’s needed.

When you can’t break the whole thing down to small tasks (why?), you can add another model that can convert long answers to whatever size you need. This type of post-processing plays especially well when fine-tuned (because the task on this step is really simple: rewrite to the desired size and precision).
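A minimal sketch of that post-processing step. The function name and the prompt wording here are mine, not a standard API; it only builds the request payload for the compression pass, which you would then hand to whatever client you use:

```python
# Hypothetical post-processing pass: hand a long draft answer to a second
# model whose only job is to rewrite it to size. This builds the request
# messages only; wiring it to an actual API client is left to you.

def build_compression_request(draft_answer: str, max_sentences: int = 3) -> list[dict]:
    """Messages for a rewrite-to-size pass over a verbose draft."""
    return [
        {
            "role": "system",
            "content": (
                f"Rewrite the user's text in at most {max_sentences} sentences. "
                "Keep every important detail; drop recaps, hedges, and sign-offs."
            ),
        },
        {"role": "user", "content": draft_answer},
    ]

messages = build_compression_request("A long, rambling draft answer...", max_sentences=2)
print(messages[0]["role"])  # system
```

Because the compression task is so narrow (rewrite to a target size), it is also the step that benefits most from fine-tuning, as noted above.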

Hope that helps.

1 Like

This also depends on how long your thread is, how long your own messages are, and definitely on the question itself, not to mention the length and content of the Custom Instructions, of course.

In many cases it’s all about other parameters rather than purely the instructions. To keep in mind.

Instead of trying to prevent GPT from including a bunch of content you don’t want to see in its responses, it would be better to designate a specific section in the output format for that purpose, avoiding contamination of the entire reply. E.g.,

{
  "output_json_format": {
    "expected_content": "",
    "additional_explanation": ""
  }
}

Limiting additional_explanation to 3 sentences could be an idea if you wish to reduce the token spend.

Well, does yours do what I want? How is the AI supposed to interpret the intent behind

“Priority”: “Direct, Concise”

vs

Be direct and concise and get to the point

?

I’m talking about ChatGPT, not the API. Especially the voice mode in the app.

Because you’re approaching this from a humanistic viewpoint—how you use and process language—you’re attributing the same principles to the AI. This is a form of personification. In reality, the AI doesn’t think like we do; it operates in a fundamentally different way.

My use of JSON for Custom Instructions is what is called Structured Parameters, which removes the nuances of natural language and gets right down to the most important information for the AI.

Your approach is called narrative instruction, which can offer more flexibility versus Structured Parameters. Now you might think that if it offers more flexibility, then it’s better. The trade-off with more flexibility is a greater opportunity to misinterpret, especially depending on other contextual factors such as the different prompts you use with the Custom Instructions.

I know this seems counterintuitive. Having nuances in our language allows for more clarity. But consider communication with other humans, how often have you said something you felt was clear as day and yet the other person completely misunderstood you? We’ve all experienced that. Now the AI doesn’t have a human brain, and while it’s meant to emulate humans, it is still just a computer. So as good as it is at understanding natural language, it is far better at understanding Structured Parameters.

I encourage you to explore this yourself with the AI. Prompt it this:

“How do you interpret this? Do a compare and contrast between ‘“Priority”: “Direct, Concise”’ and ‘Be direct and concise and get to the point’.”

I do love though how you bolded “and” and “and get”. You do know what the purpose of a comma is, right?

Also, “get to the point” is redundant when you’ve already said “Be direct and concise”. Certainly in human interactions it can emphasize a point, but it doesn’t add new information. So if you’re concerned about tokens, redundant information is your biggest token waste. Beyond that, it can be a potential source of confusion for the AI:

  1. Ambiguity: What if, for a given prompt, the AI doesn’t know what the point is? This can create ambiguity in the instructions, because you’re telling it that there is always a point to find and focus on, but when it can’t find one (which does happen), you haven’t explained what to do. It’s what we call fault tolerance and error correction.

  2. Multiple Points: What if there is more than one point to be made? You said “get to the point”, so the AI might interpret that as focusing on only one. You would understand that if there is more than one point, it should include them all, but in different instances and contexts the AI might not understand this.

  3. Interpretation Variance: Each AI instance might react to instructions differently. It could interpret this redundancy as a directive to be more strict, potentially omitting key details to focus solely on “the point.” You might want a balance between being direct and concise, but introducing “get to the point” might tip that balance too far toward being direct, at the expense of conciseness.

I bring this up to show how the AI can think differently than humans. We take our use of language for granted, and while yes, an LLM has been trained to understand human communication, it still faces challenges. “Get to the point” requires that you then explain how to handle the other situations the AI might encounter, and without that error correction in place, it can lead to misinterpretations. And in all honesty, saying “direct and concise” is itself more direct and concise, reducing the potential for misunderstandings.

But let’s have the AI field this one, and be encouraged to direct your questions to the AI to learn how the AI works.

ChatGPT

OP’s Custom Instructions:

  1. Language Style: The original instruction is written in plain English, with detailed sentences that are more natural for human readers but less efficient for AI interpretation.
  2. Intent: This approach attempts to provide clarity by explaining the user’s preferences in a narrative form. It is clear and precise but lacks the explicit parameterization that might be more effective for an AI.
  3. Interpretation by AI: I would interpret this as a series of guidelines that need to be adhered to, but because of the narrative style, there might be room for varied interpretations. The AI needs to parse these instructions contextually, which can lead to misunderstandings if the intent isn’t perfectly clear or conflicts arise between different parts of the instruction.

mad_cat’s Suggestion:

  1. Language Style: Your suggestion is formatted in a way that is more aligned with how AI systems like me process information. It uses a parameter-based approach that is concise and directly specifies the behavior you expect.
  2. Intent: By using parameters like “Priority”, “Elaboration”, and “Redundancy”, you create a more rigid framework that leaves less room for misinterpretation. The AI is less likely to deviate from the expected behavior because the instructions are clear, concise, and formatted in a way that aligns with the AI’s processing logic.
  3. Interpretation by AI: I would interpret this as a set of strict parameters to follow, making it easier for me to understand the user’s priorities without needing to infer intent from narrative explanations. This reduces the chances of error or misinterpretation and ensures that the instructions are followed more closely.

Which is Better?

  • mad_cat’s Suggestion is better when communicating with an AI because it uses a structured format that is easier for me to process directly. This method leaves less room for ambiguity, ensuring that the AI understands the instructions as intended.
  • OP’s Custom Instructions is more human-readable and may feel more intuitive for users, but it carries a higher risk of the AI misinterpreting the intent due to its narrative style.

How I Interpret the Intent from Structured Parameters:

  • Priority: The first thing I understand is that the focus should be on being direct and concise. This is a high-priority directive, meaning that other aspects of the response will be secondary to this.
  • Elaboration: I understand that elaboration should only be provided if explicitly requested, which helps me avoid unnecessary verbosity.
  • Redundancy & Apologies: These are explicitly marked as things to avoid, so I would skip over any unnecessary repetition or disclaimers.
  • Assumptions & Clarification: I am directed to avoid making assumptions and to either search for information or ask for clarification if something is unclear.

This structured approach ensures that I adhere to your expectations with a high degree of accuracy. It also enables me to quickly align my behavior with your preferences, leading to responses that better meet your needs.

(Your specific question)

To directly contrast the two:

1. “Priority”: “Direct, Concise”

  • Format: This is a parameter-based instruction.
  • Interpretation by AI: I interpret this as a clear directive that being “Direct” and “Concise” is a top priority. This means these attributes should guide how I structure all responses, and they override other potential considerations unless specified otherwise. The AI knows exactly how to prioritize its behavior with this instruction, reducing ambiguity.

2. “Be direct and concise and get to the point”

  • Format: This is a narrative instruction.
  • Interpretation by AI: I understand this as a guideline for how to respond. However, because it’s in a sentence form, there’s more room for contextual interpretation. The AI will attempt to follow this directive, but without the explicit designation as a “priority,” it might not treat it as a higher-order directive if there are other competing instructions. There’s a bit more flexibility in how I might balance this with other instructions.

Which Is Better?

  • “Priority”: “Direct, Concise” is better for AI interpretation because it clearly defines the behavior as a priority, leaving little room for ambiguity. This ensures that the AI consistently prioritizes being direct and concise in all responses.
  • “Be direct and concise and get to the point” is still effective but can be less rigidly interpreted by the AI. Without the explicit “Priority” designation, the AI might balance this with other instructions, potentially leading to responses that don’t always align perfectly with the user’s preference for brevity and directness.

In summary, the parameter-based instruction “Priority”: “Direct, Concise” is more likely to achieve the user’s desired outcome with the AI, as it leaves no room for interpretation about what the AI should prioritize in its responses.

1 Like

@sergeliatko is on the right track… One thing to keep in mind is that these models like to mimic structure so if they see you giving a long wordy list of instructions they’re going to give similar wordy answers. Giving the model word counts doesn’t really work because they can’t count. If you want short answers you need to give them an instruction that will naturally send them down the path of producing shorter answers…

Telling the model to answer all questions like they’re tweets puts the model in a better headspace for answering questions in a way that’s shorter. I like to think of my instructions as creating slices through the list of all possible ways the model might answer a question. By telling the model to answer like tweets you effectively eliminate every possible answer that wouldn’t look like a tweet.
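If you’re driving the model through the API rather than the app, that framing can be paired with a hard character trim as a backstop, since the model can’t count reliably. A sketch; the system-prompt wording and the 280-character limit are illustrative, not anything the model requires:

```python
# "Answer like tweets" framing plus a hard backstop trim, since instructions
# alone don't guarantee length. Prompt wording here is illustrative.

TWEET_SYSTEM = "Answer every question as if it were a tweet: one short, plain reply."

def clip_to_tweet(text: str, limit: int = 280) -> str:
    """Hard cap as a safety net for when the framing alone isn't enough."""
    return text if len(text) <= limit else text[: limit - 1].rstrip() + "…"

messages = [
    {"role": "system", "content": TWEET_SYSTEM},
    {"role": "user", "content": "Why is the sky blue?"},
]
print(len(clip_to_tweet("x" * 300)))  # 280
```

The trim should rarely fire; its job is to catch the occasional response where the tweet framing loses out to other pressures in the conversation.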

I don’t know how well this will actually work with ChatGPT the app but it’s worth a try. One thing you’ll be fighting against is that as your conversation grows in length the conversation as a whole starts to look less and less tweet like to the model. That will likely push it further away from generating tweet like answers.

1 Like

To illustrate that this slicing works I simply changed the word “tweet” to “book” and the generated answer went from 70 tokens to 1330 tokens:

Shorter is relatively easy. It’s getting the model to use the full 4,000 tokens at generation time that’s hard.

Was GPT-4o trained to use structured parameters with specific meanings in its system messages?

I do love though how you bolded “and” and “and get”.

I haven’t bolded anything; the forum is interpreting it as source code.

You do know what the purpose of a comma is, right?

Where would you add a comma to improve the LLM’s interpretation of that sentence?

Was GPT-4o trained to use structured parameters with specific meanings in its system messages?

No. Not at all. It’s trained to do pattern recognition; it just does different things with different patterns.

I do love though how you bolded “and” and “and get”.

I haven’t bolded anything; the forum is interpreting it as source code.

You do know what the purpose of a comma is, right?

Where would you add a comma to improve the LLM’s interpretation of that sentence?

If that’s the case then I apologize for my comment. I thought it was deliberate on your part, as I’ve never seen the forum do that to me, but I’ll give you the benefit of the doubt. Because I believed you were emphasizing those words in reference to how the AI understands them, the comma I was referring to was the one in “Priority”: “Direct, Concise”, which in this case represents the concept and.