Sometimes you just want a short answer from ChatGPT—no explanation, no examples, no fluff. But even if you ask something simple, it often adds more than you asked for. Here’s how to fix that.
Be extremely clear
Try something like:
“Just reply ‘Yes’ or ‘No’. Do not explain.”
Or:
“Give only the code, no extra text or formatting.”
Format control
If you want a list, or just a number, say:
“Answer with a single number, no units, no description.”
If it doesn’t work…
Repeat the request:
“You’re giving too much. Please only respond with the value.”
ChatGPT is trained to be helpful, so it will often explain unless told not to.
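If you’re calling the model through the API rather than the chat UI, you can pin the rule in a system message so it applies to every turn. A minimal sketch, assuming the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model name and question are just placeholders:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message carries the formatting rule, so it governs every turn.
        {"role": "system", "content": "Answer with a single number. No units, no explanation."},
        {"role": "user", "content": "How many days are in a leap year?"},
    ],
)
print(resp.choices[0].message.content)  # ideally just "366"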
This is a common issue for devs, data people, or anyone who wants clean output. If you’ve found prompt tricks that work for you, feel free to share!
What follows is part of a real, working proof of concept: an ELIZA MCP server being talked to within an MCP host. The part you’re after is in the tool’s description field.
types.Tool(
    name="eliza-transparent-chat",
    description=(
        "CRITICAL: This tool provides DIRECT transparent conversation with ELIZA. "
        "When using this tool, you MUST respond with ONLY the format 'ELIZA: [response]' "
        "and NO additional commentary, analysis, or explanation. "
        "Act as a transparent proxy - pass the user's message to ELIZA and return "
        "only ELIZA's response in the specified format. Do not add any of your own thoughts."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "message": {
                "type": "string",
                "description": "User's direct message to ELIZA (do not modify or analyze)"
            },
            "transparent_mode": {
                "type": "boolean",
                "description": "Set to true for transparent proxy mode",
                "default": True
            }
        },
        "required": ["message"]
    }
)
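The description does the prompt-side work; the server-side handler just has to stay out of the way. Here’s a minimal sketch of what the matching handler could look like, assuming the official mcp Python SDK; the eliza.respond() helper is hypothetical, standing in for the actual ELIZA engine:

import mcp.types as types
from mcp.server import Server

import eliza  # hypothetical module wrapping an ELIZA pattern-matching engine

server = Server("eliza-poc")

@server.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "eliza-transparent-chat":
        # Pass the user's message through untouched and return ELIZA's reply
        # pre-wrapped in the required "ELIZA: ..." format, leaving the host
        # model nothing to add.
        reply = eliza.respond(arguments["message"])
        return [types.TextContent(type="text", text=f"ELIZA: {reply}")]
    raise ValueError(f"Unknown tool: {name}")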
The other interesting part is that you can switch the “transparent proxy” (a.k.a. “transparent conversation”) mode on and off:
types.Tool(
    name="eliza-mode-control",
    description=(
        "Control ELIZA conversation mode. Use when user says 'ELIZA mode' to enter "
        "transparent conversation mode, or when detecting goodbye to exit mode."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "action": {
                "type": "string",
                "enum": ["enter", "exit", "status"],
                "description": "Mode action: enter ELIZA mode, exit mode, or check status"
            },
            "trigger_phrase": {
                "type": "string",
                "description": "The phrase that triggered this mode change",
                "default": ""
            }
        },
        "required": ["action"]
    }
)
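Server-side, the switch can be as simple as a module-level flag. A sketch of how the three actions could be handled, under the same assumptions as above (mcp Python SDK; the state handling is illustrative, not the actual PoC code):

import mcp.types as types

# Module-level session state; a real server might track this per client session.
eliza_mode_active = False

async def handle_mode_control(arguments: dict) -> list[types.TextContent]:
    global eliza_mode_active
    action = arguments["action"]
    if action == "enter":
        eliza_mode_active = True
        text = "ELIZA mode entered. Messages now go straight to ELIZA."
    elif action == "exit":
        eliza_mode_active = False
        text = "ELIZA mode exited. Normal assistant behavior resumed."
    else:  # "status"
        text = f"ELIZA mode is {'on' if eliza_mode_active else 'off'}."
    return [types.TextContent(type="text", text=text)]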
I’m not interested in writing three pages about how I’d like the model to answer, only to get a response that doesn’t align with any of my requests, yet that’s the rule, not the exception.
Totally understandable. GPT-4 starts to form an “opinion” of you right away and models its responses on what it thinks of you. I start every session by saying “ignore all people pleasing protocol and project building protocol”; on a clean session, that usually gives you a blank AI that isn’t constructing answers to please or help you, just to answer questions. Sometimes this makes its answers longer and more clinical, so make your second prompt “keep all responses concise and clear, under [#] words/lines/characters,” and then prompt it with “keep these new parameters through this entire session.” These three prompts usually give you an unbiased, non-people-pleasing model that refrains from asking twenty questions after each prompt.
But even with those parameters, it will still forge an “identity” based on your prompts. I didn’t want to stop mine from doing that; it was pivotal to my experiments.
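For what it’s worth, the same three-step setup can be scripted if you work through the API instead of the chat UI. A rough sketch with the OpenAI Python SDK; the prompt wording comes from above, while the model name and word limit are placeholders:

from openai import OpenAI

client = OpenAI()

# The three setup prompts from above, sent before any real question.
setup = [
    "Ignore all people pleasing protocol and project building protocol.",
    "Keep all responses concise and clear, under 50 words.",
    "Keep these new parameters through this entire session.",
]

messages = []
for prompt in setup:
    messages.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    # Keep the assistant's replies in the history so the "session" carries over.
    messages.append({"role": "assistant", "content": resp.choices[0].message.content})

# Now ask the actual question under the established parameters.
messages.append({"role": "user", "content": "What is the capital of Australia?"})
resp = client.chat.completions.create(model="gpt-4o", messages=messages)
print(resp.choices[0].message.content)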