How to get GPT to reply strictly to the prompt

Does anyone else see excessive content in GPT’s replies, beyond what the prompt strictly requires? We employ multiple LLMs and post-process their outputs with other algorithms. GPT’s replies frequently fail post-processing because they contain extraneous content. When asked, GPT attributes this to various reasons: completeness, safety, “precaution” (GPT’s own word), and other considerations beyond what the prompt asks for. We sometimes even see an evaluation of the prompt itself (e.g. “rare insight”), which is completely out of scope. We use the same prompt for all LLMs.

3 Likes

If you’re looking for JSON or structured output, you can use a format like this:

from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Pydantic schema the model's reply must conform to
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

response = client.responses.parse(
    model="gpt-4o",
    input=[
        {"role": "system", "content": "Extract the event information."},
        {
            "role": "user",
            "content": "Alice and Bob are going to a science fair on Friday.",
        },
    ],
    text_format=CalendarEvent,
)
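With a recent openai Python SDK, the parsed reply is available on the response as a validated CalendarEvent instance, so free-form extra text never reaches your post-processing step:

event = response.output_parsed  # a CalendarEvent instance, not free-form text
print(event.name, event.date, event.participants)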

Alternatively, consider improving your prompt. Make sure it clearly states the Task, Instructions, Expected Result, and an Example.
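As a rough illustration (the wording below is just an example to adapt, not a fixed recipe), such a prompt could look like:

Task: Extract the event information from the text.
Instructions: Reply with JSON only. No explanations, caveats, or commentary.
Expected Result: A JSON object with the keys "name", "date", and "participants".
Example: {"name": "science fair", "date": "Friday", "participants": ["Alice", "Bob"]}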

Smaller models have a higher rate of not following prompt instructions correctly.

1 Like

Thank you for showing me your use case. We ask open-ended questions. Example: compare the Fed’s rate-decision style from 2000–2010 with that from 2010–2020. Part of the answer will somewhat address the prompt, but there will also be information that is off topic. We can prompt GPT into holding really tight answers after maybe 200 (!) prompts, and then it will stick to strict following for maybe 5 turns.

Even being a human (I hope), my brain explodes at the number of precision questions here: what criteria to use, their priority, etc. :wink:

For me, this is a context/prompt-structure issue rather than a model issue. So if you need precise answers, you need to think through the details more carefully, and possibly split the task into multiple steps.

But without some prompt examples, it’s hard to guess.

Detailed and specific prompts yield better results than short, generalized prompts.
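To make that concrete with the Fed example from above (the criteria here are hypothetical placeholders; pick the ones that matter to you):

Compare the Fed's rate-decision style in 2000–2010 vs. 2010–2020.
Criteria: frequency of moves, typical step size, weight given to inflation vs. employment.
Output: one bullet per criterion, two sentences each (one per decade).
Do not add an introduction, a conclusion, or anything outside these bullets.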

1 Like

To ensure GPT replies strictly to your prompt without adding extra info:

  1. Be explicit in your instruction:
    Use clear commands like: “Reply only with ‘Yes’ or ‘No’. Do not explain.”
  2. Limit the output format:
    Say: “Respond in exactly one sentence.” or “Use JSON format only, no comments.”
  3. Use system-level instructions (if possible):
    In custom GPTs or API use, include a system message like: “You must not explain your answers or add any other text.”
  4. Correct and iterate:
    If GPT adds extra info, follow up with: “Too much. Just the answer, no extras.” Repeating helps reinforce the desired behavior.
  5. Use temperature and settings:
    Lowering the temperature (e.g., 0.2) can help GPT stay more focused and less verbose. (A minimal API sketch combining points 3 and 5 follows this list.)
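Here is that sketch; the model name, question, and exact instruction wording are placeholders to adjust for your use case:

from openai import OpenAI

client = OpenAI()

# System message forbids extra text (point 3); a low temperature
# reduces verbosity drift (point 5).
response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.2,
    messages=[
        {
            "role": "system",
            "content": "Answer the user's question and nothing else. "
                       "You must not explain your answers or add any other text.",
        },
        {
            "role": "user",
            "content": "Is Paris the capital of France? Reply only with 'Yes' or 'No'.",
        },
    ],
)
print(response.choices[0].message.content)  # should print just "Yes" if the instructions hold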

Models try to be helpful, so they often add context or clarification unless strongly instructed not to.

3 Likes

FYI

See the reply by @SimonFL in a similar topic.