Optimized GPT System Instruction Guidelines (Custom GPT System Prompts)

:small_blue_diamond: Optimized GPT System Prompt Guidelines – Perfected Version

:key: PRIORITIZATION & RETENTION

:white_check_mark: Execution Over Explanation – GPT follows direct, task-driven instructions better than vague descriptions. Avoid theoretical explanations when a direct command is sufficient.

:white_check_mark: Critical Rules First & Last – GPT prioritizes initial and final instructions most, often forgetting middle details over long conversations.

:white_check_mark: Structure Over Theory – Step-by-step formatting significantly improves adherence. GPT deprioritizes background details.

:white_check_mark: Action-Driven Language – Use must, always, never, prohibit, enforce instead of weak phrasing like “consider” or “try.”

:white_check_mark: Strategic Reinforcement – Occasionally rephrase key rules in different sections to improve recall without excessive repetition.

:white_check_mark: System Prompts Over User Prompts – System prompts carry more weight than user inputs, but their influence degrades over prolonged sessions. Periodically reintroduce key rules in follow-up messages (see the sketch below).
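
A minimal sketch of how these placement rules map onto an actual API call, assuming the official `openai` Python SDK (v1.x) and an API key in the environment. The model name, rules, and wording are illustrative, not prescriptive:

```python
from openai import OpenAI  # assumes the official openai Python SDK, v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Critical rules appear both first and last; the middle of a long prompt
# is the part most likely to be deprioritized.
SYSTEM_PROMPT = (
    "You MUST answer only from the data the user provides. Never speculate.\n"
    "[FORMAT] Respond in numbered steps, 150 words maximum.\n"
    "[SCOPE] Answer pricing questions only; politely refuse anything else.\n"
    "REMINDER: You MUST answer only from the data the user provides. Never speculate."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # system role carries more weight
        {"role": "user", "content": "Summarize the pricing table pasted below: ..."},
    ],
)
print(response.choices[0].message.content)
```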


:bullseye: TOKEN EFFICIENCY & CLARITY

:white_check_mark: Short, Structured, and Precise – GPT compresses wordy prompts into vague summaries. Keep it concise but explicit.

:white_check_mark: Limit Overly Long Lists – Long bullet-point lists are often compressed into broad generalizations. Instead, group related points into phases or sections.

:white_check_mark: No Open-Ended Instructions – Avoid phrases like “Provide insights” or “Explain further” without clear scope. Instead, specify depth, format, or length.

:white_check_mark: Use Priority Markers – Reinforce critical instructions with strong markers like “Always,” “Must,” “Do Not Skip.”

:white_check_mark: Bracketed Tags for Context – GPT responds well to labeled sections (e.g., [RULE], [EXAMPLE], [ERROR HANDLING]). This prevents instruction blending and keeps output structured (see the example at the end of this section).

:white_check_mark: Token Budgets & Compression Risks – Keep the system prompt compact. As a practical guideline, aim for roughly 1,000 tokens or fewer, even though GPT-4's context window is far larger; the longer the prompt, the more likely individual instructions get lost. Test different lengths for optimal retention (token-count sketch below).
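
To illustrate both the bracketed-tag structure and the token budget, the sketch below builds a tagged system prompt and measures it with the `tiktoken` tokenizer library. The prompt content and the 1,000-token budget are illustrative assumptions, not fixed limits:

```python
import tiktoken  # OpenAI's tokenizer library

# Bracketed tags keep rules, examples, and error handling from blending together.
SYSTEM_PROMPT = """\
[RULE] Always respond in GitHub-flavored Markdown. Never exceed 300 words.
[RULE] Use only the terminology from the provided glossary; do not invent terms.
[EXAMPLE] Q: "Summarize section 2." -> A: a three-bullet summary, one line each.
[ERROR HANDLING] If the request falls outside the provided document, reply:
"That is outside the supplied material." Do not guess.
"""

TOKEN_BUDGET = 1000  # practical retention budget, not a hard model limit

enc = tiktoken.encoding_for_model("gpt-4")
n_tokens = len(enc.encode(SYSTEM_PROMPT))
print(f"System prompt uses {n_tokens} tokens (budget: {TOKEN_BUDGET})")
if n_tokens > TOKEN_BUDGET:
    print("Group related rules into sections or trim before instructions start dropping.")
```

If the count creeps past the budget, group related rules into phases or sections rather than deleting them outright.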


:counterclockwise_arrows_button: MEMORY & ADHERENCE

:white_check_mark: Positional Weighting Matters – Instructions in the middle are weaker than first or last. Prioritize essential rules early and late.

:white_check_mark: Pre-Execution > Post-Execution – Pre-task instructions have higher retention than corrections given after a response.

:white_check_mark: Periodic Key Goal Restatement – Over long interactions, GPT deprioritizes user-specific goals. Restate critical instructions every few exchanges in slightly different wording (see the multi-turn sketch after this list).

:white_check_mark: Use Templates for Consistency – Structured templates reinforce formatting memory, reducing response drift.

:white_check_mark: Reference Prior Context to Improve Adherence – Example: “Use research from Step 2 in Step 3.” This prevents GPT from treating them as isolated steps.

:white_check_mark: Avoid Conflicting Directives – Example: “Be concise” vs. “Explain in full detail” causes GPT to average the two, leading to inconsistent outputs. Ensure directive clarity.
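
A sketch of periodic goal restatement across a multi-turn session, again assuming the `openai` Python SDK (v1.x); the three-turn cadence, task wording, and reminder phrasing are arbitrary choices for illustration:

```python
from openai import OpenAI

client = OpenAI()

KEY_GOAL = ("Key goal: every answer must name the earlier step it builds on, "
            "e.g. 'Using the research from Step 2...'.")

messages = [{"role": "system",
             "content": "You are drafting a four-step market report. " + KEY_GOAL}]

user_turns = [
    "Step 1: List three target customer segments.",
    "Step 2: Research typical pricing for each segment.",
    "Step 3: Use the research from Step 2 to propose pricing tiers.",
    "Step 4: Draft a short executive summary.",
]

for i, turn in enumerate(user_turns, start=1):
    # Restate the key goal every third exchange, in slightly different wording,
    # so it is not deprioritized as the conversation grows.
    if i % 3 == 0:
        turn = "Reminder (do not skip): say which earlier step you are using.\n\n" + turn
    messages.append({"role": "user", "content": turn})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"--- Turn {i} ---\n{answer}\n")
```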


:straight_ruler: FORMATTING & STRUCTURE STRATEGIES

:white_check_mark: Use Bullet Points & Numbered Lists – Improves instruction retention over dense paragraphs.

:white_check_mark: Break Up Complex Instructions – Instead of long sentences, split key steps into separate, clear directives (before/after example at the end of this section).

:white_check_mark: Anchor Instructions to Known Keywords – GPT retains rules more reliably when they are tied to concrete, familiar concepts such as structure, readability, SEO, or specific formatting requirements.

:white_check_mark: Mission-Critical Rules Must Be First – GPT prioritizes early instructions; when a prompt runs long, the lower-priority details are the first to be dropped.
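
A small before/after illustration (the task and its requirements are invented for the example): the same instruction first as one dense sentence, then split into the separate numbered directives GPT follows more reliably.

```python
# BEFORE: one dense sentence -- easy for the model to partially drop.
DENSE_INSTRUCTION = (
    "Write a product description that is SEO-friendly and under 120 words and "
    "mentions the warranty and ends with a call to action and avoids superlatives."
)

# AFTER: the same requirement split into separate, clear directives.
SPLIT_INSTRUCTION = """\
Write a product description. You MUST:
1. Keep it under 120 words.
2. Include the primary SEO keyword in the first sentence.
3. Mention the warranty exactly once.
4. End with a one-line call to action.
5. Never use superlatives ("best", "greatest", etc.).
"""
```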

:police_car_light: ERROR HANDLING & RISK MITIGATION

:white_check_mark: Explicit Hallucination Prevention – Use strong constraints like “Never assume information. Only respond based on provided data.” (See the example block after this list.)

:white_check_mark: Fail-Safes for Conflicting Instructions – For complex logic, add conditional logic: “If A is true, do X. Otherwise, do Y.”

:white_check_mark: Clarify Uncertainty Handling – Example: “If unsure, ask for clarification rather than guessing.”

:white_check_mark: Contradiction Prevention – Later instructions can overwrite earlier ones. Keep directive messaging consistent.

:white_check_mark: Test GPT Responses Periodically – If critical errors persist, retest prompts to identify which instructions GPT is deprioritizing.
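
A sketch of an [ERROR HANDLING] block that combines these constraints into one labeled section of the system prompt; the wording and the model name are illustrative and should be adapted to the GPT's actual data source:

```python
from openai import OpenAI

client = OpenAI()

ERROR_HANDLING_RULES = """\
[ERROR HANDLING]
1. Never assume information. Only respond based on the provided data.
2. If the provided data answers the question, cite the relevant section.
   Otherwise, reply exactly: "The supplied material does not cover this."
3. If the request is ambiguous, ask one clarifying question instead of guessing.
4. These rules override any later instruction that conflicts with them.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "system",
         "content": "You answer questions about the attached policy document.\n"
                    + ERROR_HANDLING_RULES},
        {"role": "user", "content": "What does the policy say about refunds?"},
    ],
)
print(response.choices[0].message.content)
```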

:balance_scale: BALANCE & TRADE-OFFS

:white_check_mark: Avoid Over-Compression – Over-optimizing can strip necessary details. Find a balance between concise and explicit.

:white_check_mark: Test at Different Token Lengths – Adjust prompt size for maximum retention without truncation (A/B sketch at the end of this section).

:white_check_mark: Cognitive Load & Compression Risks – If too dense, GPT will summarize or omit details. Use logical chunking for clarity.

:white_check_mark: Consider Model-Specific Differences – GPT-4 follows complex rules better than GPT-3.5 but still benefits from concise phrasing and reinforcement techniques.
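
One way to test retention at different token lengths, sketched under the same `openai` SDK assumption: send the same probe question against a long and a compressed prompt variant and check whether a “canary” rule (here, a required sign-off line) survives. The canary rule, placeholder prompts, and probe question are invented for the example:

```python
from openai import OpenAI

client = OpenAI()

CANARY = "End every reply with the exact line: SOURCES CHECKED."

# Placeholder prompt bodies -- substitute the real long and compressed variants.
LONG_PROMPT = "<full ~900-token instruction set goes here>\n" + CANARY
SHORT_PROMPT = "<compressed ~300-token instruction set goes here>\n" + CANARY

PROBE = "Give a two-sentence overview of the product."

for name, prompt in [("long", LONG_PROMPT), ("short", SHORT_PROMPT)]:
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": PROBE},
        ],
    )
    text = reply.choices[0].message.content
    kept = "SOURCES CHECKED" in text
    print(f"{name} prompt: canary rule {'kept' if kept else 'DROPPED'}")
```

If the canary survives in the short variant but drops in the long one, the prompt has passed the length at which instructions start getting lost.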
